
How Does Speedtest Work?

A Detailed Explanation of Ookla’s Testing Methodology

Version 1.6.1

Last updated: April 26, 2021

Ookla Proprietary and Confidential: this document may not be shared or distributed externally.
Table of Contents
1 Changelog

2 Overview of the Speedtest Application


2.1 Application Overview
2.2 Speedtest Applications
2.3 Application Architecture
2.4 History and legacy products
2.4.1 Legacy Applications
2.4.2 HTTP to TCP protocol
2.4.3 Timeline

3 Server Network
3.1 Function of a Server
3.2 Speedtest Server Network
3.3 Speedtest Custom Servers
3.4 Adding a Server to the Speedtest Server Network
3.5 Monitoring

4 Features of a Speedtest
4.1 Overview
4.2 Configuration
4.3 Hostname Resolution
4.4 Automatic Server Selection
4.4.1 Overview
4.4.2 Manual Selection
4.4.3 Generating the Server List
4.4.3.1 By Distance
4.4.3.2 By Performance
4.4.3.3 On-net Server
4.4.4 Performing Latency tests
4.4.5 Selected Server
4.5 IP Addresses
4.6 Speedtest Stages
4.6.1 Command Protocol
4.6.1.1 Overview
4.6.1.2 Commands and Responses
4.6.1.3 Obfuscation and encryption
4.6.1.4 HTTP fallback

4.6.2 Test Result Identifiers and Share URLs
4.6.3 Latency and Jitter
4.6.3.1 Latency
4.6.3.2 Jitter
4.6.4 Download and Upload Bandwidth
4.6.4.1 Overview
4.6.4.2 Single or Multi-Server Download
4.6.4.3 Final Bandwidth Algorithm
4.6.4.4 Intermediate Bandwidth
4.6.4.5 Test Duration
4.6.4.6 Single or Multiple Connections
4.6.4.7 Connection Scaling
4.6.4.8 Stable Stop
4.6.4.9 Web Transport: XHR and WebSockets
4.6.5 Packet Loss
4.6.6 Traceroute
4.7 Client information
4.8 Device information
4.9 Conditions at Test Start and Stop
4.9.1 Location
4.9.2 Network
4.9.3 Telephony
4.9.4 Sensors
4.10 Partial tests
4.11 Background Signal Scanning
4.12 Result Submission

5 Features of a Video Test


5.1 Overview
5.2 Video Player
5.3 Video Content
5.4 Maximum Device Resolution
5.5 Logical Flow
5.6 Adaptive Bitrate Stage
5.7 Fixed Bitrate Stages
5.8 Test failures
5.9 Test results

6 Appendices
6.1 Test Parameters

6.1.1 General Parameters
6.1.2 Automatic Server Selection
6.1.3 Latency
6.1.4 Download
6.1.5 Upload
6.1.6 Stable Stop
6.1.7 Packet Loss
6.1.8 Video Tests

1 Changelog

Version  Date        Change

1.0      2019-01-20  First version.

1.1      2019-01-25  Fixed minor test parameter errors.

1.2      2020-04-28  Added Speedtest CLI. Updated description of Upload behavior to reflect that throughput is measured on the server. Added description of Single and Multiple Connection tests. Added description of the connection approach for the Latency test for web clients. Updated description of Stable Stop behavior. Fixed minor test parameter errors.

1.3      2020-06-30  Updated platform name from OSX to macOS. Added description of the multi-server download stage.

1.4      2020-10-19  Added reference to "goodput" to download and upload tests. Updated server selection parameters.

1.5      2020-12-18  Updated explanation of dynamic connection scaling. Added parameters for mobile multi-server.

1.6      2021-02-17  Updated explanation of dynamic connection scaling. Added video testing.

1.6.1    2021-04-26  Updated history with multi-server release dates. Updated test parameters for video tests.

2 Overview of the Speedtest Application

2.1 Application Overview


Speedtest is an application that measures the latency, jitter, and download and upload bandwidth of the network connection between a device and one or several Speedtest servers. On mobile platforms, it can also measure the performance of streaming video. The Speedtest application is available on numerous platforms, including the web, mobile phones and tablets, desktop computers, TVs, and routers. Where applicable, packet loss is measured, bidirectional traceroutes are executed, and a variety of device and network information is collected.

Measuring Internet performance continues to be meaningful for both consumers and service providers. Each time a user initiates a test, a snapshot of the Internet is captured for the given time, place, device, and network. When aggregated, these individual measurements represent typical Internet performance across locations, times, service providers, and devices.

A test taken with Speedtest measures the characteristics of the communication network
between a device and one or several servers. Each link and each node through which data is
transferred can affect the final measurements. Typically, the link with the most constraining
characteristics (highest latency, lowest bandwidth or highest packet loss) will limit the final
measurements. Therefore, the fewer links between a device and a server, the more relevant the
measurement is to qualifying and understanding the networking capability of a particular device.

The Speedtest applications leverage a vast testing infrastructure with over 11,000 servers in more than 190 countries. By having multiple servers in every country and major city, Speedtest ensures an accurate and meaningful view of networking performance by minimizing the number of data links between device and server.

A typical Speedtest measurement is composed of the following steps:

1. A user starts a Speedtest application. The application retrieves up-to-date configuration elements, such as the list of available servers.
2. The application automatically selects the most suitable test servers, based on the latency, availability, and performance of servers close to the device's location (see 4.4 Automatic Server Selection).
3. When a user initiates a test, the application interacts with one or several servers to execute a series of stages. Each stage measures one aspect of the network connection.
4. The application collects these results and sends them to Ookla.
5. The server also collects results for each test and sends them to Ookla.

2.2 Speedtest Applications
The following table contains the list of available applications:

Application                  Description                                    Location
Speedtest.net                Consumer-facing web application                speedtest.net
Chrome Extension             Consumer-facing Chrome browser plugin          speedtest.net/apps/chrome
Speedtest Custom             Customizable web application                   ookla.com/speedtest-custom
Speedtest for Android        Android application, for phones, tablets,      speedtest.net/apps/android; also available
                             and similar devices                            in the Google Play Domestic, Google Play
                                                                            International, Amazon, Yandex, Samsung,
                                                                            and Huawei App Gallery app stores
Speedtest for iOS            iPhone and iPad                                speedtest.net/apps/ios
Speedtest for macOS          Mac desktops and laptops                       speedtest.net/apps/mac
Speedtest for Windows        Windows desktops and laptops                   speedtest.net/apps/windows; available via
                                                                            the App Store and as a standalone MSI for
                                                                            Windows 7
Speedtest for Apple TV       Apple TV                                       speedtest.net/apps/appletv
Speedtest Powered            Command line application, for commercial       Released privately to partners
                             use, embedded in routers and devices
Speedtest CLI                Command line client, for non-commercial use    speedtest.net/apps/cli
Speedtest for Windows Phone  Older version, published in 2013. Still        Microsoft App Store
                             functional yet no longer promoted or
                             maintained

2.3 Application Architecture
Speedtest clients fall into two categories based on the platform they run on:

● Web: all web applications are implemented in JavaScript and HTML5. The test engine is
implemented in JavaScript and shared by all web applications.
● Native: all native applications (mobile, desktop, TVs and embedded) are implemented in
the native language for that platform. The test engine is implemented in C++ and shared
by all applications.

Certain features are unavailable due to inherent limitations of the platform. The following table
summarizes those limitations:

Application       Application code      Engine code  Packet loss or traceroute  Queued results                   Background
speedtest.net     JavaScript and HTML5  JavaScript   No UDP available           No persistent storage available  Not available
Chrome Extension  JavaScript and HTML5  JavaScript   No UDP available           No persistent storage available  Not available
Speedtest Custom  JavaScript and HTML5  JavaScript   No UDP available           No persistent storage available  Not available
Android           Java                  C++          Yes                        Yes                              Yes
iOS               Objective C, Swift    C++          Yes                        Yes                              Not available
Apple TV          Objective C, Swift    C++          Yes                        Yes                              Not available
macOS             Objective C, Swift    C++          Yes                        Yes                              Not implemented
Windows           C#                    C++          Yes                        Yes                              Not implemented
Powered           C, C++                C++          Packet loss only           Not implemented                  Not implemented
CLI               C, C++                C++          Packet loss only           Not implemented                  Not implemented

When differences between web and native implementations exist, they will be noted in the rest
of the document.

2.4 History and legacy products

2.4.1 Legacy Applications


Earlier versions of Speedtest for the web were built using Flash, the last version of which is
available at legacy.speedtest.net.

An earlier customizable Speedtest product called NetGauge was built using Flash. This has
been replaced by the JavaScript/HTML5 Speedtest Custom solution.

A legacy version of Speedtest for Windows Phone is available at the Windows Phone store.
This version should not be confused with the Windows native client.

2.4.2 HTTP to TCP protocol


The first implementation of Speedtest required only a standard HTTP web server (such as
Apache or IIS) as a test server endpoint. To measure download bandwidth, the application
made HTTP requests to retrieve resources of various sizes. To measure upload, the application
sent data to a CGI resource on the web server, which returned the size of the submitted
payload. Latency was also measured by making an HTTP request to a resource of 10 bytes in
size. The first implementations of the mobile applications (Android, iOS and Windows Phone)
were also implemented using this HTTP protocol.

While it simplified the deployment of test servers, this approach revealed inconsistencies in the
performance of various web servers. It also made it impossible to avoid the overhead of the
HTTP protocol or use protocols other than TCP. For this reason, Ookla developed and
implemented a proprietary Speedtest server, supporting both TCP and UDP, which is now the
default mechanism used by all applications.

The current versions of the mobile and desktop applications still contain support for these earlier HTTP implementations. If a Speedtest Server cannot be reached (for example, if a network has an outbound firewall blocking every port other than 80 and 443), the application will fall back to an HTTP test. The protocol (TCP or HTTP) used for the test is stored along with the result. Server-side support for legacy HTTP testing was also added to our server software.

2.4.3 Timeline
The following dates are important milestones for Speedtest:

April 2006 Ookla is founded

August 2006 Speedtest.net Flash is released

January 2009 Speedtest for iOS is released

November 2009 Speedtest for Android is released

November 2012 Speedtest.net Flash with TCP tests is released

January 2013 Speedtest for Windows Phone released

April 2013 Speedtest.net Flash generates a majority of TCP tests

November 2013 Speedtest Powered is released

April 2014 Speedtest.net JS/HTML5 is released

September 2014 Speedtest for iOS released with C++ engine, generating TCP tests

September 2014 Speedtest for Android released with C++ engine, generating TCP tests

October 2015 Speedtest for macOS is released

October 2015 Speedtest for Apple TV is released

October 2015 Speedtest for Windows is released

December 2015 Speedtest.net JS/HTML5 is released at beta.speedtest.net

March 2017 Speedtest Custom is released

January 2018 Speedtest.net JS/HTML5 is released at www.speedtest.net


Speedtest.net Flash is released at legacy.speedtest.net

October 2019 Speedtest CLI is released

July 2020 Speedtest.net, macOS and Windows use multi-server download

January 2021 Video testing in Speedtest iOS is released

April 2021 Speedtest for Android and iOS use multi-server download

3 Server Network

3.1 Function of a Server


A Speedtest Server (a.k.a. a Speedtest Daemon, an Ookla Daemon or an OoklaServer) is a
running instance of our proprietary software that acts as the test endpoint to any Speedtest
application. On startup, a Speedtest application first obtains a list of active servers and
automatically selects one or several servers (see Automatic Server Selection) so that a user can
simply hit “Go” to initiate a test to an adequate test endpoint. Alternatively, a user can manually
select a server of their choice.

A Speedtest Server has the following capabilities:

● listens on a list of TCP and UDP ports (8080 and 5060, by default)
● responds to a command-response protocol to perform all stages of a test (see Command Protocol)
● submits results of some stages (e.g. traceroute), obtained from the server's perspective
● obtains and renews a TLS certificate
● periodically determines whether a new version of itself is available
● downloads new versions and updates itself

The accuracy and high-quality performance of Speedtest are made possible by the servers around the world that run a Speedtest Server. This robust network of servers enables us to ensure that our users get local readings wherever they are on the planet.

3.2 Speedtest Server Network


The Speedtest Server Network consists of thousands of servers (11,000+) sponsored by third-party organizations in 190 different countries. These servers are used by the Speedtest applications to provide end users with a local testing server so they can get the most accurate results possible.

A Speedtest measures the characteristics of the communication network between a device and
a server. Each link and each node through which data is transferred can affect the final
measurements. Typically, the link with the most constraining characteristics (highest latency,
highest packet loss, lowest bandwidth) will limit the final measurements. Therefore, the fewer
links between a device and a server, the more relevant the measurement is to qualifying and
understanding the networking capability of a particular device. By having multiple servers in
every country and major city, and on a wide variety of networks, Speedtest ensures an accurate and meaningful view of networking performance by minimizing the number of data links between device and server.

Currently the Speedtest Server supports Linux, Windows, FreeBSD, and macOS platforms.

3.3 Speedtest Custom Servers


Speedtest Custom gives customers the option of using servers from the Speedtest Server
Network or of using dedicated servers. Dedicated servers are only available to Speedtest
Custom and not available publicly for use by other Speedtest applications.

3.4 Adding a Server to the Speedtest Server Network


Servers may be added to the Speedtest Server Network via the following methods:

● by following instructions at ookla.com/host
● by selecting "Add Server" at account.ookla.com/servers
● by checking the "allow public use" checkbox when adding a Speedtest Custom server at account.speedtestcustom.com

Server owners with many servers (5 or more) are offered the option of sending a CSV file with
their server information to be bulk inserted.

Servers require high-performance hardware and a high-performing network. Specific requirements are available at https://support.ookla.com/hc/en-us/articles/234578628.

Servers are reviewed for acceptance, typically within 48 hours, and are either enabled or rejected until they pass any outstanding qualification criteria. To qualify for inclusion on the Speedtest Server Network, a server must have valid location information (including place name and specific latitude and longitude), pass all necessary server tests, and display acceptable sponsor information.

In addition to running a Speedtest Server on dedicated hardware, Ookla requires that owners also set up a web server to handle legacy HTTP tests, as described in HTTP to TCP protocol.

3.5 Monitoring
Servers are continuously monitored for functionality and performance.

For functionality, a server is tested for TCP connectivity and correct response for both line
protocol (see Commands and Responses) and WebSocket protocol (see Web transport: XHR
and WebSockets).

For performance, a server is tested by comparing the top 5% of test results over the past 30 days to those of other servers in the region. Both download and upload are included in the analysis, with upload weighted at half the importance of download. Regions are defined by rounding latitude and longitude to integers, which groups servers into sections roughly 100 km on a side. There are some limitations to this grouping methodology which will be addressed in the future.
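
As an illustration of the grouping and weighting described above (the exact scoring formula is internal, so this is only a sketch with hypothetical names):

def region_key(latitude, longitude):
    # rounding to integer degrees groups servers into grid cells
    # roughly 100 km on a side
    return (round(latitude), round(longitude))

def performance_score(download_p95, upload_p95):
    # compare top-5% (95th percentile) throughput over the past 30 days;
    # upload is weighted at half the importance of download
    return download_p95 + 0.5 * upload_p95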

If a server does not meet the necessary requirements, it is marked non-functional, which removes it from the list of servers made available to applications. If a server is disabled for more than 24 hours, the owner will receive an automated email. A follow-up email will be sent when the server is enabled again after an outage. If a server has been disabled for more than 30 days, it may be permanently removed from the Speedtest network and no further emails will be sent to the owner.

4 Features of a Speedtest

4.1 Overview
The following diagram represents the general flow of the application. Each action block is
described in detail in the following chapters:

4.2 Configuration
On startup, the application makes a request to retrieve data elements used to configure and
operate the application.

The request contains a variety of query parameters that identify the application, the device, the
user, the location and the network. For web applications, all configuration parameters except for
servers are retrieved along with the application, when the user loads the web application. The
list of servers is retrieved separately.

The important data elements contained in a configuration response are:

● parameters of test stages: test duration, scaling factors, initial and maximum number of
connections, number of latency tests used for server selection and number of packets
used for packet loss. See Test Parameters for a list of current values
● servers: list of available servers, sorted by distance to the application
● device: IP address, ISP of the device and latitude and longitude of the device (based on
the IP address, as reported by MaxMind's geoip service)

For native applications, the application requests a new configuration upon detecting any changes to the network state (e.g. switching from cellular to Wi-Fi, or enabling the network from airplane mode). Additionally, if the device moves sufficiently far, a new configuration request is initiated. If a test is in progress when a configuration response is received, the new values are used once the test is complete.

4.3 Hostname Resolution


When an application connects to a server, the fully qualified domain name of the server is
resolved to an IP address using the following steps:

● request both A and AAAA records from DNS using POSIX getaddrinfo. We request
addresses using AI_ADDRCONFIG, which returns addresses of a given type (IPv4 or IPv6)
only if the local system has at least one address configured of that type, excluding the
loopback interface.
● iterate through the list of IP addresses, in the order returned by the OS, and attempt to
connect to each one. The application makes 3 connection attempts, to account for
intermittent failures, before moving to the next address. We protect against the scenario
where an IPv6 address is returned but the device is unable to reach it, so that the
application will not start a test with an unreachable IP address. Duplicate IP addresses
that differ only by the protocol (TCP, UDP or raw) are ignored.
● the first address that successfully connects is used for all subsequent operations to that
server

If the DNS lookup response only includes AAAA / IPv6 addresses and the device lacks working IPv6 connectivity, the test will fail. Thus, to ensure the best possible client compatibility, both A and AAAA records should be provided for the host.
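
A minimal sketch of this resolve-and-connect strategy in Python (illustrative only; the native engine implements this logic in C++):

import socket

def resolve_and_connect(host, port):
    # Request both A and AAAA records. AI_ADDRCONFIG filters out address
    # families for which the local system has no configured address
    # (excluding the loopback interface).
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP,
                               flags=socket.AI_ADDRCONFIG)
    tried = set()
    for family, socktype, proto, canonname, sockaddr in infos:
        address = sockaddr[0]
        if address in tried:  # skip duplicates differing only by protocol
            continue
        tried.add(address)
        for attempt in range(3):  # tolerate intermittent failures
            try:
                # the first address that connects is used for the whole test
                return socket.create_connection((address, port), timeout=5)
            except OSError:
                continue
    raise OSError("no reachable address for %s" % host)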

4.4 Automatic Server Selection

4.4.1 Overview
Speedtest applications determine the most favorable servers to test to. The goal is to determine
one or several servers that will yield the most accurate result. This means representing the best
performance, in both latency and bandwidth, of the connection of the device to the Internet.

The automatic server selection process adheres to the following steps (a code sketch follows the list):

1. the configuration step returns a list of at least 10 servers, based on the device location
2. the application performs latency tests to each server
3. the servers are sorted by increasing latency
4. the server with the lowest latency is selected for the test. For a multi-server download test, the 4 servers with the lowest latencies are used (see Single or Multi-Server Download).
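
A minimal sketch of this selection logic, assuming a hypothetical measure_latency() helper that runs the latency test of 4.6.3.1 against one candidate server:

def select_servers(candidates, multi_server=False):
    # candidates: the servers returned by the configuration step
    ranked = sorted(candidates, key=measure_latency)
    # lowest latency wins; a multi-server download uses the best 4
    return ranked[:4] if multi_server else ranked[:1]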

4.4.2 Manual Selection


Users can also manually select a server from the server list if they intend to test the performance of a specific server. In this case, a list of servers is presented, sorted by proximity to the application. Most applications retrieve 100 servers or more to populate the initial server list for manual selection. Additional servers can be selected via search.

4.4.3 Generating the Server List


The server list is generated in three steps:

● servers are sorted by distance to the device
● servers with the same distance are then sorted by performance rank
● if available, an "on-net" server is added to the list

4.4.3.1 By Distance

For every server in our network, Ookla stores and maintains a location latitude and longitude.
This location is estimated, oftentimes simply by using the geographical center of the city in
which the server is known to reside.

If available to the application, the location of the device is sent along with the configuration
request. If precise device location is not available, the configuration service uses MaxMind to
determine the geographic location of the IP address of the request.

Given a location, a list of functional servers is generated, sorted by distance.
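
The document does not specify the exact distance formula; a common choice is great-circle (haversine) distance, sketched here for illustration:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two latitude/longitude points, in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# sort candidate servers by distance to the device location:
# servers.sort(key=lambda s: haversine_km(device_lat, device_lon, s.lat, s.lon))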

4.4.3.2 By Performance

For every server in our network, a performance indicator is computed to represent the likelihood
that the server will perform adequately. This rank is computed daily by taking the following
parameters into account:

● uptime: servers with higher uptime rank higher
● performance: servers with higher download and upload numbers rank higher
● age: older servers rank higher

Since the granularity of the server location data is fairly low, it is not uncommon for many
servers to have the exact same distance to the device. Servers with the same distance are then
sorted by rank.

4.4.3.3 On-net Server

For select servers in our network, we store and maintain the ISP that hosts the server.

The configuration service determines the ISP of a device by using MaxMind.

The service looks for an existing server hosted by the same ISP as the ISP of the device, within
3,000 miles of the device location, and within the same country. If we find such a server,
considered to be "on-net" (on the same network as the device), and it is not already part of the
list, it is added to the list of servers returned to the application.

4.4.4 Performing Latency tests


After receiving the list of servers, the application performs a latency test on the top 10 servers (5
on certain platforms), including the optional on-net server. These measurements are obtained
by performing a standard latency test, identical to that described in Latency.

Some platforms perform these tests in parallel, others in sequence. See Test Parameters for
further details on the differences by platform.

4.4.5 Selected Server


The servers are sorted by latency in increasing order (the server with the lowest latency first).

When the download stage uses a single server, the server with the lowest latency is used.
When the download stage uses multiple servers (see Single or Multi-Server Download), the 4
servers with the lowest latency are used.

All other stages (latency, jitter, upload, packet loss, traceroute) use the server with the lowest
latency. This server is also the server shown to the end user during the test.

4.5 IP Addresses
We collect three different IP addresses associated with an application during the test. We use
the following naming convention to distinguish them:

● Public: the IP address of the device, as seen by the speedtest.net infrastructure


● Test: the IP address of the device, as seen by the Speedtest server
● Private: the IP address associated with the interface of the device

Every IP address is determined using a different method:

● Public: this is the remote address of the request used to submit the result to speedtest.net. When submitting results that have been queued (for partial tests, for example), this IP address might not be the address used during the test
● Test: during the latency stage, the application makes a request to the server to retrieve the remote address of the application, as seen by the server, using the GETIP command (see Commands and Responses)
● Private: during the test, the application uses networking APIs to determine the IP address associated with the interface used for the test. Due to network address translation or proxies, this address is most likely not the public address of the device. This address is collected both at the beginning and at the end of the test (see Network)

Every IP address can be either IPv4, IPv6 or both.

Not all applications currently collect all types of IP addresses. The following table lists the current status of IP address collection, by type and platform:

Application    Public      Test                 Private
Web            IPv4, IPv6  Not yet implemented  Not accessible from a browser
Android        IPv4, IPv6  IPv4, IPv6           IPv4, IPv6
iOS, Apple TV  IPv4, IPv6  IPv4, IPv6           IPv4, IPv6
macOS          IPv4, IPv6  IPv4, IPv6           IPv4, IPv6
Windows        IPv4, IPv6  IPv4, IPv6           IPv4, IPv6
Powered        IPv4, IPv6  IPv4, IPv6           Not yet implemented
CLI            IPv4, IPv6  IPv4, IPv6           Not yet implemented

Using the Public and Test IP addresses, we can determine whether the test was taken over IPv4
or IPv6, reflected in the test_method_a field, available in extracts.

4.6 Speedtest Stages

4.6.1 Command Protocol

4.6.1.1 Overview

A Speedtest Server implements a command-response protocol used to initiate various stages of a test and retrieve associated results. Each command consists of a keyword followed by parameters, followed by a newline character ('\n' or 0x0a). The response can be either a single- or multiple-line response with the relevant data, or a stream of data used to calculate bandwidth.

The simplest way to understand this protocol is to use telnet or nc to an existing server:

$ telnet sea.host.speedtest.net 8080
Trying 172.98.86.2...
Connected to sea.host.speedtest.net.
Escape character is '^]'.
HI
HELLO 2.6 (2.6.2) 2018-08-15.1839.2a845ad
^]
telnet> q
Connection closed.

or using nc:

$ echo HI | nc sea.host.speedtest.net 8080
HELLO 2.6 (2.6.2) 2018-08-15.1839.2a845ad

4.6.1.2 Commands and Responses

The following table provides a list of commands, their general usage, and their responses:

Command                                 Function                                          Response
HI [<guid>]                             Confirm that the service is a Speedtest server,  HELLO <x.y> (<x.y.z>) <build>
                                        obtain the specific version, and optionally
                                        establish the unique test identifier for this
                                        session
GETIP                                   Ask the server for the remote address of the     YOURIP <ip_address>
                                        connected socket, to obtain the IP address of
                                        the device as seen by the server
PING <time>                             Initiate a latency test                           PONG <time>
DOWNLOAD <size>                         Initiate a download data stream                   DOWNLOAD <data stream...>
UPLOAD <size>                           Inform the server to read size bytes of data      OK <size> <time>
UPTIME                                  Retrieve the duration for which the server has    UPTIME <time>
                                        been operating
INITPLOSS                               Reset packet loss counters for this IP address    None
LOSS <uniqueId> <packetCounter> <guid>  Send a UDP packet with a unique ID for the        None
                                        packet, a count of the number of packets sent,
                                        and the session identifier
PLOSS                                   Request packet loss report                        PLOSS <received> <dups>
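
As an illustration, a minimal Python client for this line protocol might look like the sketch below. It speaks the plain, unobfuscated TCP variant; production applications use the obfuscated or encrypted forms described in 4.6.1.3:

import socket
import time

def command(sock, line):
    # send one "KEYWORD params\n" command and read one response line
    sock.sendall((line + "\n").encode("ascii"))
    response = b""
    while not response.endswith(b"\n"):
        chunk = sock.recv(1)
        if not chunk:
            break
        response += chunk
    return response.decode("ascii").strip()

sock = socket.create_connection(("sea.host.speedtest.net", 8080))
print(command(sock, "HI"))                             # HELLO <x.y> (<x.y.z>) <build>
print(command(sock, "GETIP"))                          # YOURIP <ip_address>
print(command(sock, "PING %d" % (time.time() * 1e6)))  # PONG <time>
sock.close()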

4.6.1.3 Obfuscation and encryption

It is crucial that Speedtest measurements be representative of typical user experience on a given network. For this reason, Ookla requires that Speedtest traffic not be favorably prioritized.

As preventative measures, Ookla periodically alters the protocol to ensure that Speedtest traffic is indistinguishable from other application or browser traffic, and monitors aggregate test result data for indications of traffic prioritization. In native applications, the command and control layer is obfuscated and/or encrypted. On the web, the entirety of the test is conducted over HTTPS.

4.6.1.4 HTTP fallback

If a Speedtest Server cannot be reached (for example, if a network has an outbound firewall
blocking every port other than 80 and 443), the application will fallback to an HTTP test. The
protocol (TCP or HTTP) used for the test is stored along with the result.

4.6.2 Test Result Identifiers and Share URLs


A test result is uniquely identified by the combination of the platform and an identifier unique to that platform:

( platform, id )

Historically, test result identifiers have been numerical values. More recent applications have switched to using a universally unique identifier as the result identifier (see UUID). When establishing a connection to a server, the application provides the unique identifier of the testing session using the HI handshake command. All applications will transition to using UUIDs as their primary test identifier in the future.

The following table lists which type of identifier each platform uses:

Platform  Type     Share URL
web       numeric  /result/<id>
Android   numeric  /result/a/<id>
iOS       numeric  /result/i/<id>
macOS     uuid     /result/d/<uuid>
Windows   uuid     /result/d/<uuid>
Powered   numeric  no share URL
CLI       uuid     /result/c/<uuid>

4.6.3 Latency and Jitter


Latency tests (a.k.a. ping tests) measure the round-trip time for communications between the
application and server, over a single TCP connection. Jitter is a measure of the variance of
latency, calculated entirely from the results of multiple latency tests.

4.6.3.1 Latency

The application establishes a TCP connection to the server, and exchanges some initial
handshake traffic to ensure the connection is functional and minimally warmed up.

On tests from web clients, multiple methods are used to establish a connection, using different
combinations of protocol and port. If the connection cannot be established within a so-called
"soft timeout period" (see Test Parameters for the current values), the application will attempt
connecting using the next protocol and port combination, leaving the previous connections
open, until any connection succeeds. The succeeding protocol and port combination is then
used for the Latency stage. The protocol and port combinations used in sequence are:

Order  Protocol    Port
1      WebSockets  8080 (or default server port)
2      WebSockets  5060 (or alternate server port)
3      HTTP        8080 (or default server port)
4      HTTP        5060 (or alternate server port)

A single measurement is taken by measuring the time between sending a message and receiving a response:

● The current time in microseconds is measured, as start.
● The application sends PING <current time in micros> followed by a newline. This message is 22 bytes long.
● The application performs a blocking read on the socket until data is available.
● The server responds with PONG <current time in micros> followed by a newline. This message is 22 bytes long.
● The current time in microseconds is noted, as stop.
● The elapsed time between start and stop is calculated. This is the latency value for this measurement. Note that the current time contained in the PING and PONG messages is not used.

The latency test stage repeats this process N times. The minimum of all measured values is recorded as the final latency. Historically, we have observed that this TCP-based method overreports latency in comparison to a real ICMP ping. It is for this reason we chose to use the minimum rather than an average.

See Test Parameters for the current values of N by platform.
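
A sketch of the full latency stage, reusing the command() helper and imports from the sketch in 4.6.1.2 (names are illustrative):

def ping_latency(sock, n):
    samples = []
    for _ in range(n):
        start = time.monotonic()
        command(sock, "PING %d" % (time.time() * 1e6))  # server replies PONG <time>
        samples.append((time.monotonic() - start) * 1000.0)  # elapsed, in ms
    # the minimum of all samples is recorded as the final latency
    return min(samples), samples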

4.6.3.2 Jitter

Jitter is a measurement of variation in latency. A lower jitter value indicates more consistency in
latency measurements, while a higher jitter value indicates more variance.

We calculate jitter as the sum of the absolute differences between consecutive latency samples, divided by the number of samples minus one.

Given n latency samples in the array latency[]:

jitter = sum( abs ( latency[i] - latency[i-1] ) ) / (n-1)

We chose to implement the methodology selected by a popular tool at the time, namely PingPlotter Pro.
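
The formula translates directly to code, for example:

def jitter(latency):
    # mean absolute difference between consecutive latency samples
    n = len(latency)
    return sum(abs(latency[i] - latency[i - 1]) for i in range(1, n)) / (n - 1)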

4.6.4 Download and Upload Bandwidth

4.6.4.1 Overview

Download and upload tests are both throughput tests.

The application transfers as much data as possible in order to saturate the network connection
and measure its maximum capacity.

The maximum throughput of a connectivity service defines the user experience of many typical
activities performed on the Internet: downloading and uploading files, browsing websites (which
downloads web pages and their associated media), playing video games, (down) streaming
music or video from a provider or live-streaming video or a gaming experience (up) to a
provider.

Historically, the so-called "last mile" connection, whether fixed or mobile broadband, has been
the bottleneck link, most limiting of the available bandwidth. Service providers have
differentiated their products by offering higher and higher maximum throughputs. Even as the
last-mile throughput increases to the point of no longer being the bottleneck for most typical
consumer activities, the maximum throughput remains extremely relevant both as a
troubleshooting tool and as a service-level assurance tool to confirm that the service is
functioning properly. Thus, measuring the maximum throughput has proved most valuable to
both consumers and service providers.

During a download test, the application requests data from one or several servers and measures the number of bytes received per unit of time. During an upload test, the application sends data to a single server, and the server measures and reports back the number of bytes received per unit of time. On web and mobile clients, the user has the option to test using a single TCP connection or using multiple TCP connections. On all other platforms, the application always uses multiple TCP connections. As data is transferred, the application aggregates the number of bytes transferred across all connections.

The Speedtest applications measure "goodput", which is the application-level throughput of a communication, or the number of useful information bits delivered by the network to a certain application per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets.

4.6.4.2 Single or Multi-Server Download

Higher capacity connectivity services (such as fiber or 5G) require higher capacity servers in
order to generate enough traffic to saturate an end-user's connection. Peering relationships or
cross-connectivity between providers can also be a bottleneck. Both of those factors contribute
to making the performance and location of the selected server extremely significant. For
example, service providers care about the on-net or off-net location of the selected server. To
mitigate these issues, the application can now use more than one server in parallel to generate
sufficient traffic to saturate the end-user's connection.

If X is the theoretical bottleneck bandwidth between the end-user and the Internet, when using a
single server, that server is responsible for generating the entirety of the bandwidth X. When N
servers are used, each server is now responsible for generating X/N bandwidth. In essence, the
traffic generation load is distributed across N servers, which mitigates negative impacts from any
one server or path to that server.

In addition, the pool of servers used by multi-server tests is slightly larger than that used by single-server tests. Since we avoid a "winner takes all" approach, we effectively distribute the traffic generation across more servers.

In a single connection test (see Single or Multiple Connections), the server with the lowest
latency is used. When a user manually selects a server (see Manual Selection), the application
uses that server only.

In a multiple connection test, the 4 servers with the lowest latency are used (see Automatic
Server Selection). The total number of initial connections is 4 per server utilized, or 4 for a single
server test and 16 for a multi-server test (see Test Parameters). (macOS, Windows and Web
start with 4 connections total, regardless of the number of servers used. We anticipate those
platforms will eventually align to using 4 connections per server, using the same methodology
as mobile).

All connections are opened at the start of the download stage. Connection scaling uses the overall bandwidth and the total number of connections, regardless of the server. The estimated window size is divided by the number of servers, so that the connection ratio is inversely proportional to the number of servers used. This ensures that each server receives as many connections as it would in a single-server test. (macOS and Windows use an estimated window size of 100 KB, regardless of the number of servers used. We anticipate those platforms will eventually use a window size divided by the number of servers, following the same methodology as mobile.)

When adding new connections, the server to use is determined by using a round-robin
approach. The bandwidth calculation uses the total number of bytes transferred across all
connections and all servers.

All other aspects of the download stage (test duration, stable stop) are identical between single
and multi-server tests. All other stages (latency, jitter, upload, packet loss, traceroute) use a
single server, the server with the lowest latency or the selected server, if the user manually
selected.

4.6.4.3 Final Bandwidth Algorithm

As the test progresses, the average bytes transferred per time is computed every 750
milliseconds (or every 5% of the total test duration), across all connections and all servers. This
produces 20 samples that are used to compute the final bandwidth.

We have implemented two different methods to calculate a final bandwidth number from a
series of 20 samples:

● Super Speed, used by the native applications
● Maximum Sustained Throughput (MST), used by the web applications

Super Speed

The Super Speed algorithm iterates through ranges of consecutive samples of at least 10
samples (or 50% of the total test duration). For each range, it computes the average value of all
samples in that range. The range with the highest value is reported as the final number.

Maximum Sustained Throughput (MST)

Maximum Sustained Throughput (MST) is an algorithm designed to counteract the burstiness in throughput caused by antivirus scanners operating during a Speedtest.

The MST algorithm sorts the samples from highest to lowest value, removes the two highest samples, and calculates the average over the top ⅔ of the remaining samples.
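
Both algorithms are straightforward to express in code. The following sketch is an illustrative reading of the descriptions above, not the production implementation:

def super_speed(samples, min_len=10):
    # best average over any window of at least min_len consecutive samples
    best = 0.0
    for i in range(len(samples) - min_len + 1):
        for j in range(i + min_len, len(samples) + 1):
            window = samples[i:j]
            best = max(best, sum(window) / len(window))
    return best

def mst(samples):
    # sort highest to lowest, drop the two highest samples,
    # then average the top two-thirds of the remaining samples
    ordered = sorted(samples, reverse=True)[2:]
    top = ordered[: (2 * len(ordered)) // 3]
    return sum(top) / len(top)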

4.6.4.4 Intermediate Bandwidth

Throughout the test, the final bandwidth (either Super Speed or MST) is continuously
calculated. However, since the final value might differ substantially from the last measured
sample, the bandwidth displayed to the user (via the gauge or on the graphs) is an interpolation
between the last sample value and the final bandwidth.
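
For example, the displayed value could be a simple linear interpolation (the exact easing used by the applications is not specified in this document):

def displayed_bandwidth(last_sample, final_estimate, progress):
    # progress in [0, 1]: the display shifts from the live sample
    # toward the final (Super Speed / MST) estimate as the test completes
    return (1.0 - progress) * last_sample + progress * final_estimate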

4.6.4.5 Test Duration

Each throughput test has the following configurable parameters:

● Maximum execution time
● Maximum bytes transferred

The test executes until the maximum execution time is reached or the maximum bytes
transferred is reached, or the stable stop algorithm triggers the end of the test, whichever comes
first.

Note that the max bytes limit is currently being phased out, as it causes high-bandwidth tests to
stop before they reach peak bandwidth.

4.6.4.6 Single or Multiple Connections

The web and mobile applications offer the user the option of testing using a single connection or using multiple connections. Testing using a single connection simulates the behavior of a single file download, while testing using multiple connections more closely represents generic Internet usage, such as a user browsing web pages, using multiple applications, or multiple users in a household accessing the Internet simultaneously.

4.6.4.7 Connection Scaling

When testing using multiple connections, the number of connections used during a test scales
as a function of the intermediate bandwidth. This approach is called "connection scaling". When
testing using a single connection, no connection scaling occurs.

All applications start with 4 connections. Every 5% of test completion, the number of desired
connections is calculated by dividing the intermediate bandwidth by a "connection ratio":

number of connections = ( intermediate bandwidth / connection ratio )

When the number of desired connections differs from the number of actual connections, a new connection is opened and used to transfer data. In a multi-server download stage, the next server is selected from the 4 servers with the lowest latency, using a round-robin approach.

Once the test has reached 50% completion, no new connections are added.

Two methods have been implemented to compute the connection ratio:

● fixed: the connection ratio value is set to 6 Mbps. This method is currently used by web
applications.
● dynamic: the connection ratio is calculated once, at the beginning of the throughput
stage, as a function of the latency measured during the latency stage. The calculated
connection ratio is then used as described above to determine the number of
connections as the intermediate bandwidth changes. This method is currently used by all
native applications.

Dynamic Connection Scaling

TCP implements two mechanisms to avoid congestion in the network and by the receiver:

● Congestion Window (CWND)
● Receive Window (RWND)

The Congestion Window (CWND) is the primary mechanism by which a sender regulates its
output (in the case of a Download stage, the sender is the server. In the case of an Upload
stage, the sender is the client). This variable is used by the congestion control algorithm of the
sender to implement what is commonly known as "slow start". It is worth noting that this
variable is maintained entirely by the sender, and is not exchanged over the wire, as opposed to
the Receive Window, which is.

The Receive Window (RWND), is used by the recipient to let the sender (the server, in the case
of a Download) know how much data it can send without overwhelming the recipient. This
variable is typically used by the receiver to convey the amount of storage available in its buffers.
Its value is sent back to the sender and can be inspected through a packet capture.

Both mechanisms effectively limit the amount of data per connection that can be transmitted without having been acknowledged by the receiver. This in-flight limit corresponds to the Bandwidth-Delay Product (BDP): at any point in time in a TCP connection, it is defined by the smaller of the two window values and, in turn, defines the maximum bandwidth that can be measured on that connection. It essentially sets an upper bound on the bandwidth per connection:

bandwidth per connection ≤ min( CWND, RWND ) / latency

Since the maximum bandwidth per connection is inversely proportional to the latency, we can
take that value into account when determining the connection ratio. The higher the latency, the
lower the maximum bandwidth per connection is, and thus the more connections we need to
overcome this limitation.

In practice, the connection ratio is an estimate of the average expected throughput achieved by a single connection over a network, given the measured latency. This formula allows the application to strike a balance: opening too many connections introduces TCP overhead that prevents measuring the full line rate, while opening too few leaves the test limited by the BDP and unable to overcome the effects of latency. We have chosen an estimated window size of 100 KB, which is far lower than most receive windows in today's world. As an example, a T1 line at 1.5 Mbps with 534 ms latency requires a window size of 100 KB in order to measure the full bandwidth (1,500,000 * 0.534 / 8 = 100,125 bytes).

Using an estimated window size of 100 KB, we obtain the following values for the connection ratio:

Latency [ms]  Connection Ratio [Mbps]
1             800
15            53.3
30            26.7
50            16

For multi-server tests on iOS and Android, we have determined through experimentation and
validation that a connection ratio inversely proportional to the number of servers utilized yields
more accurate numbers. We anticipate that all other platforms will align on this methodology.
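
A sketch of the dynamic variant under the stated assumptions: a 100 KB estimated window, a ratio computed once from the measured latency, and, on mobile multi-server tests, a ratio divided by the number of servers (names are illustrative):

def connection_ratio_mbps(latency_ms, servers=1, window_bytes=100_000):
    # per-connection throughput estimate: window / latency, in Mbps
    ratio = (window_bytes * 8) / (latency_ms / 1000.0) / 1e6
    return ratio / servers  # mobile multi-server divides by the server count

def desired_connections(intermediate_mbps, ratio_mbps, initial=4):
    # recomputed every 5% of test completion; connections are only added,
    # and no additions occur after 50% completion
    return max(initial, round(intermediate_mbps / ratio_mbps))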

4.6.4.8 Stable Stop

In order to reduce the duration of the test and avoid consuming unnecessary data, the
application implements an algorithm designed to end the test early if little to no variance in the
measured bandwidth is noticed.

The application measures the variance in the intermediate bandwidth as the test progresses, using Moving Average Convergence Divergence (MACD). Average bandwidth samples of 100 ms in duration are computed. Two exponential moving averages are computed: a "slow" moving average composed of the last 26 samples and a "fast" moving average composed of the last 12 samples. When the difference between the two moving averages stays within 0.3% of the intermediate bandwidth for 26 consecutive samples, the bandwidth measurement is considered stable and the test ends.
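
A sketch of this stopping rule, assuming standard exponential smoothing with alpha = 2 / (n + 1) (the exact smoothing constants are not given in this document):

def stable_stop_index(samples_100ms, threshold=0.003, required=26):
    # returns the sample index at which the test may end early, or None
    ema_fast = ema_slow = samples_100ms[0]
    a_fast, a_slow = 2 / (12 + 1), 2 / (26 + 1)
    stable = 0
    for i, sample in enumerate(samples_100ms[1:], start=1):
        ema_fast += a_fast * (sample - ema_fast)
        ema_slow += a_slow * (sample - ema_slow)
        # stable when the fast/slow difference stays within 0.3% of the
        # intermediate bandwidth (approximated here by the latest sample)
        if abs(ema_fast - ema_slow) <= threshold * sample:
            stable += 1
            if stable >= required:
                return i
        else:
            stable = 0
    return None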

4.6.4.9 Web Transport: XHR and WebSockets

Flash offered unrestricted access to TCP sockets from a browser. With the retirement of Flash for security reasons, TCP sockets are no longer available in the browser. To emulate TCP connections from a browser, the web application uses either XMLHttpRequest (XHR) or WebSockets (WS). Test servers respond identically to commands over either transport method.

After extensive performance testing, we have observed significant differences in browsers' implementations of XHR and WS. Consequently, the web applications use the transport method that performs best. The following table describes the transport type selected, as a function of the test stage and the browser used:

Stage     Browser     Transport
Latency   All         WS
Download  Firefox     WS
          All others  XHR
Upload    IE, Edge    WS
          All others  XHR

4.6.5 Packet Loss


The packet loss stage measures the loss of packets transmitted over a UDP connection. The application sends packets at a regular interval, and the server counts the total number of received packets and the number of duplicate packets received. At the end of the test, the application retrieves those counts from the server.

The "send" phase consists of the following sequence of events:

● Wait until the download stage starts running


● Initiate a handshake with the server over TCP
● Send the server a unique identifier to identify the packet loss session over TCP
● Start sending packets to the server over UDP, each with a small payload of 16 bytes with
a 50ms delay between packets

● Continue sending packets until the download stage is 75% complete or ended due to an
error

The "retrieval" phase consists of retrieving the number of packets from the server.

● Wait until the upload stage starts running


● Send a command to the server to retrieve the number of packets the server received
based on the unique identifier
● Retry 3 times or if the stage is complete with a small delay in between attempts

In some cases, UDP is blocked or not configured on the server end. This results in 0 packets received, or 100% packet loss, and such results are discarded.
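
A sketch of the send phase using the LOSS command from 4.6.1.2. The exact packet framing is not documented here, so the payload below is illustrative:

import socket
import time
import uuid

def send_loss_packets(host, port, count, session_id=None):
    session_id = session_id or str(uuid.uuid4())
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        # LOSS <uniqueId> <packetCounter> <guid>: a per-packet ID, a running
        # count of packets sent, and the packet loss session identifier
        message = "LOSS %d %d %s" % (i, i + 1, session_id)
        sock.sendto(message.encode("ascii"), (host, port))
        time.sleep(0.05)  # 50 ms between packets
    sock.close()
    return session_id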

4.6.6 Traceroute
A traceroute is a method to discover the network path between a source and a destination address. The source application builds IP packets for the destination address but with a fixed time-to-live value, which defines the maximum number of hops the packet can traverse before being discarded. When the time-to-live value for a packet reaches 0, the routing node at that hop should send back a response containing its own address. By progressively increasing the time-to-live values, the sender receives error packets identifying each hop in the path.

The traceroute stage performs 3 distinct traceroutes:

● from the application to the server
● from the application to www.speedtest.net
● from the server to the application

The traceroute stage implements the so-called Paris Traceroute technique, which controls packet header contents to ensure that traffic flowing through load balancers uses the same path for every time-to-live value.

For every time-to-live value, the stage sends 3 packets. For each response packet, the
traceroute provides:

● the IP address of the hop
● the time to reach it
● the return time-to-live value
● the maximum transmission unit (MTU) of the return path

See Traceroute for an example of a traceroute result.

4.7 Client information
The application captures information about the client and submits it with every test result. The data elements captured vary by platform, but typically include unique identifiers.

For a complete list of available fields, see Data Extract Field Definitions.

4.8 Device information


The application captures information about the device and submits it with every test result. The data elements captured vary by platform, but typically include:

● platform (android, ios, windows, etc.)
● device
● model
● product
● manufacturer

On Android devices, the application also captures:

● whether the device is rooted or not
● the fingerprint for this device
● the Android API level

For a complete list of available fields, see Data Extract Field Definitions.

4.9 Conditions at Test Start and Stop


Some data elements are subject to change during the test. The applications capture this data at
the start and end of the test.

For a complete list of available fields, see Data Extract Field Definitions.

4.9.1 Location
The application captures information about the device location and submits it with every result. The data elements captured vary by platform, but typically include:

● latitude
● longitude
● altitude
● accuracy
● timezone
● the source of the location provider

On devices with Google services, the application performs a reverse geo lookup to determine
the address given the location. On Android devices, the application also captures location fixes
from all available location providers.

4.9.2 Network
The application captures information about the device connectivity and submits it with every result. The data elements captured vary by platform, but typically include:

● connection type
● ISP name
● public and private IP
● network interfaces
● Wi-Fi networks

4.9.3 Telephony
The application captures information about the device telephony and submits it with every result. The data elements captured vary by platform, but typically include:

● cell information, including
  ○ the telephony type (CDMA, GSM, LTE, WCDMA)
  ○ the cell identity (MCC, MNC, cell id, lac, ...)
  ○ whether the device is registered with this cell
  ○ the signal strength (asuLevel, dbm, ...)
  ○ the cell location information
● subscription information (SIMs), including
  ○ MCC, MNC
  ○ display and carrier names
  ○ country

4.9.4 Sensors
The application captures information about the device sensors, if available, and submits it with every result. The data elements captured vary by platform and by device, but typically include:

● battery
● gravity
● humidity
● light
● linear acceleration
● magnetic field
● pressure
● significant motion
● step
● temperature

4.10 Partial tests
Some applications can store and submit partial test results. A partial test result can occur if:

● the user interrupts a test prior to completion
● an error occurs during any of the stages of the test

One common error case is that a mobile device loses connectivity during the test. In order to
capture this result, partial tests are queued on the device and submitted when connectivity is
available.

User-interrupted tests contain information about the completed stages only. Errored tests contain information about the error that caused the test to end prematurely.

4.11 Background Signal Scanning
The Android application periodically collects location, network, telephony and sensor data while
the application is running in the background. The user can enable or disable this feature via the
application's settings.

The process is initiated when one of several Android system events occurs. The primary goal is to collect samples only when the conditions of the device have changed significantly. The data elements collected are identical to those described in section Conditions at Test Start and Stop.

4.12 Result Submission
Results are sent to Ookla immediately after a test is completed.

Partial tests and background signal results are queued on the device and sent to Ookla only
when an unmetered Wi-Fi connection is available.

5 Features of a Video Test

5.1 Overview
The video test consists of two sets of stages:

● an Adaptive Bitrate (ABR) stage, during which the video player controls the displayed
resolution, while the application measures the time spent in various resolutions
● a series of Fixed Bitrate stages during which the video player plays content at a fixed
resolution, progressively increasing the resolution until a timeout on the first frame
occurs, re-buffering exceeds 20% on an individual stage, or the highest resolution is
reached.

The purpose of the ABR stage is to measure the performance of playing video in a typical consumer scenario, such as a video playing on a web page or within a social media application. The purpose of the fixed stages is to determine the maximum performance the device and network connection can achieve when pushed to their limits.

5.2 Video Player
The application makes use of a platform-specific SDK from JWPlayer. See Video Tests for the
specific version.

5.3 Video Content
The source video is a proprietary rendered video containing a visually rich synthetic animation.

The following table describes properties of the source video:

Frames per second    30
Encoding             H.264
Duration             16 seconds

The source video has renditions at the following resolutions:

Resolution    Width [pixels]    Height [pixels]    Encoded video bitrate [Mb/s]
240p          426               240                0.6
360p          640               360                1
480p          842               480                2.5
720p          1280              720                5
1080p         1920              1080               8
1440p         2560              1440               16
2160p (4K)    3840              2160               35

Every rendition is chunked into segment files, each containing 2 seconds of video.

The renditions follow Apple's HTTP Live Streaming (HLS) specification, defined in RFC 8216.
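
For instance, the 16-second source at 2-second segments yields 8 segment files per rendition; a quick check of that arithmetic:

SOURCE_DURATION_S = 16   # source video duration (section 5.3)
SEGMENT_DURATION_S = 2   # video duration per chunk file
segments_per_rendition = SOURCE_DURATION_S // SEGMENT_DURATION_S
print(segments_per_rendition)  # -> 8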

All video resources are served from a CDN provider. See Section Video Tests for the current
CDN used.

5.4 Maximum Device Resolution
The application calculates the maximum device resolution supported by the screen container, defined as the highest resolution whose width and height are both smaller than the width and height of the screen container. The maximum resolution is used during the adaptive bitrate stage to determine the maximum resolution percentage.
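
As a concrete illustration of this rule, the sketch below selects the maximum device resolution from the rendition table in section 5.3. The function and variable names are illustrative, not Ookla's actual code.

RENDITIONS = [  # (label, width in pixels, height in pixels), lowest first
    ("240p", 426, 240),
    ("360p", 640, 360),
    ("480p", 842, 480),
    ("720p", 1280, 720),
    ("1080p", 1920, 1080),
    ("1440p", 2560, 1440),
    ("2160p (4K)", 3840, 2160),
]

def max_device_resolution(container_width, container_height):
    """Highest rendition whose width and height both fit inside the container."""
    best = RENDITIONS[0][0]  # assume at least the lowest rendition is usable
    for label, width, height in RENDITIONS:
        if width < container_width and height < container_height:
            best = label
    return best

# Example: a 2436 x 1125 container supports up to 1080p, because the
# 1440p rendition (2560 x 1440) exceeds both container dimensions.
print(max_device_resolution(2436, 1125))  # -> "1080p"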

5.5 Logical Flow
The application implements the logical flow shown in the following diagram:

[Figure: video test logical flow diagram]
5.6 Adaptive Bitrate Stage
The application plays the video in adaptive bitrate mode, during which the video player manages switching from one resolution to another, using the appropriate rendition to display a smooth transition.

The application monitors start and stop events due to stalling as well as any changes in
resolution and calculates the following metrics:

● Time to first frame (timeToFirstFrameMs): the amount of time between the user
tapping play and the first frame being displayed

● Elapsed time (elapsedMs): the amount of time between the first frame being displayed
and the completion of the stage

● Stall time (stallMs): the total amount of time spent re-buffering that stalls playback
after the first frame

● Buffering percentage: the percentage of elapsed time spent re-buffering, defined as:

bufferingPercentage = stallMs / elapsedMs

● Stall ratio (stallRatio): the stall time divided by the amount of time spent playing the
video:

stallRatio = stallMs / ( elapsedMs - stallMs )

● Mean bitrate: the average bitrate observed throughout the stage

● Maximum resolution percentage: the percentage of elapsed time that the video is
displayed in a resolution equal to or higher than the maximum resolution of the device

The application will only advance to the fixed bitrate stages if the maximum resolution
percentage is greater than 80%.
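
To make the metric definitions concrete, the sketch below applies the two formulas above to one stage's raw timings. The field names mirror the metric identifiers in this section; the example numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class StageMetrics:
    time_to_first_frame_ms: int  # timeToFirstFrameMs
    elapsed_ms: int              # elapsedMs: first frame through stage end
    stall_ms: int                # stallMs: total re-buffering after first frame

    @property
    def buffering_percentage(self):
        # bufferingPercentage = stallMs / elapsedMs
        return self.stall_ms / self.elapsed_ms

    @property
    def stall_ratio(self):
        # stallRatio = stallMs / (elapsedMs - stallMs)
        return self.stall_ms / (self.elapsed_ms - self.stall_ms)

# Example: a stage that stalled for 0.8 s over 16.8 s of elapsed time.
stage = StageMetrics(time_to_first_frame_ms=420, elapsed_ms=16800, stall_ms=800)
print(f"{stage.buffering_percentage:.1%}")  # -> 4.8%
print(f"{stage.stall_ratio:.3f}")           # -> 0.050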

5.7 Fixed Bitrate Stages
During the fixed bitrate stages, the application controls the resolution of the player, starting at the lowest resolution and progressively increasing it until a video timeout occurs, the re-buffering percentage exceeds 20%, or the highest resolution is reached.

During each of those stages, the application calculates the same metrics as during the adaptive
bitrate stage.
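
A minimal sketch of this progression is shown below. collect_stage_metrics is a hypothetical helper standing in for playing one rendition at a fixed resolution and measuring it (returning None on a start or playback timeout); the stop conditions follow the description above.

REBUFFER_LIMIT = 0.20  # stop once re-buffering exceeds 20% on a stage

def run_fixed_bitrate_stages(renditions, collect_stage_metrics):
    results = []
    for rendition in renditions:  # ordered lowest resolution first
        metrics = collect_stage_metrics(rendition)
        if metrics is None:       # start or playback timeout ends the test
            break
        results.append(metrics)
        if metrics.buffering_percentage > REBUFFER_LIMIT:
            break                 # excessive re-buffering ends the test
    return results                # exhausting the list means the highest resolution was reached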

5.8 Test failures
A video test may fail as a result of one of the following errors:

● Configuration failure (NoConfig): the application was unable to load the remote
configuration that defines the test parameters (e.g. video playlist, timeouts, etc.)

● User cancellation (UserCancel): the user cancelled a stage by pressing the “X” icon
which appears within the Speedtest UI

● App backgrounded during test (UserBackground): the user backgrounded the application (went to lock screen, switched apps, opened settings pane, etc.), which interrupted playback and ended the test

● Player error during any stage (PlayerError): the video player experienced an error
during playback of the ABR stage

● Video start timeout on ABR stage (StartTimeout): the ABR stage failed to display the
first frame before the start timeout. See Video Tests for current timeout value.

● Video playback timeout on ABR stage (Timeout): the ABR stage failed to play video
for the configured duration before the timeout was reached. See Video Tests for current
timeout value.

Errors are also reported for the ABR stage and each individual fixed stage:

● Player error (PlayerError): the video player experienced an error

● User cancellation (UserCancel): the user cancelled the stage by pressing the “X” icon
which appears within the Speedtest UI

● App backgrounded during test (UserBackground): the user backgrounded the application (went to lock screen, switched apps, opened settings pane, etc.), which interrupted playback and ended the test

● Video start timeout (StartTimeout): the stage failed to display the first frame before
the start timeout. See Video Tests for current timeout value.

● Video playback timeout (Timeout): the stage failed to play video for the configured
duration before the timeout was reached. See Video Tests for current timeout value.
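
The failure identifiers above map naturally onto an enumeration; a hypothetical sketch of how a client might represent them:

from enum import Enum

class VideoTestError(Enum):
    # Identifier strings as listed in this section.
    NO_CONFIG = "NoConfig"
    USER_CANCEL = "UserCancel"
    USER_BACKGROUND = "UserBackground"
    PLAYER_ERROR = "PlayerError"
    START_TIMEOUT = "StartTimeout"
    TIMEOUT = "Timeout"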

5.9 Test results
The metrics displayed at the end of the test are:

● Maximum Resolution: the highest resolution achieved across the ABR stage and all successful fixed stages

● Load Time: the time to first frame of the ABR stage

● Buffering %: the re-buffering percentage of the ABR stage

Additionally, the application determines whether the maximum resolution reached is greater than or equal to the maximum device resolution.

6 Appendices

6.1 Test Parameters
The format of these tables is:

Parameter Name    Default value
                  Platform/condition: value

6.1.1 General Parameters

Network timeout         15 seconds
Throughput algorithm    Super Speed
                        Web: MST
Number of samples       20
                        Web: 30

6.1.2 Automatic Server Selection

Number of servers tested     10 + "on-net"
                             tvOS: 5 + "on-net"
Latency tests in parallel    Yes
Number of latency tests      3
Timeout for latency test     1 second
                             Web: 15 seconds

6.1.3 Latency

Network timeout                      10 seconds
                                     Web: 20 seconds
Network "soft" timeout (web only)    3 seconds
Number of latency tests              10
New connection timeout               3 seconds
                                     Web: 20 seconds
Response timeout                     3 seconds
                                     Web: 20 seconds

6.1.4 Download

Timeout                                        15 seconds
Test duration                                  15 seconds
Max transfer size per connection               unlimited
Number of servers used with specific server    1
Number of servers used with auto-select        4
                                               CLI: 1
Starting connection count                      4 x number of servers
                                               Mobile 3G or older: 2
                                               Web: 4
                                               Windows: 4
                                               macOS: 5
Connection Scaling enabled                     Yes
                                               tvOS: No
Connection scaling factor                      Dynamic
                                               Web: fixed
Estimated Window Size                          100k / number of servers
                                               macOS, Windows: 100k
                                               Web: N/A
Max connection count                           16 x number of servers (i.e. 64 for multi-server)
                                               Windows, macOS: 22
                                               Web: XHR: 24, WS: 32

6.1.5 Upload

Timeout                             15 seconds
Test duration                       15 seconds
Max transfer size per connection    unlimited
Number of servers used              1
Connection count                    2
                                    Web: 4
Connection Scaling Enabled          Yes
Connection scaling factor           Dynamic
                                    Web: fixed
Max connection count                22
                                    Web: XHR: 6, WS: 32

6.1.6 Stable Stop

Stable Stop Enabled                     True
                                        Web, Powered: False
Fast EMA (Exp. Moving Average)          12 samples
Slow EMA (Exp. Moving Average)          26 samples
Stability Window Length                 9 samples
Stability Threshold (a.k.a. "Delta")    420
Sampling Frequency                      0.1 seconds
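
As a rough illustration of how these parameters could interact, the sketch below assumes the stable-stop check compares a fast and a slow exponential moving average of throughput samples and stops once their absolute difference stays below the "Delta" threshold for a full stability window. That stopping rule and the EMA smoothing factor are assumptions for illustration; the authoritative description is in section Stable Stop (4.6.4.8).

FAST_N, SLOW_N = 12, 26   # fast and slow EMA lengths, in samples
WINDOW = 9                # consecutive stable samples required
DELTA = 420               # stability threshold (assumed to bound |fast - slow|)
SAMPLE_PERIOD_S = 0.1     # one throughput sample every 100 ms

def ema_alpha(n):
    return 2.0 / (n + 1)  # conventional EMA smoothing factor (assumption)

def should_stop(samples):
    """True once |fast EMA - slow EMA| stays under DELTA for WINDOW samples."""
    fast = slow = samples[0]
    stable_run = 0
    for s in samples[1:]:
        fast += ema_alpha(FAST_N) * (s - fast)
        slow += ema_alpha(SLOW_N) * (s - slow)
        stable_run = stable_run + 1 if abs(fast - slow) < DELTA else 0
        if stable_run >= WINDOW:
            return True
    return False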

6.1.7 Packet Loss

Enabled                             True
                                    Web: False
Send period                         75% of test duration
Interval between sending packets    50 ms
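
For example, assuming the default 15-second test duration also applies to the packet loss stage (an assumption), these parameters imply roughly 225 probe packets:

test_duration_s = 15.0
send_period_s = 0.75 * test_duration_s  # 75% of test duration = 11.25 s
interval_s = 0.050                      # 50 ms between packets
packets_sent = int(send_period_s / interval_s)
print(packets_sent)                     # -> 225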

6.1.8 Video Tests

JWPlayer Version              3.18.2
CDN                           Cloudflare
ABR Video Duration            16 seconds
ABR Start Timeout             30 seconds
ABR Stage Timeout             26 seconds (16 seconds video + 10 seconds timeout)
Fixed Stage Video Duration    5 seconds
Fixed Stage Start Timeout     5 seconds
Fixed Stage Timeout           10 seconds (5 seconds video + 5 seconds timeout)
