THE METHODOLOGY OF THE P3 CONNECT MOBILE BENCHMARK IS THE RESULT OF MORE THAN 15 YEARS OF TESTING MOBILE NETWORKS. TODAY, NETWORK TESTS ARE CONDUCTED IN MORE THAN 80 COUNTRIES. OUR METHODOLOGY WAS CAREFULLY DESIGNED TO EVALUATE AND OBJECTIVELY COMPARE THE PERFORMANCE AND SERVICE QUALITY OF MOBILE NETWORKS FROM THE USERS’ PERSPECTIVE.
The P3 connect Mobile Benchmark in the UK comprises the results of extensive voice and data drivetests and walktests as well as a sophisticated crowdsourcing approach.
DRIVETESTS AND WALKTESTS
The drivetests and walktests in the UK took place from October 30 to December 3, 2018. All samples were collected during the day, between 8.00 a.m. and 10.00 p.m. The network tests covered inner-city, outer metropolitan and suburban areas. Measurements were also taken in smaller towns and on the connecting highways. The four measurement cars together covered about 3,750 kilometres in the cities, about 1,700 km in towns and about 6,300 km on the roads – resulting in a total of 11,750 kilometres. The combination of test areas was selected to provide representative test results across the UK population. The areas selected for the 2018 test account for approximately 16.6 million people, or roughly 26.2 per cent of the total population of the UK. The drivetests covered 22 cities and 35 towns. Additionally, two teams conducted walktests in 10 cities and on railway journeys between them. The routes are shown on the map here; all visited cities and towns are listed in the box below.
The four drivetest cars as well as the battery-powered backpacks of the walktest teams were equipped with arrays of Samsung Galaxy S8 smartphones for the simultaneous measurement of voice and data services.
One smartphone per operator in each car was used for the voice tests, setting up test calls from one car to another. The walktest team also carried one smartphone per operator for the voice tests. In this case, the smartphones called a stationary counterpart. The audio quality of the calls was evaluated using the HD-voice-capable, ITU-standardised POLQA wideband algorithm. All smartphones used for the voice tests were set to VoLTE preferred mode. In networks or areas where this modern 4G-based voice technology was not available, they fell back to 3G or 2G.
As a new KPI in 2018, we assess the so-called P90 value for call setup times. P90 values specify the threshold in a statistical distribution below which 90 per cent of the gathered values lie. In order to account for typical smartphone use during the voice tests, background data traffic was generated through random injection of small amounts of HTTP traffic. The voice scores account for 34 per cent of the total results.
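The P90 statistic can be illustrated with a short Python sketch. The function name and sample data are illustrative; the article does not specify which percentile convention P3 uses, so the common nearest-rank method is assumed here:

```python
import math

def p90(values):
    """Return the threshold below which 90 per cent of the values lie.

    Uses the nearest-rank percentile method; P3's exact convention
    is not described in the article.
    """
    ordered = sorted(values)
    rank = math.ceil(0.9 * len(ordered))  # 1-based rank of the P90 value
    return ordered[rank - 1]

# Example: call setup times in seconds (illustrative data).
setup_times = [0.9, 1.0, 1.1, 1.2, 1.2, 1.3, 1.4, 1.5, 1.8, 3.2]
print(p90(setup_times))  # 1.8 – nine of the ten values lie at or below it
```

For call setup times a lower P90 is better, since it bounds the waiting time that 90 per cent of calls stay under.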
Data performance was measured using three more Galaxy S8 smartphones in each car or per walktest team – one per operator. Their radio access technology was set to LTE preferred mode.
For the web tests, they accessed web pages according to the widely recognised Alexa ranking. In addition, the static Kepler test web page as specified by ETSI (European Telecommunications Standards Institute) was used. In order to test the data service performance, files of 3 MB and 1 MB for download and upload were transferred from or to a test server located on the Internet. In addition, the peak data performance was tested in uplink and downlink directions by assessing the amount of data transferred within a seven-second time period. The evaluation of YouTube playback takes into account that YouTube dynamically adapts the video resolution to the available bandwidth. So, in addition to success ratios, start times and playouts without interruptions, the measurements also determined the average video resolution.
All the tests were conducted with the best-performing mobile plan available from each operator. Data scores account for 51 per cent of the total results.
Additionally, P3 conducted crowd-based analyses of the UK networks, which contribute 15 per cent to the end result. They are based on data gathered in August, September and October 2018. For the collection of crowd data, P3 has integrated a background diagnosis process into more than 800 diverse Android apps. If one of these applications is installed on an end-user's phone and the user authorises the background analysis, data collection takes place 24/7, 365 days a year. Reports are generated every quarter of an hour and sent daily to P3's cloud servers. Such reports contain just a small number of bytes per message and do not include any personal user data. Interested parties can deliberately take part in the data gathering with the specific "U get" app (see box below on the right).
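The three category weightings stated in this methodology – voice 34 per cent, data 51 per cent and crowdsourcing 15 per cent – combine into the total result as a weighted sum. A minimal sketch, assuming each category is already expressed as a score out of 100 (the function and sample scores are illustrative, not P3's internal tooling):

```python
# Weightings as stated in the article: voice 34 %, data 51 %, crowd 15 %.
WEIGHTS = {"voice": 0.34, "data": 0.51, "crowd": 0.15}

def total_score(voice, data, crowd):
    """Weighted sum of the three category scores.

    A simplified model of the stated weightings; P3's internal
    aggregation may differ in detail.
    """
    parts = {"voice": voice, "data": data, "crowd": crowd}
    return sum(WEIGHTS[k] * parts[k] for k in WEIGHTS)

# Illustrative category scores out of 100.
print(total_score(voice=90, data=85, crowd=80))  # 85.95
```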
For the assessment of network coverage, P3 lays a grid of 2 by 2 kilometres over the whole test area. The "evaluation areas" generated this way are then sub-divided into 16 smaller tiles. To ensure statistical relevance, P3 requires a certain number of users and measurement values per operator for each tile and each evaluation area. If these thresholds are not met by one of the operators, this part of the map is not considered in the assessment, for the sake of fairness. "Quality of Coverage" reveals whether voice and data services actually work in an evaluation area. P3 does this because mobile services cannot actually be used in every area that allegedly provides network reception. We specify these values for the coverage of voice services (3G and 4G combined), data (3G and 4G combined) and 4G only.
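The tiling described above can be sketched in a few lines of Python. This is a simplified planar model on hypothetical kilometre coordinates; P3's real grid is laid over geographic coordinates and the projection used is not described in the article:

```python
AREA_KM = 2.0        # evaluation areas measure 2 x 2 km
TILES_PER_SIDE = 4   # 16 tiles per area -> a 4 x 4 grid of 500 m tiles

def locate(x_km, y_km):
    """Map a planar coordinate (in km) to its evaluation area and tile.

    Returns ((area_col, area_row), (tile_col, tile_row)), both 0-based.
    """
    area = (int(x_km // AREA_KM), int(y_km // AREA_KM))
    tile_size = AREA_KM / TILES_PER_SIDE  # 0.5 km per tile
    tile = (int((x_km % AREA_KM) // tile_size),
            int((y_km % AREA_KM) // tile_size))
    return area, tile

# A point 3.2 km east and 0.7 km north of the grid origin:
print(locate(3.2, 0.7))  # ((1, 0), (2, 1))
```

Each measurement sample would be binned this way, and an area only enters the rating once every operator's per-tile and per-area sample thresholds are met.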
Additionally, P3 investigates the data rates that were actually available to each user. For this purpose, we determine the best data rate obtained by each user during the evaluation period and then calculate the average of these values. In addition, we determine the so-called P90 values for the top throughput of each evaluation area as well as of each user's best throughput. P90 values specify the threshold in a statistical distribution below which 90 per cent of the gathered values lie, and depict how fast the network is under favourable conditions.
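The first of these aggregations – the average of each user's best observed data rate – can be sketched as follows. The data structure, field names and units are illustrative assumptions, not P3's actual pipeline:

```python
def best_rate_average(samples_by_user):
    """Average of each user's best observed data rate (Mbit/s).

    samples_by_user maps a user ID to that user's throughput samples
    collected during the evaluation period.
    """
    best_rates = [max(samples) for samples in samples_by_user.values()]
    return sum(best_rates) / len(best_rates)

# Illustrative throughput samples in Mbit/s.
samples = {
    "user_a": [12.0, 48.5, 30.1],
    "user_b": [5.2, 9.8],
    "user_c": [60.0, 22.4, 41.7],
}
print(best_rate_average(samples))  # (48.5 + 9.8 + 60.0) / 3
```

Taking each user's best sample first, rather than pooling all samples, keeps heavy users from dominating the average.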
DATA SERVICE AVAILABILITY
Formerly called "operational excellence", this parameter indicates the number of outages or service degradations – events in which data connectivity is impacted in a number of cases that significantly exceeds the expected level. To judge this, the algorithm looks at a sliding window around the hour of interest. This ensures that we only consider actual degradations as opposed to a simple loss of network coverage due to prolonged indoor stays or similar reasons. To ensure statistical relevance, each operator must have sufficient statistics for trend and noise analyses for each evaluated hour. The exact number depends on the market size and number of operators. A valid assessment month must comprise at least 90 per cent valid assessment hours. Deviating from the other crowd score elements, Data Service Availability is rated based on a six-month observation period – in this case from May to October 2018.
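The sliding-window idea can be illustrated with a much-simplified sketch: an hour is flagged only when its connectivity success ratio falls well below the level of the surrounding hours, so an isolated dip stands out while a uniformly low baseline (e.g. prolonged indoor stays) does not. The window size and threshold are assumptions; P3's actual trend and noise analyses are not published:

```python
def degraded_hours(success_ratio, window=3, drop=0.2):
    """Flag hours whose success ratio falls well below their neighbours.

    success_ratio: one connectivity success ratio per hour (0.0 to 1.0).
    window: number of hours considered on each side of the hour of interest.
    drop: how far below the neighbours' average counts as a degradation.
    """
    flagged = []
    for i, value in enumerate(success_ratio):
        lo = max(0, i - window)
        hi = min(len(success_ratio), i + window + 1)
        neighbours = success_ratio[lo:i] + success_ratio[i + 1:hi]
        baseline = sum(neighbours) / len(neighbours)
        if value < baseline - drop:
            flagged.append(i)
    return flagged

# Hourly success ratios with a clear dip at hour 4 (illustrative data).
ratios = [0.98, 0.97, 0.99, 0.98, 0.60, 0.97, 0.98, 0.99]
print(degraded_hours(ratios))  # [4]
```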
PARTICIPATE IN OUR CROWDSOURCING
Everybody interested in being a part of our global crowdsourcing panel and obtaining insights into the reliability of the mobile network that his or her smartphone is logged into can most easily participate by installing and using the "U get" app. This app concentrates exclusively on network analyses and is available at http://uget-app.com.
"U get" checks and visualises the current mobile network performance and contributes the results to our crowdsourcing platform. Join the global community of users who understand their personal wireless performance, while contributing to the world's most comprehensive picture of mobile customer experience.
EE IS THE WINNER, WHILST VODAFONE ACHIEVED A STRONG SECOND PLACE. THREE SHOWED SOME SPECIFIC STRENGTHS BUT FELL BEHIND IN THE OVERALL SCORING; LIKE VODAFONE, IT LOST SOME GROUND IN THE DATA SCORES. O2 PERFORMED SIMILARLY TO LAST YEAR.
EE confirms its top rank from previous years and wins the P3 connect Mobile Benchmark in the UK for the fifth time in a row (in 2016, EE and Vodafone won together). Compared to last year's results, EE improves significantly in the voice discipline, but achieves a slightly decreased data score. In the now extended crowd analyses, EE is also the winner. Vodafone achieves a good second rank, showing especially good results in the big cities and on the roads. In comparison to the previous year, Vodafone managed to improve in the voice results, but lost some ground in the data scores. The same is also true for Three, which fell back to the grade "satisfactory". A remarkable strength of this smallest UK operator is that it achieved the best data score in towns. Like all UK operators, Three was also able to increase its performance on the roads for both voice and data. O2 ranks last. Showing more or less the same performance as last year costs some points, because the requirements of our testing methodology increase year over year. In the crowdsourced assessment, O2 achieved the second-best results, one point ahead of Vodafone. However, our newly added railway measurements reveal big challenges for all UK operators.