The methodology of the P3 connect Mobile Review is the result of more than 15 years of testing mobile networks. Today, P3 conducts network tests in more than 80 countries. The methodology was carefully designed to evaluate and objectively compare the performance and service quality of mobile networks from the users’ perspective.

The P3 connect Mobile Review Brazil is based on a sophisticated crowdsourcing approach.


The crowd-based analyses of the Brazilian networks consider crowd data gathered over a period of three months, from December 2018 to February 2019.

For the collection of crowd data, P3 has integrated a background diagnosis process into more than 900 diverse Android apps. If one of these applications is installed on an end-user’s phone and the user agrees, data collection takes place on that device 24/7, 365 days a year. Reports are generated every 15 minutes and sent daily to P3’s cloud servers. Each report amounts to just a small number of bytes per message and does not include any personal user data. Interested parties can deliberately take part in the data gathering with the dedicated “U get” app (see box below).
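The shape of such a report can be sketched as follows. This is a minimal illustration only: the field names and sizes are assumptions for the example, not P3's actual message format; the point is that a 15-minute report carries a handful of anonymized aggregate values and no personal data.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CrowdReport:
    """One anonymized 15-minute measurement report (all field names illustrative)."""
    device_hash: str      # anonymized device identifier, no personal data
    interval_start: str   # ISO-8601 start of the 15-minute window
    network_tech: str     # radio technology seen in the window, e.g. "2G", "3G", "4G"
    samples: int          # number of connectivity samples taken in the window
    max_kbps: int         # best observed downlink rate in the window

report = CrowdReport("a1b2c3", "2019-01-15T10:00:00Z", "4G", 42, 18500)
payload = json.dumps(asdict(report))  # serialized message: well under 200 bytes here
print(payload)
```

Even with generous field names, the serialized message stays tiny, which is what makes always-on background collection feasible.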

Other crowdsourcing solutions have a very technical user base, so their results are typically skewed towards high-end, heavy data users. With the integration into more than 900 diverse apps covering different market segments, P3 generates data that represents the market fairly and evenly, in contrast to classical speed test apps. This unique crowdsourcing technology allows P3 to collect data about real-world customer experience in a truly passive way – wherever and whenever customers use their smartphones. P3’s crowdsourcing data set is the most diverse currently available in the market in terms of locations, geography, times, devices, subscriptions, networks, technologies and smartphone usage patterns, and therefore the most realistic. P3 applies advanced big data analytics to distill the essential information from the bulk data. By analyzing the data according to predefined metrics, P3 can provide information for the optimization of networks and also show whether networks live up to the expectations of their customers.


For the assessment of network coverage, P3 lays a grid of 2 by 2 kilometres over the whole test area. The “evaluation areas” (EA) generated this way are then subdivided into 16 smaller tiles. To ensure statistical relevance, P3 requires a certain number of users and measurement values per operator for each tile and each evaluation area. If one of the operators does not meet these thresholds, this part of the map is excluded from the assessment for the sake of fairness.
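The gridding and threshold rule can be sketched as follows. Note the assumptions: coordinates are treated as planar metres for simplicity, the 16 sub-tiles are taken to be a 4 × 4 split of 500 m squares, and the minimum-user threshold is an invented placeholder, since P3 does not publish the actual figure.

```python
from collections import defaultdict

EA_SIZE_M = 2000        # 2 x 2 km evaluation area
TILE_SIZE_M = 500       # 16 sub-tiles per EA, assumed here to be a 4 x 4 grid of 500 m squares
MIN_USERS_PER_TILE = 3  # illustrative threshold; the real value is not published

def tile_key(x_m, y_m):
    """Map a planar coordinate (metres) to its evaluation area and sub-tile index."""
    ea = (x_m // EA_SIZE_M, y_m // EA_SIZE_M)
    tile = ((x_m % EA_SIZE_M) // TILE_SIZE_M, (y_m % EA_SIZE_M) // TILE_SIZE_M)
    return ea, tile

def valid_tiles(measurements):
    """measurements: iterable of (user_id, x_m, y_m).
    Keep only tiles where enough distinct users contributed data."""
    users = defaultdict(set)
    for user, x, y in measurements:
        users[tile_key(x, y)].add(user)
    return {key for key, seen in users.items() if len(seen) >= MIN_USERS_PER_TILE}

# Three users in one tile (kept), a single user in a distant tile (excluded):
samples = [("u1", 100, 100), ("u2", 150, 420), ("u3", 300, 80), ("u4", 5000, 5000)]
print(valid_tiles(samples))  # → {((0, 0), (0, 0))}
```

Tiles that fail the threshold for any operator are simply dropped from the map, which is the fairness rule described above.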

The so-called “Quality of Coverage” is an additional KPI. It reveals whether voice and data services actually work in the respective evaluation area. P3 determines this because mobile services cannot always be used in every area that nominally provides network reception.

We specify these values for the coverage of voice services (2G, 3G and 4G combined), data services (3G and 4G combined) and 4G only.


Additionally, P3 investigates the data rates that were actually available to each user. For this purpose, we determined the best data rate obtained by each user during the evaluation period and then calculated the average of these values. In addition, we determined the so-called P90 values for the top throughput of each evaluation area as well as for each user’s best throughput.

P90 values specify the threshold in a statistical distribution below which 90 per cent of the gathered values lie – or, equivalently, above which 10 per cent of the values are situated. These values depict how fast the network is under favorable conditions.
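In other words, the P90 value is the 90th percentile of the distribution. A minimal sketch of both metrics, using invented throughput figures purely for illustration:

```python
import numpy as np

# Best throughput (Mbit/s) observed per user during the period (illustrative values)
best_rates = np.array([4.2, 8.1, 12.5, 20.0, 33.7, 41.2, 55.0, 60.3, 72.8, 95.1])

average = best_rates.mean()          # average of the per-user best rates
p90 = np.percentile(best_rates, 90)  # threshold below which 90% of the values lie

print(round(average, 2), round(p90, 2))
```

The average summarizes typical peak performance across users, while the P90 highlights what the network delivers under favorable conditions.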


Another performance indicator considered in the crowd results is data service availability. This parameter indicates the availability of a network and, respectively, the number of outages or service degradations.

Striving to differentiate network glitches from normal variations in network coverage, we apply a precise definition of service degradation: a degradation is an event in which data connectivity is impacted by a number of cases that significantly exceeds the expected level. To judge whether an hour of interest is an hour with degraded service, the algorithm looks at a sliding window covering the 168 hours (one week) before the hour of interest. This ensures that we only consider actual network service degradations, differentiating them from a simple loss of network coverage on the respective smartphone due to prolonged indoor stays or similar reasons.
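A sliding-window check of this kind can be sketched as below. The z-score rule and its threshold are assumptions for the example; the text only states that the hour's impact must "significantly exceed" the level expected from the preceding week, not which statistical test P3 actually applies.

```python
from statistics import mean, stdev

def is_degraded(failures_per_hour, hour_idx, window=168, z=3.0):
    """Flag hour_idx as degraded when its failure count significantly exceeds
    the level expected from the preceding one-week (168 h) sliding window.
    The z-score criterion and z=3.0 are illustrative, not P3's published rule."""
    history = failures_per_hour[max(0, hour_idx - window):hour_idx]
    if len(history) < window:
        return False  # not enough history for a meaningful expectation level
    mu, sigma = mean(history), stdev(history)
    # max(sigma, 1.0) keeps the threshold sane when the history is nearly constant
    return failures_per_hour[hour_idx] > mu + z * max(sigma, 1.0)

# A quiet week (2 failures/hour) followed by a spike of 40 in hour 168:
series = [2] * 168 + [40]
print(is_degraded(series, 168))  # → True
```

Because the baseline is computed per device history, a phone that simply drops out of coverage (e.g. a prolonged indoor stay) does not produce the sharp excess over expectation that this check looks for.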

In order to ensure the statistical relevance of this approach, a valid assessment month must fulfil clearly defined prerequisites: a valid assessment hour consists of a predefined number of samples per hour and per operator, with the exact number depending on factors such as market size and the number of operators. A valid assessment month must then include at least 90 per cent valid assessment hours (again per month and per operator).
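The two-level validity rule reduces to a short check. The per-hour sample minimum below is a placeholder, since the real number varies by market; the 90 per cent share comes from the text.

```python
def is_valid_month(samples_per_hour, min_samples_per_hour=50, min_valid_share=0.90):
    """A month is a valid assessment month when at least 90% of its hours are
    valid, i.e. reach the required sample minimum for the operator.
    min_samples_per_hour=50 is illustrative; the real figure depends on
    market size and the number of operators."""
    valid_hours = sum(1 for n in samples_per_hour if n >= min_samples_per_hour)
    return valid_hours / len(samples_per_hour) >= min_valid_share

# 720 hours (30 days); 680 of them meet the sample threshold -> ~94% valid
hours = [60] * 680 + [10] * 40
print(is_valid_month(hours))  # → True
```

Months (per operator) that fall below the 90 per cent mark are excluded from the availability assessment altogether.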


As this Mobile Review is intended only as an indication of network performance and quality, we abstained from grading the considered operators based on the total number of points achieved. However, the published percentages give a good indication of their achievements.


Participate in our crowdsourcing


Everybody interested in becoming part of our global crowdsourcing panel and obtaining insights into the reliability of the mobile network that his or her smartphone is logged into can most easily participate by installing and using the “U get” app. This app concentrates exclusively on network analysis and is available under

“U get” checks and visualizes the current mobile network performance and contributes the results to our crowdsourcing platform. Join the global community of users who understand their personal wireless performance while contributing to the world’s most comprehensive picture of the mobile customer experience.