Operational Excellence

In the near future, we plan to complement our measurements with an additional assessment of service availability. For Australia this year, we present this crowdsourced approach as a case study – it will become part of the overall scoring next year.

An additional important aspect of mobile service quality – beyond performance and measured values – is the actual availability of the mobile networks to their customers. Obviously, even the best-performing network is of limited benefit to its users if it is frequently impaired by outages or disruptions.

Therefore, P3 has been looking into additional methods for the quantitative determination of network availability, collecting data via crowdsourcing. This method must, however, not be confused with the drive tests described on the previous pages. We are convinced that crowdsourcing can significantly enhance benchmarking in the future. The well-proven gathering of measurement values in drive tests and walk tests has clear advantages, as it is conducted in a controlled environment. Crowdsourcing complements this practice by covering time periods and geographical areas beyond the driven routes.

When it comes to diagnosing the sheer availability of the respective mobile networks, a crowdsourcing approach can provide additional insights. Therefore, P3 has developed an app-based crowdsourcing mechanism to assess how a large number of mobile customers experience the availability of their mobile network. We call this aspect “operational excellence”.

However, crowdsourcing will only deliver valid results when it is done right. This includes careful consideration of the statistical relevance of the probes as well as sophisticated post-processing of the gathered results. P3 places the same high demands on the procedures and results of its crowdsourcing analyses as it does on its time-tested gathering and evaluation of measurement values. The underlying methodology of our crowdsourcing approach is described in detail under “Crowdsourcing Methodology” below. In the future, we envision these analyses becoming part of the overall scoring of all our mobile network tests. But as we have only been conducting this method in Australia for a couple of months and have not yet reached statistically firm numbers of users for all tested networks within the months considered, we have decided to present the results as a case study this year. The established observations are therefore not yet included in the score of our network test.

Nonetheless, in next year’s P3 connect Mobile Benchmark in Australia, we expect our crowdsourcing results to become part of the overall test score. The P3 connect Mobile Benchmark will then be the only mobile network test that combines both aspects (drive testing and crowdsourcing) and thus provides the most comprehensive view of network performance.

[Figure: Total crowd score 2017]

Operational Excellence At A Glance

Considering August, September and October of 2017, we did not observe any degradations in the networks of Optus, Telstra or Vodafone. The only identified glitches happened in the night hours between 12 am and 6 am, which are deliberately excluded from our evaluation. A full 30 points for all tested networks should by no means be taken for granted, as a comparison with the investigations already conducted in other countries shows. It is, however, a very pleasing result and a valid reason for Australian operators and customers alike to be happy.

[Figure: Network degradations 2017]

Operational Excellence evaluation produces top crowd scores for all three Australian operators

For this case study, we have taken a closer look at network availability in Australia for the months preceding and including our measurement tours – specifically August, September and October 2017. The analysis of our crowdsourcing data establishes that the Australian networks are extremely stable and reliable. In fact, we could not observe any degradations in any of the three networks during the observation period. Some incidents were recorded, but they happened in the night hours between 12 am and 6 am. As operators typically shift repairs and network upgrades to these night hours, we have decided to exclude them from our analyses.

This result may look a little lackluster at first glance, but it is in fact very good news – confirming that all three Australian networks run very stably over longer periods of time. We have already conducted and published a number of other network tests in late 2017 (see “More Mobile Network Tests”) that include crowd scores either as part of the overall evaluation or as case studies. These examples substantiate that our methodology is successful in detecting significant network degradations and that such degradations have the potential to impact the overall outcome of a network test. In Australia, however, the result at hand would not have changed the overall ranking – this time.


Crowdsourcing Methodology

The mechanisms of our crowdsourcing analyses carefully distinguish actual service degradations from simple losses of network coverage. Also, the planned scoring model considers large-scale network availability as well as a fine-grained measurement of operational excellence.

For the crowdsourcing of operational excellence, P3 considers connectivity reports that are gathered by background diagnosis processes included in a number of popular smartphone apps. While the customer uses one of these apps, a diagnosis report is generated daily and evaluated per hour. As such reports only contain information about the current network availability, each report comprises just a small number of bytes and does not include any personal user data.
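To make the shape of such a report more concrete, the following is a minimal sketch of what an individual connectivity record could look like. The field names and types are purely illustrative assumptions and do not reflect P3's actual report format.

```python
# Illustrative sketch only: field names and types are assumptions,
# not P3's actual connectivity report format.
from dataclasses import dataclass

@dataclass
class ConnectivityReport:
    operator: str                 # network the device is logged into
    timestamp_utc: int            # Unix epoch seconds, later bucketed into hours
    has_data_connectivity: bool   # whether a data connection was available
    radio_technology: str         # e.g. "LTE", "HSPA" or "NONE"

# A record like this carries only a handful of bytes and no personal user data.
example = ConnectivityReport("ExampleOperator", 1506816000, True, "LTE")
```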

Additionally, interested parties can deliberately take part in the data gathering with the dedicated “U get” app (see below).

In order to differentiate network glitches from normal variations in network coverage, we apply a precise definition of “service degradation”: A degradation is an event in which the number of cases of impaired data connectivity significantly exceeds the expectation level. To judge whether an hour of interest is an hour with degraded service, the algorithm looks at a sliding window covering the 168 hours preceding the hour of interest and derives the expectation level from it. This ensures that we only consider actual network service degradations, as opposed to a simple loss of network coverage of the respective smartphone due to prolonged indoor stays or similar reasons.
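As an illustration of this logic, the sketch below models the sliding-window check in Python. The 168-hour window is taken from the description above; how the expectation level is computed (here, the window mean plus a multiple of its standard deviation) and the `sigma` threshold are assumptions made for illustration, not P3's published algorithm.

```python
# Minimal sketch of the sliding-window degradation check, under assumptions:
# `loss_counts` is an hourly series of connectivity-loss reports for one operator,
# and "significantly exceeds the expectation level" is modelled as exceeding the
# window mean by `sigma` standard deviations (an illustrative choice).
from statistics import mean, stdev

WINDOW_HOURS = 168  # one week of hourly buckets preceding the hour of interest

def is_degraded_hour(loss_counts: list[int], hour_index: int, sigma: float = 3.0) -> bool:
    if hour_index < WINDOW_HOURS:
        return False  # not enough history to form an expectation level
    window = loss_counts[hour_index - WINDOW_HOURS:hour_index]
    expectation = mean(window)
    spread = stdev(window)
    return loss_counts[hour_index] > expectation + sigma * spread
```

In the actual evaluation, hours between 12 am and 6 am would additionally be excluded before scoring, as described in the results section above.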

In order to ensure the statistical relevance of this approach, valid assessments must fulfil clearly designated prerequisites: A valid assessment hour requires a predefined number of samples per hour and per operator. The exact number depends on factors like market size and the number of operators.

A valid assessment month must comprise at least 90 per cent valid assessment hours (again per month and per operator). As these requirements were only partly met for the period of this report, we publish the Australian crowdsourcing results as a case study.
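The following sketch illustrates these validity checks. The minimum number of samples per hour depends on the market and is not published here, so the threshold used below is a placeholder assumption.

```python
# Validity checks as described above; MIN_SAMPLES_PER_HOUR is a placeholder
# assumption, since the actual market-dependent threshold is not published.
MIN_SAMPLES_PER_HOUR = 100    # placeholder threshold per hour and per operator
MIN_VALID_HOUR_SHARE = 0.90   # at least 90 per cent of hours must be valid

def is_valid_hour(sample_count: int) -> bool:
    return sample_count >= MIN_SAMPLES_PER_HOUR

def is_valid_month(hourly_sample_counts: list[int]) -> bool:
    valid_hours = sum(1 for n in hourly_sample_counts if is_valid_hour(n))
    return valid_hours / len(hourly_sample_counts) >= MIN_VALID_HOUR_SHARE
```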

Sophisticated scoring model

The relevant KPIs are then based on the number of days on which degradations occurred as well as the total count of hours affected by service degradations. In the scoring model that we plan to apply to the gathered crowdsourcing data, 60 per cent of the available points will be based on the number of days affected by service degradations – thus representing large-scale network availability. The remaining 40 per cent of the total score will be derived from the total count of hours affected by degradations, thus representing a finer-grained measurement of operational excellence.

Each considered month is then represented by a maximum of ten achievable points. The maximum of six points (60 per cent) for the number of affected days is diminished by one point for each day affected by a service degradation: one affected day will cost one point, and so on, until six affected days in a month reduce this part of the score to zero.

The remaining four points are awarded based on the total number of hours affected by degradations. Here, we apply increments of six hours: six hours with degradations will cost one point, twelve hours will cost two points and so on, until a total of 24 affected hours leads to zero points in this part of the score.
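As a worked illustration of this scoring model, the sketch below computes the monthly crowd score from the number of affected days and affected hours. How partial six-hour increments are rounded is not spelled out in the text, so the sketch assumes rounding down.

```python
# Worked sketch of the planned monthly crowd score (maximum 10 points per month).
# Assumption: partial six-hour increments are rounded down (e.g. 5 affected hours
# do not yet cost an hour point).
def monthly_crowd_score(affected_days: int, affected_hours: int) -> int:
    day_points = max(0, 6 - affected_days)          # 60 per cent: large-scale availability
    hour_points = max(0, 4 - affected_hours // 6)   # 40 per cent: finer-grained excellence
    return day_points + hour_points

# Example: 2 affected days with 13 affected hours in total
# -> (6 - 2) + (4 - 13 // 6) = 4 + 2 = 6 points out of 10.
```

With the three months considered here (August to October 2017), the maximum achievable total is 30 points, which matches the full score reported for all three Australian operators above.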


Participate in our crowdsourcing

Everybody who is interested in being part of our “operational excellence” global panel and in obtaining insights into the reliability of the mobile network that his or her smartphone is logged into can most easily participate by installing and using the “U get” app. This app concentrates exclusively on network analyses and is available at uget-app.com. “U get” checks and visualises the current mobile network performance and contributes the results to our crowdsourcing platform. Join the global community of users who understand their personal wireless performance while contributing to the world’s most comprehensive picture of the mobile customer experience.