| December 16, 2025

Loaded Latency and L4S: The Next Frontier for Network Performance

Access networks have gotten faster and more capable in recent years, thanks to improvements in fiber, DOCSIS, and 5G. These upgrades have pushed peak speeds higher, but throughput is only part of the experience. As more applications depend on real-time responsiveness, latency—especially under load—will play an increasingly important role in shaping overall user experience.

Many applications—cloud gaming, video conferencing, XR, and interactive voice and video AI models—depend on latency that stays low and stable. A network can perform well when traffic is light, with latency close to idle, but those conditions rarely reflect real-world network usage. Once background activity begins, packets start waiting in buffers and latency increases, even on fast connections. Loaded latency measures that effect directly by testing delays while the connection is under heavy use, and Ookla captures this behavior through its standard testing methodology.

The difference between idle latency and latency under load is becoming a defining factor for modern networks. With more network activity shifting toward real-time and interactive use cases, operators are focusing on how their networks perform during busy moments—not just how fast they appear under light conditions.

Low Latency, Low Loss, Scalable Throughput (L4S) is one of the most promising ways to keep latency stable under load as networks carry more real-time traffic. Operators enable L4S in the network, and applications benefit when their congestion-control algorithms understand its early congestion signals and adjust before users notice a delay. This article looks at why loaded latency matters, how L4S works, and what it enables across today’s networks. For a deeper discussion on loaded latency, check out our full webinar on demand.

Why Loaded Latency Defines Real-World Experience

A growing share of user activity now depends on latency staying low and stable, not just on fast speeds. Even small delays can interrupt timing-sensitive tasks, and those delays typically appear only when the network becomes busy. Loaded latency metrics capture this behavior by showing how performance changes under everyday multitasking—not just in controlled, low-traffic scenarios.

Measuring loaded latency also reveals behaviors that don’t appear in tests where the connection isn’t carrying much traffic. When large uploads or downloads begin, packets start accumulating in buffers and competing for scheduling, and delays can rise even though the connection may look fast under simple tests. Latency tests that measure only idle conditions rarely capture this difference, which is why a connection can appear fine in a quick check but struggle once everyday background activity kicks in.
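
A rough way to see this effect on your own connection is to measure round-trip time while a bulk transfer runs in the background. The sketch below is a minimal illustration of that idea, not Ookla’s methodology; the probe host and download URL are placeholders you would replace with your own.

```python
# Minimal sketch: compare idle vs. loaded round-trip time.
# Assumptions (not from the article): "example.com" stands in for any reachable
# host, and BULK_URL for any large file you are allowed to download.
import socket
import statistics
import threading
import time
import urllib.request

PROBE_HOST = ("example.com", 443)                 # placeholder probe target
BULK_URL = "https://example.com/large-file.bin"   # placeholder bulk download

def rtt_ms() -> float:
    """Approximate round-trip time as the time to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection(PROBE_HOST, timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def probe(n: int = 10) -> float:
    """Median of n handshake probes."""
    return statistics.median(rtt_ms() for _ in range(n))

def saturate(stop: threading.Event) -> None:
    """Keep the downlink busy until asked to stop."""
    while not stop.is_set():
        with urllib.request.urlopen(BULK_URL) as resp:
            while not stop.is_set() and resp.read(65536):
                pass

idle = probe()
stop = threading.Event()
threading.Thread(target=saturate, args=(stop,), daemon=True).start()
time.sleep(2)                  # let queues build before probing again
loaded = probe()
stop.set()
print(f"idle ~ {idle:.1f} ms, loaded ~ {loaded:.1f} ms, bloat ~ {loaded - idle:.1f} ms")
```

On a connection with heavy buffering, the loaded figure can sit well above the idle one, which is exactly the gap loaded-latency testing is designed to expose.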

The rise of real-time and interactive applications has made latency far more noticeable to users. Networks built primarily around throughput do not always maintain low delays once competing traffic appears, which is pushing operators to focus more on performance during busy moments—not just during minimal-traffic conditions.

To measure your own network’s loaded latency, simply run a Speedtest.

How L4S Keeps Latency Low Under Load

Interactive and real-time applications place tighter demands on networks than activities like streaming or web browsing. These applications need latency to stay low and consistent, even when background traffic ramps up. Typical congestion control isn’t designed for that level of responsiveness because it waits for packet loss before signaling a slowdown—by the time loss occurs, users have already seen a frozen frame, lag spike, or audio glitch.

Low Latency, Low Loss, Scalable Throughput (L4S) is a network technology that solves that problem by signaling congestion early, before queues build and delays become noticeable. It uses explicit congestion notification (ECN) marks instead of relying on packet loss, giving applications a near-instant signal that they should adjust their sending rate.

This early warning system keeps queues short and delays close to the network’s idle baseline, even when the connection is fully utilized. In practice, this means:

  • Latency stays low under load
  • Minimal packet loss or retransmissions
  • Smoother performance for mixed real-time and background traffic
  • Applicability across cable, fiber, mobile, and fixed wireless access (FWA) networks

Another key advantage is that L4S doesn’t require new towers, radios, or major hardware overhauls. Operators enable it through software updates to network elements, and applications add support through ECN-aware congestion control. Once both sides are in place, the latency improvements follow without any new infrastructure.
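
For a sense of what application-side support can look like, here is a minimal sketch on a Linux/macOS UDP socket: outgoing packets are marked ECT(1) so the network knows the flow understands L4S signaling, and the sender trims its rate in proportion to the congestion-experienced (CE) marks it hears about. This is an illustrative sketch, not any specific product’s implementation; real applications receive CE feedback through their own protocols, such as RTCP in WebRTC.

```python
# Illustrative sketch only: mark UDP traffic as L4S-capable and stub out a
# scalable congestion response. Works on Linux/macOS sockets via IP_TOS.
import socket

ECT_1 = 0x01  # ECN codepoint 0b01 in the low two bits of the TOS/traffic-class byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)  # mark packets ECT(1)
sock.sendto(b"media frame", ("198.51.100.7", 5004))       # placeholder peer address

def on_congestion_feedback(ce_ratio: float, rate_bps: float) -> float:
    """Scalable-response sketch: trim the sending rate in proportion to the
    fraction of packets the network marked CE, instead of waiting for loss."""
    return rate_bps * (1 - ce_ratio / 2)
```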

Why Operators Are Prioritizing Low-Latency Architectures

Operators are focusing more on latency than they used to, because it’s now affecting the parts of their business that matter most: support costs, customer satisfaction, and competitive differentiation. When delays spike during busy moments, subscribers interpret it as “the network isn’t working,” even when the underlying issue is momentary latency, not overall capacity. That perception directly affects retention and brand strength.

Many network designs were built to maximize throughput, not to keep latency steady during real-time interactions. That limitation becomes clear when everyday tasks overlap—like a cloud backup running while someone joins a video call. When background uploads compete with apps that expect instant responses, older designs let delays climb under load.

Technologies like L4S give operators new tools to address these architectural gaps. They reduce latency spikes during congestion, keep performance steadier across different types of traffic, and create measurable improvements operators can use for differentiation. A few key forces are driving L4S adoption:

  • More activity now happens at the same time on a single connection, making delay spikes far more noticeable to users.
  • Vendor support for L4S has matured, making it practical to deploy at scale.
  • Operators can roll it out incrementally, improving latency without replacing existing infrastructure.

Keeping latency stable during busy periods is becoming a meaningful competitive advantage. The operators investing now are doing it to strengthen service quality, reduce support friction, and prepare for workloads that rely on tight timing rather than speed alone.

The Application Ecosystem Is Moving Toward Stable Low Latency

Many emerging applications require latency to stay low and consistent; even small increases can disrupt the user experience, so these apps depend on mechanisms that prevent delays from rising when networks become busy. As L4S support expands across operating systems, browsers, and real-time audio/video systems, developers will gain a more reliable foundation for experiences that require low latency and immediate responsiveness.

Application support is essential because L4S only delivers its full value when software knows how to react to early congestion signals. When apps can interpret L4S feedback, they adjust their sending rates before delays become visible, keeping interactions smooth even when networks are busy. This coordination between networks and applications is what makes low-latency performance noticeable in real use—not just in controlled testing.

L4S adoption is accelerating in several areas:

  • Browsers are integrating L4S-aware feedback, especially through WebRTC.
  • Operating systems and devices are beginning to enable L4S, increasing the number of devices that can benefit.
  • Cloud gaming and interactive media platforms are testing L4S, improving responsiveness during busy periods.
  • Developers are gaining clearer signals to react to congestion, allowing their apps to adjust sending rates sooner.

These shifts point toward a broader move to more tightly timed digital experiences, including:

  • XR and spatial computing, which require the display to update immediately when the user moves.
  • Live collaboration tools that rely on immediate responsiveness.
  • AI-driven assistants and interactive agents whose voice and video models rely on cloud inferencing and need smooth, fast exchanges to feel natural.
  • New real-time applications that will emerge as latency becomes more predictable.

As more apps and platforms adopt L4S, users will benefit from smoother, more responsive performance in everyday interactions. In addition, operators may have opportunities to offer L4S-enabled service tiers for specific audiences—such as gamers—creating new ways to capture value from these improvements. 

The Future of Low-Latency Networking

The next generation of connected experiences will place even greater pressure on latency. Immersive XR environments, remote-operation scenarios, industrial automation, and interactive AI all depend on responses that stay smooth even when networks are busy. When delays increase, these experiences break down, making stable latency a core requirement for what comes next.

Technologies like L4S give operators a practical way to deliver the stable latency that emerging applications demand. As networks adopt modern congestion-control mechanisms like L4S and more applications learn how to react to those early congestion signals, users will see more consistent performance during busy periods.

Low-latency performance is becoming a core competitive requirement. Operators that invest early will be better positioned for the increasingly interactive workloads ahead—workloads that will place even greater emphasis on consistently low latency. To explore loaded latency and L4S in more detail, watch the full webinar on demand.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| November 18, 2025

Public Safety Connectivity: How Agencies Can Strengthen Critical Communications

Connectivity failures are inconvenient for most people, but for public safety agencies they can be catastrophic. Whether coordinating evacuations, dispatching first responders, or keeping hospitals online, resilient networks are essential to saving lives and protecting communities.

Real-world disasters highlight the stakes. From wildfires in Maui and California to hurricanes across the Southeast, connectivity has been disrupted when agencies needed it most. In those moments, responders face delays, residents lose access to critical information, and hospitals struggle to coordinate care. The risks are clear: without resilient communications, every aspect of emergency response becomes harder and more dangerous.

Agencies need visibility into how networks perform in the real world. Ookla’s ecosystem—Speedtest, Downdetector, and Ekahau—provides the tools to strengthen preparedness, improve response, and accelerate recovery. 

Read on to learn why resilient connectivity is so vital, the key challenges agencies face, and what recent disasters like the Lahaina wildfires reveal about the stakes. To dig deeper, download our full guide: Building Resilient Connectivity for Public Safety and Emergency Management.

When Communication Fails, Safety Fails

Disasters often strike in chaotic conditions where infrastructure is already damaged or failing. Responders may be rushing into wildfires, floods, or tornadoes with limited visibility and unreliable networks. Hospitals and shelters may suddenly find themselves overwhelmed, with communications buckling under the weight of demand. In these situations, reliable communication can often determine whether help arrives on time.

Communication breakdowns ripple outward. A single dead zone can cut a fire crew off from dispatch. If hospitals can’t access patient records or coordinate ambulance arrivals, patients may not get the care they need in time. When dispatchers can’t relay caller details in real time, teams enter dangerous situations without critical information—and communication breakdowns can affect every group involved in an emergency response:

  • First responders: Without reliable coverage, teams may lose contact with dispatch or lack access to real-time data
  • Dispatchers: Disrupted networks hinder the ability to gather details from callers, delaying information for crews in the field
  • Fire teams: Loss of radio or mobile service can force reliance on hand signals or runners, slowing response when every second matters
  • Healthcare and EMS: Connectivity failures prevent hospitals and ambulances from accessing patient records or coordinating care, directly affecting outcomes

Loss of service and communication blackouts are not hypothetical risks. From Chief Barry Hutchings of the Western Fire Chiefs Association describing a fire scene with no portable radio coverage, to hurricanes and wildfires cutting off entire regions, the consequences are well documented. Reliable communications across response routes, hospitals, and community centers can mean the difference between a timely response and catastrophic outcomes.

Key Connectivity Challenges for Agencies

Agencies are often asked to deliver flawless communication in the most challenging environments. Rural areas stretch networks thin, mountains block signals, and older government facilities can block wireless coverage. During a disaster, even modern infrastructure can be compromised by fire, flood, or wind damage.

The problem goes beyond poor coverage or inadequate capacity; agencies also often lack detailed insight into how networks perform in specific areas. An agency may know a dead zone exists but have no data to demonstrate the problem and secure funding for improvements. In other cases, it may be flying blind during an outage, without real-time visibility into what has failed or how widespread the issue is. Without the right tools, even well-prepared teams can struggle to manage the connectivity challenges emergencies present:

  • Coverage and reliability gaps: Rural areas, mountainous terrain, and dense building materials can create persistent dead zones
  • In-building connectivity gaps: Older or secure government facilities often block signals and limit network upgrades
  • Outdated infrastructure and regulatory hurdles: Aging infrastructure and regulatory hurdles slow tower deployments and upgrades
  • Situational blind spots: Without real-time network data, agencies can often lack the visibility needed to pinpoint outages, understand their scope, and coordinate an effective response
  • Infrastructure vulnerabilities: Natural disasters can often damage physical infrastructure, creating extended blackouts
  • Funding constraints: Without concrete evidence of where and how networks are falling short, agencies can struggle to secure federal or state support for upgrades

These challenges leave agencies vulnerable. Without reliable coverage and visibility, response times slow, public trust erodes, and communities face greater risk during emergencies.

A Framework for Preparedness, Response, and Recovery

Public safety cannot be purely reactive. Agencies must plan in advance, monitor conditions as crises unfold, and evaluate how well systems recover once the danger passes. The emergency management lifecycle—preparedness, response, recovery—ensures that agencies are not just reacting, but instead building long-term resilience.

In practice, responsibilities for each stage of that lifecycle are typically split across different teams. One group may focus on planning coverage improvements, another may monitor outages as they occur, and another might validate in-building Wi-Fi performance. Without a unified view, important gaps can go unnoticed. 

To close those gaps, agencies need integrated solutions that connect every stage, from pre-disaster planning through post-disaster recovery. A complementary mix of network performance data from Speedtest Intelligence®, website and service outage insights from Downdetector®, and wireless survey capabilities from Ekahau helps ensure that each phase of the emergency management lifecycle is supported with the right visibility and intelligence:

  • Preparedness: Agencies use Speedtest Intelligence® data to identify coverage gaps, assess high-risk zones, and validate network upgrades. Public safety IT teams also use Ekahau tools to conduct wireless surveys and verify network performance in critical locations such as hospitals, command centers, and shelters.
  • Response: Downdetector® detects website and service outages in real time, giving agencies early awareness of issues. Meanwhile, Speedtest provides immediate visibility into performance changes, while Ekahau validates temporary networks in shelters or mobile command posts.
  • Recovery: Agencies measure restoration speed, validate coverage improvements, and document outcomes to inform future investments. Downdetector and Speedtest data help secure funding by showing where networks fail during emergencies and measuring how quickly they recover.

Approached this way, the lifecycle ensures agencies are not only reacting in the moment but building more resilient systems for the future.

Lessons from Lahaina

When wildfires tore through Lahaina, Hawaii, in August 2023, connectivity collapsed when residents and emergency managers needed it most. Evacuees had little information about safe routes, and responders struggled to understand whether networks were down locally or across entire islands. Without visibility into network conditions, emergency responders could not determine where they could reach people and where communications had already failed.

Tools like Downdetector and Speedtest provided critical real-time visibility into network conditions. By combining outage reports with performance data, agencies gained the situational awareness they needed to prioritize limited resources and focus on areas most in need.

The insights revealed a clear picture of how the crisis was unfolding, and that visibility informed response decisions. Downdetector tracked sudden spikes in outage reports, while Speedtest Intelligence revealed steep declines in network performance. Together, those insights allowed responders to distinguish between isolated disruptions and broader failures, helping prioritize key resources. The Lahaina fires show how connectivity insights can be as essential as water or fuel when disaster strikes.

The lesson from Lahaina is clear: visibility into connectivity provides essential intelligence during disasters. Identifying where networks fail and how they recover enables agencies to coordinate more effectively with providers, support first responders, and keep communities informed as conditions evolve.

Conclusion

Public safety and emergency management agencies cannot afford uncertainty in communication. Reliable networks are the foundation of preparedness, response, and recovery—and the consequences of failure are too great to ignore.

Ookla’s ecosystem of Speedtest, Downdetector, and Ekahau gives agencies the visibility, reliability, and security they need to protect communities. With better data, decision-makers can plan smarter, respond faster, and restore service more effectively when disaster strikes.

To learn how your agency can strengthen its response capabilities and ensure networks are resilient when it matters, check out our full guide, Building Resilient Connectivity for Public Safety and Emergency Management.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| November 6, 2025

Why Satellite Broadband Is Becoming a Bigger Part of U.S. Rural Connectivity Plans

Expanding broadband isn’t just about laying more fiber. It’s about finding practical ways to reach the rugged, rural, and hard-to-serve places where traditional infrastructure projects are slow or cost-prohibitive. That challenge is at the heart of the National Telecommunications and Information Administration’s BEAD program, which is fueling one of the largest broadband infrastructure efforts in U.S. history. Fiber networks remain at the center of those plans, but the high cost and complexity of reaching remote regions means fiber alone won’t connect everyone quickly or affordably.

Because fiber and fixed wireless can’t cover every corner of the country, states are increasingly turning to satellite broadband to reach the most challenging locations. Low-Earth orbit (LEO) providers like Starlink and Amazon Kuiper are being factored into BEAD strategies as practical solutions for areas where traditional infrastructure isn’t financially or logistically viable. States like Maine and Hawaii have already used satellite service to reach homes in remote and geographically complex areas where fiber or fixed wireless deployments would be slow or cost-prohibitive.

For states looking to connect their most difficult-to-serve communities, understanding how satellite fits into the rural broadband mix is becoming essential. Watch our recent webinar where we’re joined by broadband leaders from Maine and Hawaii for a discussion on performance trends, policy implications, and the evolving role of satellite broadband.

Satellites Are Playing a Bigger Role in BEAD Allocations

Extending broadband into rural, sparsely populated, and geographically challenging areas has always required tradeoffs between cost, timelines, and technology. Fiber may deliver the strongest long-term performance, but extending it to extremely rural or geographically isolated areas can cost hundreds of thousands of dollars per location and take years to complete. In Alaska, for example, Quintillion Subsea is receiving more than $113,000 per address to extend fiber—underscoring just how expensive these builds can be. Those realities have forced states to look at a wider mix of technologies, and satellite connectivity has quickly become part of that conversation.

BEAD allocations are already reflecting this shift. While fiber remains the dominant technology, satellite internet’s growing share shows that states aren’t treating it as a niche option; many are planning for it as a complementary piece of their buildout strategies—particularly in places where fiber or fixed wireless is too expensive or complex to deploy.

  • Fiber is receiving the bulk of BEAD funds: Fiber accounts for the majority of BEAD allocations, but satellite internet has carved out a meaningful share of initial awards across several states.
  • Starlink and Kuiper entering the picture: Starlink currently represents about 3% of BEAD funding awarded so far, with Amazon Kuiper just under 1%.
  • Rising confidence in satellite internet: The share of BEAD dollars directed to satellite internet signals increasing trust in the technology as a practical option for reaching rural and hard-to-serve communities.

Even a small share of BEAD funding can cover areas where fiber builds would have stalled or taken years. Satellite connectivity is moving from a fallback option to a planned part of many states’ broadband strategies.

Real-World Deployments Show How States Are Using Satellites

The shift toward satellite connectivity is happening now. States are already using LEO satellite service to close stubborn coverage gaps that traditional infrastructure can’t reach quickly or affordably. Maine and Hawaii offer two clear examples of how the technology is being put to work today.

These states face some of the toughest connectivity challenges in the country—from remote islands and mountainous terrain to areas where no infrastructure exists at all. Instead of waiting for long fiber construction timelines, both turned to satellite as a fast bridge to reliable service. In the webinar, we got a closer look at how Maine and Hawaii are using satellite:

  • “Working Internet ASAP” connecting unserved homes: Maine’s Working Internet ASAP program provided more than 8,800 unserved locations with free Starlink kits and installation, focusing on households with no service options of any kind.
  • Hawaii blending fiber and satellite: Hawaii’s approach combines fiber and satellite internet to reach rural areas, where cutting through lava rock or laying undersea cables would be prohibitively expensive.
  • Early deployments tied to BEAD: Both states are aligning their satellite connectivity efforts with BEAD planning so those initial builds can transition smoothly into long-term programs.

The early adoption of satellite internet reflects both a shift in policy and a leap in performance, moving the technology from a last-resort option to an intentional part of state broadband strategies.

Strengths and Limits of LEO Satellite Technology

LEO satellite connectivity has advanced quickly in the past decade. The technology is now capable of delivering broadband speeds to places that were once all but unreachable. Locations that would have required massive fiber investments or been written off entirely can now be connected far more quickly. That shift is reshaping how states and providers think about rural deployment strategies.

Massive increases in spectral efficiency, falling launch costs, and cheaper user equipment have made satellite internet both faster and more widely available. Several technical and economic factors are driving this expansion, while also shaping where the technology is most effective.

  • Localized congestion remains a factor: Network slowdowns can occur in high-traffic areas, as seen in Pershing County, Nevada during the Burning Man festival.
  • Spectrum reuse driving capacity gains: Satellites now use more focused “spot beams” that cover smaller geographic areas. Dividing coverage into smaller zones allows providers to reuse the same frequencies in different places, which increases total network capacity without needing additional spectrum. A back-of-the-envelope sketch of this effect follows this list.
  • Lower costs enabling large constellations: Falling launch and build costs have made it financially feasible to deploy thousands of satellites, dramatically expanding the scale and reach of satellite internet networks.
  • Wider coverage, but limited density: Satellites can now cover nearly every corner of the country, but overall capacity remains best suited for low-density regions. Heavy usage in concentrated areas can still strain the network, and in some locations providers have introduced usage tiers or surcharges to manage excess demand.
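
As referenced above, a back-of-the-envelope calculation shows why spot-beam frequency reuse matters so much for capacity. Every figure below is an illustrative assumption rather than a reported value.

```python
# Back-of-the-envelope sketch of frequency reuse with spot beams.
# All numbers are illustrative assumptions, not measured values.
bandwidth_hz = 250e6          # spectrum available within one beam (assumed)
spectral_efficiency = 2.5     # bits per second per Hz (assumed)

single_wide_beam = bandwidth_hz * spectral_efficiency
spot_beams = 16               # same spectrum reused in each smaller beam
with_reuse = spot_beams * bandwidth_hz * spectral_efficiency

print(f"one wide beam: {single_wide_beam / 1e9:.1f} Gbps over the whole footprint")
print(f"16 spot beams: {with_reuse / 1e9:.1f} Gbps over the same footprint")
```

Because each beam reuses the full slice of spectrum, aggregate capacity scales roughly with the number of beams, which is why narrower beams and denser constellations translate directly into more usable throughput per region.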

Satellite connectivity plays a critical role in reaching rural and remote communities where fiber or fixed wireless is impractical or too expensive. It works best as one piece of a broader broadband strategy that blends multiple technologies to reach every corner of a state.

Reliability, Compliance, and Performance Monitoring

When states invest millions to bring broadband to rural communities, delivering a signal isn’t enough. Those connections need to support everyday needs like work, school, telehealth, and emergency services with consistent speeds, low latency, and reliable uptime (the amount of time a connection is available and working as expected), giving users a dependable experience day in and day out. To make sure that happens, states are moving beyond one-time performance checks at installation—where service is validated only on day one—and putting systems in place to measure how well connections perform over time.

Starlink’s latency in the U.S. averages around 40 milliseconds, well below BEAD’s 100 ms requirement—a strong indicator that the technology can meet performance targets. But environmental factors can still affect individual sites. Snow, ice, or tree cover can interfere with line-of-sight and impact connection quality, though professional setups help minimize those disruptions. States are starting to define how they’ll verify performance, ensure service meets funding benchmarks, and build accountability into satellite deployments.

  • Independent verification tools: Speedtest and other third-party platforms can help verify that real-world performance matches program requirements.
  • Strong reliability signals in Maine: The state has reported minimal complaints from satellite internet users, a good indicator of reliable service in hard-to-serve areas.
  • Hawaii adapting regulatory frameworks: Hawaii is modifying existing regulatory frameworks to ensure providers meet performance expectations under BEAD-funded deployments.
  • Enforcement mechanisms still developing: Oversight and accountability frameworks are expected to mature as satellite deployments scale.

Satellites can bring broadband to rural communities quickly, but speed alone isn’t the goal. States are putting new systems in place to make sure that connectivity remains consistent, reliable, and measurable over time.

Competition and Capacity Will Shape What Comes Next

Satellite connectivity is moving into a new phase—one defined less by proving it works and more by deciding how to use it at scale. As states plan their long-term broadband strategies, they’ll be weighing technical tradeoffs, provider options, and capacity constraints in ways they haven’t had to before.

Amazon Kuiper’s upcoming commercial launch will introduce real competition for Starlink, giving states more than one major provider to consider for BEAD-funded deployments. Starlink relies on Ku-band spectrum, which is generally less sensitive to weather interference, while Kuiper will use Ka-band spectrum, which can support stronger uplink capacity but may be more vulnerable to signal loss in heavy rain.

The combination of band choice and network architecture will shape how each service performs and where it fits best. As competition heats up, several factors will shape how states evaluate satellite providers under BEAD.

  • Kuiper entering the market: Amazon Kuiper’s commercial launch will bring new competitive pressure to Starlink’s early lead, giving states more leverage and flexibility in future deployments.
  • Band differences shaping performance: Ku-band (used by Starlink) is less sensitive to weather, while Ka-band (planned for Kuiper) can support stronger uplink performance but may be more vulnerable to interference. These tradeoffs will influence where each provider’s technology is best suited.
  • Scaling capacity as a key challenge: Expanding network capacity as more users come online will be critical to maintaining performance, particularly in rural areas with seasonal demand spikes or high-density events.

As satellite competition ramps up, states will need to balance cost, coverage, and long-term performance when deciding how these technologies fit into their broadband strategies. The choices they make in the coming years—about providers, technologies, and capacity planning—will shape how quickly and reliably rural communities get connected.

Conclusion

Satellite broadband is no longer a fringe technology. It’s being deployed today in some of the toughest connectivity environments in the U.S., and BEAD allocations show it’s becoming part of state-level planning in a meaningful way. Maine and Hawaii are proving what’s possible when satellites are used strategically, while performance improvements make the technology more viable every year.

As competition increases and deployment strategies mature, satellites are poised to play an integral role in helping close the digital divide, complementing fiber and fixed wireless to deliver broader, faster, and more resilient connectivity. 

To learn more about the emergence of satellite internet, watch our full webinar on demand, “Satellite Internet Uncovered: Performance Trends and Policy Implications.” 

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| October 23, 2025

How Speedtest Insights™ Helps Operators Meet Modern QoE Expectations

Designing and operating mobile networks is more complex than ever. End users judge networks not just by their speed or coverage, but by how well they support everyday digital experiences, from smooth video playback to responsive apps and quick page loads. But delivering that level of service is increasingly difficult as traffic grows, applications become more demanding, and user expectations continue to rise.

To provide the seamless, high-quality experience users expect, operators need deep visibility into both network performance and quality of experience (QoE). That visibility shows how networks behave under real-world conditions and reveals where improvements are needed most, giving operators the insight to identify issues, prioritize upgrades, and deliver consistently smooth service. Speedtest Insights™ brings QoE and network performance data together in one place, giving operators a complete view of how users experience connectivity.

In this article, we explore how that visibility helps operators identify performance issues, benchmark against competitors, and take targeted action. For a deeper look at QoE and real-world network performance, watch our on-demand webinar, “How to Optimize Mobile Networks for Today’s QoE Requirements.”

Beyond Coverage: The Role of QoE in Network Performance

Speed is a fundamental benchmark for measuring connectivity, but quality of experience depends on far more than a single metric. Ultimately, QoE is defined by how reliably networks support everyday activities such as video streaming, web browsing, and gaming. Even a fast connection that produces laggy apps or sluggish page loads will leave users frustrated, which is why QoE has become a more meaningful measure of performance than speed alone.

QoE data provides a deeper view of how networks perform under real-world conditions and where users encounter friction. Key measurements include:

  • Page failure rates show how often sessions fail to load
  • File transfer throughput reflects how quickly large files move across the network
  • Latency indicates how responsive the network is for real-time applications
  • Rebuffering times highlight interruptions during video playback

Instead of relying only on conventional metrics or waiting for customer complaints, operators can use QoE data to anticipate where service may fall short. That visibility provides a clearer picture of the user experience and highlights where investments will deliver the greatest return.
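
As a concrete illustration of the metrics listed above, the sketch below derives a page failure rate, a median latency, and a rebuffering ratio from a handful of hypothetical session records. The field names and sample values are invented for illustration and do not reflect Ookla’s actual data schema.

```python
# Sketch of how a few QoE indicators might be derived from session logs.
# Field names, values, and definitions here are hypothetical.
from statistics import median

sessions = [
    {"page_loaded": True,  "latency_ms": 38, "rebuffer_ms": 0,    "watch_ms": 60000},
    {"page_loaded": False, "latency_ms": 95, "rebuffer_ms": 0,    "watch_ms": 0},
    {"page_loaded": True,  "latency_ms": 52, "rebuffer_ms": 1200, "watch_ms": 45000},
]

page_failure_rate = sum(not s["page_loaded"] for s in sessions) / len(sessions)
median_latency = median(s["latency_ms"] for s in sessions)
video = [s for s in sessions if s["watch_ms"] > 0]
rebuffer_ratio = sum(s["rebuffer_ms"] for s in video) / sum(s["watch_ms"] for s in video)

print(f"page failure rate: {page_failure_rate:.0%}")
print(f"median latency:    {median_latency} ms")
print(f"rebuffering ratio: {rebuffer_ratio:.1%}")
```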

Map image of QoS and QoE in Spain

Roaming Performance: Visibility Beyond Connectivity

Whether subscribers are traveling abroad or simply moving outside their home network area, connecting to a roaming network is only part of the equation. Users expect their apps, video calls, and file transfers to work just as reliably while roaming as they do on their home networks, yet degraded performance in these scenarios can quickly undermine satisfaction and loyalty.

QoE performance data makes it possible to uncover where roaming users encounter poor performance. For example, in our recent webinar on optimizing mobile networks for today’s QoE requirements, an analysis of roaming traffic between Indonesian subscribers and a Saudi Arabian network showed that while local users typically achieved download throughput of around 46 Mbps, roamers were capped at just 6.5 Mbps. Latency also spiked, jumping from 51 ms for local subscribers to 270 ms for roamers.

Those differences might look like technical details on paper, but in practice they can make real-time services like video calls nearly unusable. With access to detailed performance data and QoE insights, operators can:

  • Compare roaming and local user experiences side by side
  • Detect discrepancies that point to misconfigured interconnects, throttling policies, or traffic prioritization issues
  • Investigate additional KPIs like page failure rate or DNS resolution times to pinpoint root causes

Detailed performance insights give operators a chance to address roaming issues before they trigger complaints or cause customers to switch providers. Taking action early not only protects the user experience but also builds confidence that subscribers will stay connected and satisfied wherever they travel.

Benchmarking Competitive Performance With Granular QoE Data

Competitive benchmarking gives operators a clear picture of how their network performance compares to that of others in the market. Indeed, the ability to compare network performance side by side with competitors helps operators make smarter infrastructure decisions, prioritize upgrades, and focus resources where they’ll have the greatest impact.

In our recent webinar, we looked at how multi-ping latency to content delivery networks (CDNs) varied for a major U.S. operator. At a national level, the operator’s own median latency appeared competitive, but that overall figure masked significant regional performance issues. Specific markets, including Mississippi and Louisiana, showed latency levels roughly 60% higher than the operator’s national average. Those regional gaps directly affect how users experience the network, often translating into poor streaming, lagging apps, and delays in content loading.
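
The analysis described above is straightforward to reproduce on any set of latency samples: compute a national median, compute regional medians, and flag regions that run well above the baseline. The sample data and the 1.5x threshold in the sketch below are illustrative, not figures from the webinar.

```python
# Sketch: flag regions whose median latency runs well above the national figure.
# Sample values and the 1.5x threshold are illustrative only.
from statistics import median

latency_samples_ms = {            # hypothetical per-region multi-ping samples
    "Ohio":        [29, 31, 30],
    "Texas":       [27, 28, 30],
    "Colorado":    [31, 33, 30],
    "Mississippi": [48, 52, 50],
    "Louisiana":   [47, 49, 51],
}

national = median(v for samples in latency_samples_ms.values() for v in samples)
for region, samples in latency_samples_ms.items():
    regional = median(samples)
    if regional > 1.5 * national:
        print(f"{region}: {regional} ms vs national {national} ms "
              f"(+{(regional / national - 1):.0%})")
```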

Tools like Speedtest Insights give operators the granular data they need to benchmark QoE performance effectively and turn competitive comparisons into targeted network improvements:

  • Identify regions where latency, throughput, or load times fall behind competitors
  • Correlate network conditions with user experiences such as slow content delivery
  • Prioritize infrastructure investments where improvements will make the biggest difference

With detailed competitive insight, network teams can focus their efforts on changes that customers will actually notice. Targeted network upgrades and service enhancements not only close performance gaps but also improve the overall user experience and strengthen the provider’s position in the market.

Map of "Perform Competitive Benchmarking: Latency Performance"

Diagnosing Issues With Routing and Load Times

Seemingly minor network changes can create major performance problems if left undetected. Page load delays caused by inefficient routing, for example, can frustrate users even when overall network capacity looks healthy. Speedtest Insights helps operators identify these subtle problems early, revealing where performance is being affected before it disrupts the user experience.

One example from our recent webinar showed how routing changes can directly impact user experience. On one operator’s network in Qatar, page load times for Google and YouTube suddenly doubled—not because of a problem with the content itself, but because traffic was being routed through a server located much farther away from end users. That additional distance increased latency and, in turn, increased page load times.
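
The arithmetic behind that example is worth spelling out. Light travels through fiber at roughly 200 km per millisecond, and a page load typically involves several round trips (DNS, TCP, TLS, and the HTTP requests themselves), so a routing detour of a couple of thousand kilometers can add a noticeable amount of load time. The figures in the sketch below are illustrative assumptions, not measurements from the Qatar example.

```python
# Rough arithmetic (illustrative assumptions): how extra routing distance
# turns into extra page load time.
extra_km_one_way = 2000          # assumed detour to a farther server
km_per_ms = 200                  # approximate propagation speed of light in fiber
round_trips_per_page = 4         # assumed handshakes plus requests

extra_rtt_ms = 2 * extra_km_one_way / km_per_ms
extra_load_ms = extra_rtt_ms * round_trips_per_page
print(f"extra RTT ~ {extra_rtt_ms:.0f} ms, extra page load ~ {extra_load_ms:.0f} ms")
```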

With detailed performance data and QoE insights, network teams can:

  • Detect sudden spikes in page load times and connect them to specific routing changes
  • Track performance over time to verify the effectiveness of network fixes and upgrades
  • Identify pockets of degradation that would otherwise be masked by aggregate metrics

Detailed visibility into routing-related performance issues is essential for maintaining a high-quality user experience. This insight helps ensure that network changes lead to measurable improvements rather than introducing new performance bottlenecks.

image of Quality of Experience Datasets

Turning Insight Into Action With Speedtest Insights

Understanding network performance is only valuable if it leads to real improvements. Once operators know what is undermining the user experience—whether it’s roaming latency, regional CDN delays, or routing inefficiencies—the next step is using that information to make the network better. That’s where Speedtest Insights plays a key role: transforming billions of real-world data points into practical intelligence that guides day-to-day decisions.

Speedtest Insights brings QoE and network performance data together in a single platform, making it easier for teams to analyze trends, investigate anomalies, and track the impact of changes over time. It enables operators to:

  • Aggregate QoE and network data into intuitive dashboards with flexible filtering by region, technology, SIM type, and service
  • Visualize performance patterns geographically to spot emerging issues before they spread
  • Drill down from high-level KPIs to detailed metrics like latency, rebuffering, and page failure rates
  • Export raw data for deeper analysis or integration into internal monitoring and planning tools

With these capabilities built into their workflow, network teams can move beyond reacting to problems and instead proactively make decisions that improve service quality. Proactive decision-making strengthens the user experience, reduces the risk of churn, and keeps operators ahead of evolving performance demands.

Conclusion: Delivering the Experiences Users Expect

Internet users today evaluate networks based on real-world performance, from clear video calls to instant web page loads and uninterrupted gaming. Delivering this level of performance requires visibility into how networks behave under real-world conditions and tools that turn those insights into targeted improvements.

Speedtest Insights equips operators with those capabilities. The platform brings proactive monitoring, granular benchmarking, and detailed root cause analysis together in one place, helping network teams understand, diagnose, and improve QoE across every layer of the network. These capabilities enable providers to deliver a more reliable, responsive experience that keeps users connected and loyal.

Illustrative graphic for opening title: How to Optimize Mobile Networks for Today’s QoE Requirements

To explore these strategies in greater depth and see Speedtest Insights in action, watch our full webinar on optimizing mobile networks for today’s QoE requirements.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| October 1, 2025

RAN Planning Made Smarter: Using Real-World Data to Solve Coverage and Performance Challenges

Designing, optimizing, and maintaining mobile networks is more complex than ever. Subscriber expectations for faster speeds, seamless 5G coverage, and consistent quality of experience (QoE) continue to climb while operators face the ongoing challenges of rising traffic, dense urban environments, and the demands of 5G rollouts. Meeting those challenges requires more than strong infrastructure; it requires a deep understanding of how users actually experience the network.

Think about the moments that shape user perception: a dropped call in a busy stadium, slow email downloads during a morning commute, or a video buffering at home. These experiences often determine whether customers stay loyal or start looking elsewhere. For operators, the key to avoiding service disruptions that frustrate users lies in smarter network planning and optimization.

Speedtest Insights™ equips network teams with the data they need to identify and resolve performance issues. With billions of real-world samples collected daily, the platform helps network teams pinpoint problem areas, prioritize investments, and make data-driven improvements. Read on to learn how operators can address coverage gaps, manage congestion, and stay ahead of competitors. For a deeper exploration of these strategies, download our guide, Unlocking RAN Potential: A Guide to Network Planning and Optimization with Speedtest Insights.

Addressing Coverage Gaps

Coverage gaps remain one of the most obvious pain points for mobile subscribers. A dead zone on a highway, weak indoor coverage in a mall, or patchy service in a residential area can quickly damage brand reputation. Dead zones and weak coverage are especially frustrating because they conflict with users’ expectation of being connected everywhere.

For operators, failing to address coverage holes is more than an inconvenience—it’s a business risk. Coverage gaps create negative customer experiences, limit revenue opportunities, and open the door for competitors to win over dissatisfied users. Delivering consistently strong coverage is a baseline requirement for staying competitive, and Speedtest Insights helps network teams do exactly that by enabling them to:

  • Verify coverage levels and the quality of network footprint
  • Locate weak coverage areas and target improvements like repeaters or antenna tilt adjustments
  • Identify the best locations for new cell sites to expand coverage into underserved areas

Reliable coverage builds trust with subscribers and prevents rivals from gaining ground in areas of weakness. Speedtest Insights reveals where networks fall short, enabling operators to take targeted action that improves the user experience.

Managing Capacity and Congestion

Strong coverage alone does not guarantee a smooth user experience, even in areas that perform well during quieter times. In crowded environments, whether streaming video on a train or uploading photos at a concert, performance can still degrade due to congestion, capacity limits, and other factors, a reminder that coverage is only one part of the connectivity picture.

Addressing these challenges is critical. Poor speeds during peak times lead to complaints, damage brand reputation, and can push subscribers toward competitors. Understanding how capacity and performance shift under heavy demand is essential for operators building resilient networks and retaining customers.

Speedtest Insights helps by revealing how networks perform under real-world peak conditions, highlighting where capacity is strained and improvements are needed:

  • Identify areas that perform well during off-peak hours but degrade during busy periods
  • Adjust antennas or layers to redistribute traffic loads more effectively
  • Spot potential new cell site opportunities where demand consistently outpaces existing capacity

Managing capacity proactively ensures a consistent user experience no matter the time of day or size of the crowd. With Speedtest Insights, operators can see where networks strain under pressure and take action to minimize the impact on subscribers.

Managing and Mitigating Interference

Strong signal bars don’t always equal strong performance. Subscribers can experience dropped calls or poor connections even when the signal looks fine. Often, the culprit is interference—competing signals or noise that disrupts performance and frustrates users.

Interference is more than just a nuisance for operators. Left unaddressed, it can degrade network performance, increase customer complaints, and reduce the return on infrastructure investments. Identifying where interference occurs and understanding its impact is critical for delivering the quality subscribers expect. Reducing interference and its effects begins with actions like:

  • Plot coverage versus quality maps to quickly identify areas with downlink interference
  • Correlate poor quality with strong signal levels to pinpoint areas where noise or overlapping cells cause degradation
  • Use RF adjustments such as antenna tilts or power reduction to mitigate issues

Interference remains one of the most common barriers to delivering consistent quality of experience. Managing and mitigating it effectively enables network teams to apply targeted fixes and maintain reliable performance across the entire footprint.

Staying Ahead of Competitors

In a highly competitive market, network performance is more than a technical metric or marketing message; it’s a key competitive differentiator. That’s why competitive benchmarking goes far beyond basic market research. It helps operators pinpoint areas of competitive strength, uncover performance gaps, and see how their network stacks up in the places that matter most to customers. Turning those insights into action starts with steps like these:

  • Benchmark performance against competitors across metrics like latency, RF quality, and video streaming quality.
  • Track competitor rollouts and identify weaknesses to target.
  • Use performance data to shape marketing strategies and back up network leadership claims.

Competitive benchmarking informs both technical decisions and commercial strategy. Speedtest Insights equips operators with the intelligence to identify strengths, close performance gaps, and make improvements that secure long-term advantages.

Looking Ahead

Mobile networks will continue to evolve under mounting pressure as user expectations rise, demand increases, and the need for consistent performance grows across every environment. Operators aiming to maintain a competitive edge will need more than traditional metrics; they will require a clear view of how networks perform from the subscriber’s perspective.

Speedtest Insights provides that visibility and depth of understanding. Real-world performance data helps operators pinpoint weaknesses, validate network improvements, benchmark against competitors, and prioritize investments with greater precision. That insight makes it possible to deliver the quality of experience customers expect today while preparing for the demands of tomorrow.

To explore practical strategies for RAN planning and optimization, download our guide, Unlocking RAN Potential: A Guide to Network Planning and Optimization with Speedtest Insights, featuring real-world use cases to help operators maximize network performance.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| September 30, 2025

Speedtest Certified™: Defining Network Excellence for Properties

We’ve all felt the frustration of arriving at a hotel, airport, or event only to realize the internet connection doesn’t deliver. It’s a shared challenge: guests demand seamless connectivity, and property owners know poor performance can cost them business. Behind the scenes, they invest heavily in building and maintaining high-performing networks, but until now, there hasn’t been a trusted way to prove that effort.

That’s why Ookla is launching Speedtest Certified™. Built on the trusted accuracy and brand recognition of Speedtest®, the program gives properties a clear and credible way to prove their networks deliver strong, reliable performance. Speedtest Certified is a visible badge of connectivity excellence—something consumers can recognize instantly and businesses can showcase with confidence.

The Speedtest Certified Competitive Edge 

Speedtest Certified turns proven connectivity into a visible advantage, creating value for both property owners and the people who rely on their networks. For property owners, it provides a credible, third-party method to differentiate their locations and prove their commitment to exceptional digital experiences. For guests, tenants, and visitors, it offers a simple and trusted way to identify locations with certified wireless connectivity.

  • Hotels & Resorts: Wi-Fi is a fundamental expectation that directly impacts revenue by shaping booking decisions, event contracts, and overall guest loyalty. Speedtest Certified turns a hotel’s network into a marketing asset, helping attract high-value guests, secure event business, and justify premium room rates.
  • Multifamily Residential: For residents, internet access is as essential as power or water. Certification gives property managers clear proof that their buildings deliver the reliable connectivity people expect at home. That assurance helps attract new tenants, improve retention, and increase long-term property value.
  • Stadiums & Large Public Venues: From digital ticketing to mobile ordering and cashless payments, a stadium’s core operations depend on a strong network. Verified proof of network quality gives venues the credibility to book top events, generate more in-stadium revenue, and deliver the digital experiences fans and sponsors expect.
  • Commercial Real Estate Developments: Digital infrastructure is a powerful differentiator in a competitive market. Certification highlights buildings that deliver reliable connectivity, making it easier to attract and retain tenants while commanding premium rates.
  • Businesses & Workplaces: Justifying infrastructure investments to leadership can be a challenge for businesses of any size. Speedtest Certified provides independent, data-driven proof to validate network spending and demonstrate ROI. The certification turns technical success into a visible badge of excellence, giving stakeholders clear metrics that show the network is built for resilience, employee satisfaction, and productivity.

Speedtest Certified brings clarity and confidence to connectivity. For property owners, it turns network performance into a visible, marketable advantage that builds trust with customers and stakeholders alike. For consumers, it removes the uncertainty around whether a venue can deliver the connected experiences they expect.

Ookla’s Comprehensive and Trusted Methodology

The Speedtest Certified methodology sets the standard for network verification, drawing on multiple datasets across technologies to deliver objective proof of performance and ensure every certification reflects real-world conditions.

Comprehensive On-Site Network Assessment

Using Ekahau’s professional-grade tools, the on-site evaluation assesses the Wi-Fi network end to end. It captures metrics such as signal strength and quality, signal-to-noise ratio, and network capacity. The assessment also reviews channel overlap and interference, SSID configuration, roaming efficiency, spectrum compliance, and authentication and encryption protocols. Together, these factors provide a detailed picture of Wi-Fi RF quality, network configuration, and overall security posture.

Real-World Performance Testing

The assessment validates the network’s ability to deliver a consistent, high-quality user experience. An advanced Speedtest process measures key performance indicators such as download and upload speeds, latency, and jitter—metrics that directly impact how users experience the network. It also evaluates quality of experience across common real-world activities, including short-form video, web browsing, content delivery, cloud applications, gaming, messaging, video conferencing, and file transfers. The assessment confirms whether the network can reliably support the demanding applications users depend on every day.
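
As a small illustration of one of those indicators, jitter is often summarized as the variation between consecutive latency samples. The sketch below uses a simple mean-absolute-difference definition with made-up samples; the exact formula used in the certification process is not specified here.

```python
# Sketch: one common way to summarize jitter from a series of latency samples
# (mean absolute difference between consecutive samples). Samples are made up.
latency_ms = [21.0, 23.5, 20.8, 26.1, 22.4]   # illustrative measurements

diffs = [abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)
print(f"jitter ~ {jitter_ms:.1f} ms")
```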

Infrastructure Readiness and Resilience

Speedtest Certified evaluates the core infrastructure that supports the network to ensure it is built for reliability and security. The assessment verifies hardware integrity, examines ISP backhaul capacity to confirm it can support multiple users simultaneously, and checks for redundant connections that maintain service during outages. It also reviews connection types (fiber, DSL, microwave) to understand impacts on latency, reliability, and scalability, as well as the availability of diverse ISP links to strengthen resilience. Finally, the assessment considers access point (AP) models and age, including whether equipment is end-of-sale or end-of-life and if it supports the latest Wi-Fi standards.

Together, these assessments ensure that any property earning Speedtest Certified wireless excellence has met rigorous standards. The methodology provides clear, independent validation of network quality and a trusted evaluation of enterprise-level deployments.

Gain Visibility into Real-World Network Performance

Measuring network performance is only the first step; translating those results into actionable intelligence is what truly drives improvement. The Speedtest Certified Digital Platform gives network owners and partners a clear view of key performance insights. Its dashboards transform on-site assessment results into detailed analyses of performance, configuration, and infrastructure readiness.

The Speedtest Certified Digital Platform ensures that certification delivers ongoing value—not just a one-time assessment. It provides the objective metrics needed to validate investments, guide improvements, and stand out in a crowded market.

A Collaborative Path to Certification

A reliable certification process should do more than check boxes; it should give property owners confidence that their network has been thoroughly evaluated and benchmarked against the highest standards. That’s the goal of Speedtest Certified, and it is why we designed the process as a true partnership.

From the outset, we work directly with your team to confirm key details and define the project scope. An accredited assessor then conducts an on-site assessment under real-world conditions, capturing the end-to-end digital experience across the property.

Once the assessment is complete, we review the results together, highlighting strengths and identifying any areas for improvement. This ensures that properties not only achieve certification but also gain a clear roadmap for continued optimization.

During the process, customers can expect:

  • Project scoping: Confirming key details like property size and type to determine the precise scope and level of investment required.
  • On-site evaluation: A professional on-site assessment that measures real-world connectivity across the entire location.
  • Actionable insights: An assessment that validates network strengths and highlights opportunities for improvement.

The certification process ensures every property walks away with more than a seal of approval: it delivers confidence, clarity, and a path to ongoing success.

Certify Your Property

Speedtest Certified sets a new industry standard for network verification and tackles a problem everyone has faced: arriving at a hotel, airport, or venue to find the internet doesn’t work as expected. The program’s comprehensive, data-driven methodology ensures that consumers can confidently choose locations with proven performance, while property owners gain validation of their network’s technical excellence and the insights needed for continuous improvement and a lasting competitive edge.

With Speedtest Certified, Ookla is extending its trusted measurement into the environments where connectivity matters most. The program creates a clear benchmark for excellence and offers property owners a powerful way to showcase their commitment to reliable, high-performing networks. 

We invite businesses to join us in defining the future of verified connectivity. Visit the Speedtest Certified website to learn how to certify your property and give customers the confidence of a truly exceptional connected experience. 

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| September 29, 2025

When Networks Fail: Lessons from Recent Outages on Building True Digital Resilience

Network outages used to be contained problems—a cell tower lost power, customers in that area lost service, and technicians restored connectivity within hours. The digital environment has since changed dramatically, with failures now spreading quickly across interconnected systems that society depends on. The April 2025 power grid collapse across Spain and Portugal left tens of millions without mobile service for up to 24 hours. The CrowdStrike incident in July 2024 grounded flights, shut down hospitals, and silenced news broadcasts globally. Google Cloud Platform outages cascaded through services like Spotify, taking entire digital ecosystems offline. What has changed is not just the scale of these failures, but how quickly problems spread through systems essential to economic and social activity.

The root cause lies in digital transformation itself. Organizations have gained tremendous capabilities through cloud services, managed providers, and interconnected networks, but they’ve also created new vulnerabilities. A single software update can now ripple through thousands of companies. A power outage in one region can disable mobile networks across multiple countries. Supply chain dependencies stretch across continents, making it nearly impossible for any single organization to control all the factors that affect its service reliability. Network resilience has shifted from an operational concern to a strategic necessity, as its absence poses systemic risks to entire economies.

In this article, we’ll examine how recent major outages reveal the true nature of network vulnerabilities, explore frameworks that leading operators and policymakers use to build resilience, and show how data from events like the Iberian blackout can guide better preparation strategies. For a deeper look at these topics with real-world examples, watch our recent webinar on-demand, “Navigating Disruption: Best Practices for Resilient Digital Infrastructure.”

What Network Resilience Really Means

Most network operators measure success with basic statistics: how often their networks stay running and how quickly they fix problems when something breaks. Traditional network monitoring focuses on straightforward metrics like uptime and availability percentages (how often networks stay up) and mean time to repair (how quickly problems are fixed), but these measurements can miss the bigger picture during major outages.
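
As a rough illustration of how those baseline numbers are usually produced, the minimal sketch below computes an availability percentage and mean time to repair from a simple outage log. The timestamps and structure are invented for illustration; real operators derive these figures from their monitoring and ticketing systems.

```python
from datetime import datetime, timedelta

# Hypothetical outage log for one reporting month: (start, end) per incident.
outages = [
    (datetime(2025, 4, 3, 2, 10),   datetime(2025, 4, 3, 2, 55)),    # 45-minute incident
    (datetime(2025, 4, 28, 11, 30), datetime(2025, 4, 28, 19, 45)),  # 8h15m incident
]

reporting_window = timedelta(days=30)
downtime = sum((end - start for start, end in outages), timedelta())

availability_pct = 100 * (1 - downtime / reporting_window)  # "how often the network stays up"
mttr = downtime / len(outages)                              # mean time to repair per incident

print(f"Availability: {availability_pct:.2f}%")  # 98.75% for this toy log
print(f"MTTR: {mttr}")                           # 4:30:00
```

Figures like these are useful baselines, but they compress very different events into a single number and say nothing about who was affected or how, which is why the complementary perspectives below matter during major incidents.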

During major incidents, understanding the true impact requires multiple perspectives that basic statistics like uptime or network availability alone can’t provide. Multiple data sources paint different pictures of the same event, and each tells a key part of the story:

  • Consumer reporting platforms like Downdetector® by Ookla® capture user-reported issues but depend on users having working connections to report problems. For a more detailed perspective on how outages impact organizations—and what’s truly at stake when services fail—see our white paper, The Cost of Downtime: A Guide to Proactive Outage Management
  • Background network scanning can reveal infrastructure failures but often provides retrospective rather than real-time insights
  • Operator dashboards track internal systems but often lack visibility into interdependent infrastructure like power grids
  • Government monitoring focuses on critical services like emergency communications, hospitals, or public safety systems, but may not capture broader economic impacts

The Iberian power grid collapse demonstrated these measurement challenges perfectly. Initial consumer reports spiked dramatically, then collapsed to near zero, not because service was restored, but because users lost the ability to report outages entirely. One operator in Portugal, MEO, maintained service longer than its competitors, an early sign of what resilience looks like in practice.

Analyzing Resilience: From Detection to Communication to Learning

Major outages often unfold faster than anyone expects, and the difference between a temporary disruption and a systemic crisis lies in how effectively resilience is analyzed. In our recent webinar on network resilience, we discussed a practical framework that breaks resilience into five stages, each one critical to keeping disruptions from escalating:

  • Detect: Spot the first signs of trouble across multiple data sources, from outage reports to operator dashboards
  • Attribute: Identify the real root cause, whether it’s an internal software bug, an underwater cable cut, or a regional power failure
  • Communicate: Share timely, accurate information with stakeholders and the public to reduce confusion
  • Remediate: Act quickly to contain damage, restore critical services, and prevent cascading failures
  • Learn: Capture lessons from the event and feed them back into playbooks, exercises, and long-term resilience planning

This framework underscores that resilience is not only about preventing outages; it’s also about building the capacity to respond, adapt, and improve when disruptions inevitably occur.

Power Dependencies: The Hidden Single Point of Failure

Power grids and mobile networks may look like separate systems, but major outages reveal how tightly connected they are. People often expect their phones to keep working in a crisis, yet service can disappear quickly once the electricity that powers cell sites and core facilities is lost.

Mobile networks are built with distributed architecture and multiple layers of redundancy, yet the April 2025 grid collapse exposed a fundamental vulnerability: dependence on external power. When regional electricity failed, mobile site failures moved in near-perfect lockstep with the power grid collapse, leaving over half of subscribers without service in affected areas.

The outage revealed dramatic differences in operator preparedness strategies and their real-world impacts:

  • Battery deployment depth: Portuguese operator MEO’s extensive battery investments created a “flattened outage curve”—service degradation began later and peaked lower than competitors, buying critical time for restoration efforts
  • Core network protection: By investing in multi-day power autonomy, MEO maintained core network stability throughout the crisis, preventing a total service collapse that would have affected all subscribers simultaneously
  • Geographic redundancy: One competitor with centralized core infrastructure and a lack of geo-redundancy and power resilience saw its entire subscriber base go offline when its main facility in Lisbon lost power
  • Backup power at cell sites: MEO’s six-hour battery capacity at most mobile sites provided meaningful service continuity, while some competitors with minimal backup power saw more immediate failures

During the crisis, roaming traffic on MEO’s network increased threefold as subscribers from failed networks automatically switched to available alternatives. MEO’s battery investments prevented total network collapse and provided backup connectivity for competitors’ customers during the extended outage.

Cascade Effects: How Failures Multiply Across Digital Infrastructure

Apps and services may seem independent, but they often share hidden connections through common cloud platforms, authentication systems, or payment providers. Single points of failure in shared infrastructure can trigger cascading outages that extend far beyond the original problem. Cloud platform outages demonstrate how interconnected modern digital services have become, with failures at major providers like Google Cloud Platform and Cloudflare affecting thousands of downstream applications and services.

Recent cloud incidents reveal several common failure patterns that amplify initial problems:

  • Management system failures: Many big cloud outages don’t come from the servers themselves going dark, but from the control systems that keep everything running. When those fail, it can knock out multiple services at once across different regions
  • When one outage triggers others: Services like Spotify and Snapchat, which rely on Google Cloud infrastructure, become unavailable during Google’s outages, even though their own systems function properly
  • Misidentifying the cause: Initial incident reports often misidentify root causes, leading to delayed or misdirected response efforts until proper analysis reveals the true source

These interconnected failures show how cloud outages can rapidly spread beyond their original source. When critical shared infrastructure fails, the impact can multiply across all the services that depend on it.

Crisis Response: Building Effective First-Hour Playbooks

The difference between manageable incidents and prolonged outages often comes down to what happens in the first hour after problems begin. MEO’s response to the Portugal power grid failure demonstrates how preparation and automated systems enable rapid crisis management even during unprecedented events.

Effective incident response relies on several key components that must be tested and refined before emergencies occur:

  • Automated alerting and dashboards: MEO declared crisis status within 23 minutes of initial power grid failure because monitoring systems provided immediate impact assessment across fixed and mobile networks
  • Regular disaster recovery exercises: Although the power grid scenario hadn’t been specifically tested, frequent tabletop and live exercises prepared response teams for rapid decision-making under pressure
  • Prioritizing critical infrastructure: Maintaining stable core infrastructure prevented a total service collapse, allowing network-level management even as individual sites lost power
  • Site-by-site damage assessment: Automated systems tracked how much backup power remained at each site, enabling strategic resource allocation during extended outages

MEO’s systematic approach during an unprecedented crisis shows that regular disaster exercises prepare teams for rapid decision-making when events turn out worse than expected. Even without testing the exact power grid failure scenario, MEO’s established processes enabled coordinated resource management under extreme pressure.

Policy Interventions That Drive Real Results

Effective resilience policies take more than regulatory requirements; they need funding mechanisms, technical standards, and international coordination to address the cross-border nature of modern digital infrastructure. Several countries, including Australia, Estonia, Finland, Colombia, and Japan, have developed comprehensive approaches that combine multiple policy tools to improve network resilience.

Australia tackled resilience with a funding-first approach, using public investment to encourage operators to harden networks and explore new technologies:

  • Direct infrastructure funding: Government programs support operators in deploying redundant infrastructure and network hardening measures that might otherwise be economically challenging
  • Research and development support: Separate funding streams promote innovation in resilience technologies, from satellite backup systems to advanced battery technologies
  • Geographic diversity requirements: Policies encourage infrastructure deployment in multiple regions to reduce single points of failure

In countries like Estonia, Finland, and Colombia, regulators have taken a mandate-driven approach, setting technical requirements operators must meet:

  • Independent power source requirements: Regulations specify minimum battery backup duration and geographic coverage for critical network components
  • Emergency power unit standards: Technical specifications ensure backup systems can actually maintain service during extended outages
  • Essential component resilience: Regulatory standards in all three countries require critical network infrastructure to withstand specific types of disruptions (like extended power loss, physical damage, or cyber incidents)

Japan has focused on disaster preparedness, investing in satellite-based backup systems and supporting technologies suited to a country prone to earthquakes and severe weather:

  • Satellite backup integration: Policies encourage operators to deploy satellite connectivity as a safeguard during large-scale disasters like earthquakes
  • Targeted technology investment: Policymakers support research into backup solutions such as low Earth orbit (LEO) satellites, drones, and ships acting as base stations and alternative power systems to ensure continuity in disaster-prone regions

These examples show that there’s no single blueprint for resilience. Funding, mandates, and targeted technology programs can all play a role. What matters is aligning policy tools with national vulnerabilities, while recognizing that outages rarely stop at borders. The strongest results come when technical standards, public investment, and innovation work together to keep networks running through disruption.

Supply Chain Resilience: Managing Dependencies You Don’t Control

Supply chain resilience has become a pressing challenge as organizations move away from running every system in-house. With digital transformation, much of that control has shifted to cloud platforms, managed service providers, and software vendors. The change brings flexibility and scale, but it also creates a web of dependencies that are hard to map in normal times and nearly impossible to control during a major outage.

Effective supply chain risk management requires systematic ways to understand and manage the dependencies created by cloud providers, managed services, and third-party software vendors:

  • Due diligence frameworks: Organizations must assess cybersecurity practices, business continuity plans, and resilience capabilities of critical suppliers before committing to rely on them
  • Contractual accountability measures: Service level agreements (SLAs) need specific resilience metrics and clear remediation requirements, not just general availability targets
  • Ongoing measurement and monitoring: Organizations should regularly assess supplier performance against agreed standards, including tests of backup procedures and incident response capabilities
  • Cascading requirements: Suppliers should demonstrate that they hold their own critical vendors to the same resilience standards, extending accountability throughout the supply chain

During the CrowdStrike incident, affected organizations couldn’t simply point to their software vendor; instead, they had to manage customer impacts even though the root cause was completely outside their control. Modern supply chain resilience requires organizations to plan for failures in dependencies they cannot directly control while maintaining clear accountability for service delivery.

Building Networks That Bend Without Breaking

Network resilience has evolved from an operational concern to a strategic imperative that affects entire economies and societies. Recent major outages, whether caused by power grid failures or software incidents, show that traditional approaches centered on individual network components often fail to capture the systemic nature of modern digital infrastructure. 

Building true resilience means preparing for failures across every layer of dependency, including power grids, software supply chains, and international infrastructure connections. The most effective strategies combine technical investments, policy frameworks, and organizational preparation. MEO’s performance during the Iberian power crisis illustrates how battery deployment and core network protection can reduce impacts, while national policies that pair funding, standards, and international coordination address challenges no single operator can solve alone.

Future resilience will depend on recognizing that no organization controls every factor affecting service reliability. Networks that bend without breaking require preparation, investment, and coordination, and recent events show these efforts can sharply reduce the human and economic costs when disruptions inevitably occur.

To explore the economic and operational stakes of major disruptions, read our white paper, The Cost of Downtime: A Guide to Proactive Outage Management. And for strategies organizations are using to improve resilience, watch our on-demand webinar, Navigating Disruption: Best Practices for Resilient Digital Infrastructure.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| August 5, 2025

How States and Local Governments Track Broadband Progress with Ookla’s ArcGIS Online Dashboards

Internet service has joined the ranks of essential utilities; it’s foundational to economic development, public services, and everyday life. But when it comes to understanding which areas have fast, reliable internet—and which don’t—many public sector teams are still relying on outdated information, especially federal coverage maps. Accurate, location-based performance data has been hard to come by, and even harder to put to use.

That’s where Ookla’s new ArcGIS-based dashboards come in. Built for use at every level of government—from state agencies to local broadband offices—these tools combine Ookla’s rolling 12-month Speedtest Intelligence data with powerful visualization capabilities inside Esri’s ArcGIS environment. Because they’re built on hosted layers that integrate directly with ArcGIS Online, agencies can access up-to-date connectivity data without manual uploads or custom integrations—saving time and fitting into workflows many teams already use. 

From there, users can explore broadband performance by state, county, or block group (a U.S. Census-defined area used for broadband funding decisions), track monthly and year-over-year changes, and compare progress across regions—all without needing deep GIS expertise.

In this article, we’ll explore what the dashboards offer, how they’re helping public-sector teams move beyond static maps, and what early adopters like Hawaii and West Virginia are already doing with them. We’ll also look at how the dashboards are evolving to support mobile data and more flexible reporting options.

Why Better Data Visualization Matters for Broadband Planning

Public sector broadband teams—whether overseeing a small town or an entire state—face growing pressure to show where funding is needed and where progress is happening. Speedtest Intelligence provides rich, real-world performance data, and the ArcGIS dashboards help teams tap into that depth more efficiently. By simplifying access and visualization, the dashboards make it easier to explore trends, spot gaps, and communicate insights across geographies and time periods. 

Here’s a closer look at how the dashboards are structured and what they include:

  • Hosted in ArcGIS Online: Dashboards are delivered through ArcGIS Online and built on Ookla’s rolling 12-month median Speedtest data
  • Core metrics included: Download and upload speeds, latency, jitter, test counts, and device counts
  • Flexible geographic views: Users can explore data across state, county, and block group geographies, depending on their needs
  • Updated monthly: Dashboards refresh each month, giving teams a current view of performance trends
  • Custom filtering and breakdowns: Visualizations support comparisons and speed breakouts by percentile (e.g., 90th, median, 10th percentile download and upload speeds)
  • FCC threshold analysis: Clients can easily identify which areas meet (or fall short of) the FCC’s 100/20 Mbps benchmark for download and upload speeds

Together, these tools help government teams monitor broadband conditions, identify gaps, and support data-driven decisions for stakeholders and legislators.
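
Because the data arrives as hosted feature layers, teams with GIS or data staff can also pull it straight into their own analyses using the ArcGIS API for Python. The sketch below is illustrative only: the item ID, layer index, and field names are placeholders rather than Ookla’s actual schema, and the sign-in details would be your organization’s own.

```python
from arcgis.gis import GIS

# Sign in to the ArcGIS Online organization the hosted layers are shared with.
gis = GIS("https://www.arcgis.com", "your_username", "your_password")

# Placeholder item ID standing in for the hosted Speedtest layer.
item = gis.content.get("0123456789abcdef0123456789abcdef")
layer = item.layers[0]

# Pull block-group rows for one county into a dataframe.
# Field names (county_fips, median_dl_mbps, ...) are illustrative only.
features = layer.query(
    where="county_fips = '15001'",
    out_fields="block_group, median_dl_mbps, median_ul_mbps, tests",
)
df = features.sdf
print(df.sort_values("median_dl_mbps").head())
```

The same query pattern works for county- or state-level layers, which makes it straightforward to feed the data into existing reporting pipelines alongside the dashboards themselves.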

Drilling Into the Details: Four Dashboards That Work Together

Broadband offices and policymakers need more than just numbers—they need clarity around where things are working, where they’re not, and how performance aligns with goals that matter for their communities. That requires a holistic view of how connectivity varies across regions, how it changes over time, and whether it meets the goals tied to funding and planning.

That’s why the ArcGIS dashboards include four focused views that help teams across all levels of government track progress, compare regions, and evaluate service against meaningful benchmarks.

  • Monthly Trends Dashboard: View median speeds, latency, jitter, test counts, and device counts across a rolling 12-month window. Users can filter by state, county, or block group.
  • Year-over-Year Change Dashboard: Compare current performance to previous years to see where speeds, latency, or jitter have improved—or declined. Data is broken out by geography and metric.
  • State Comparison Dashboard: Benchmark one state’s connectivity against others in the same region, division, or across the U.S. This view is especially useful for supporting economic development and funding decisions.
  • FCC Comparison Dashboard: Overlay Ookla Speedtest data with FCC broadband coverage data to flag areas where federal maps may overstate service availability.

Each view is designed to answer specific questions quickly: How are we trending? How do we compare to other states? And where do we see discrepancies that might affect funding or reporting? These supporting dashboards turn complex performance data into actionable insights for states, cities, and counties alike.

Spotting the Gaps: Where FCC Maps and Real-World Performance Don’t Align

Accurate broadband coverage data is critical when millions in funding are on the line. For broadband offices tasked with allocating those dollars, having trusted information is essential, but government maps don’t always reflect what people are actually experiencing. One of the most powerful features of Ookla’s ArcGIS dashboards is their ability to highlight the gaps between reported coverage and real-world performance in a clear, actionable way.

The dashboards make it easy to spot those discrepancies and dig into the details at a granular level:

  • Hexagon-based overlays: Dashboards use hex-shaped grids to show which areas meet or fall below the FCC’s 100/20 Mbps threshold
  • Drill-down insights: Users can click into individual hexes to view Speedtest performance and test counts
  • Custom filters: Teams can isolate areas by number of tests, performance thresholds, or number of Broadband Serviceable Locations (BSLs)
  • Flagging discrepancies: Differences between reported coverage and actual Speedtest results can help identify areas for ISP follow-up or further investigation
  • Exportable data: Insights can be shared in reports or integrated into state systems for further analysis

FCC broadband data still plays a central role in determining where federal funds are allocated—but it’s not always accurate. By layering Speedtest Intelligence data on top of government coverage maps, the dashboards give broadband teams a clearer way to validate service claims and advocate for resources where they’re truly needed.
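
The comparison behind that overlay is a simple rule: an area counts as meeting the benchmark only when both its median download and median upload clear 100 Mbps and 20 Mbps, respectively. Here is a minimal sketch of that check, with invented hex IDs and column names standing in for the real data:

```python
import pandas as pd

# Illustrative per-hex medians; real values would come from the hosted layers.
hexes = pd.DataFrame({
    "hex_id":               ["a1", "a2", "a3"],
    "median_dl_mbps":       [142.0, 96.5, 18.7],
    "median_ul_mbps":       [24.3, 31.0, 4.2],
    "fcc_reported_served":  [True, True, True],
})

# Both directions must clear the FCC's 100/20 Mbps benchmark.
hexes["meets_100_20"] = (hexes["median_dl_mbps"] >= 100) & (hexes["median_ul_mbps"] >= 20)

# Flag areas where federal maps report service but measured performance falls short.
discrepancies = hexes[hexes["fcc_reported_served"] & ~hexes["meets_100_20"]]
print(discrepancies[["hex_id", "median_dl_mbps", "median_ul_mbps"]])
```

Hexes that land in the discrepancy set are exactly the candidates for ISP follow-up or further investigation described above.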

Early Adoption in the Real World

Hawaii and West Virginia are already testing these dashboards, marking the start of broader adoption. For states working to identify coverage gaps, benchmark performance, and clearly communicate broadband progress, these tools make it easier to explore the data, surface key insights, and share results with the people who need to see them.

Feedback from Hawaii and West Virginia is already shaping product development, with updates underway to support mobile datasets and more flexible, exportable reporting options:

  • Fixed data available now: Fixed broadband data dashboards are already in use; mobile versions are in development
  • Geography-based filtering: Dashboards are delivered pre-filtered to each state’s geography
  • No advanced GIS skills needed: Designed for ease of use by teams without dedicated GIS staff
  • Public-facing options: Results can be embedded into websites or shared with lawmakers and stakeholders
  • Exportable reports coming soon: PDF and Word formats can help teams share insights without needing a full ArcGIS license

The dashboards are powered by Speedtest’s hosted layers—monthly-updated datasets that plug directly into ArcGIS Online. These hosted layers make it easier for agencies to access and visualize Speedtest data without dealing with manual uploads or complex integrations. For teams already using ArcGIS to track broadband or demographic data, it’s a faster path to meaningful insight.

Looking Ahead

Access to accurate, actionable broadband data has always been a challenge. With these ArcGIS dashboards, states and local governments finally have a clear view of connectivity across their communities, grounded in real-world performance data they can explore and use.

Whether the goal is identifying underserved areas, tracking progress over time, or communicating results to legislators and stakeholders, Ookla’s ArcGIS dashboards give teams the tools to act with confidence.

As more states adopt the platform and new capabilities roll out, these dashboards are quickly becoming a go-to resource for building broadband strategies that reflect what’s really happening on the ground. The dashboards were also recently featured during the 2025 Esri UC Plenary Session, underscoring their growing role in public-sector broadband planning (Esri account required to view).

To learn how our ArcGIS dashboards can support your broadband planning efforts—whether you’re running a statewide program or managing broadband efforts in a single community—reach out to our team.

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| July 31, 2025

Rethinking Indoor Connectivity: Why It Matters More Than Ever

Mobile networks have seen major upgrades in recent years—from 5G rollouts to expanded spectrum to denser infrastructure—but indoor coverage is still a major weak spot. Whether in office towers, schools, hospitals, or transportation hubs, buildings often block or weaken cellular signals, creating a frustrating experience for users and a missed opportunity for mobile network operators (MNOs). In many cases, indoor coverage gaps pose more than an inconvenience; they create real risks for public safety and limit economic potential for property owners.

This problem is nothing new, but in many developed markets, it’s actually getting worse. Our earlier research explored how a combination of higher frequency 5G spectrum (which struggles to penetrate buildings), newer construction materials like low-emissivity (low-E) glass, and the sunsetting of legacy 2G and 3G networks has deepened indoor coverage challenges. Meanwhile, mobile data usage continues to concentrate indoors, and networks built to prioritize outdoor coverage often don’t deliver the performance users now expect inside buildings.

In this article, we’ll break down the key challenges facing in-building mobile coverage, explore solutions, and show how Ookla’s data can help improve outcomes for consumers, operators, and property owners. For a deeper dive into these topics, watch our recent webinar on-demand, “Reimagining In-Building Cellular: Closing the Coverage Gap.”

The Data Gap: What Regulators Miss About Indoor Coverage

Accurate data is the foundation of good policy. But when it comes to indoor connectivity, many public maps and benchmarks focus on outdoor or predicted coverage and ignore what users actually experience once they step inside. 

Higher-frequency 5G spectrum, signal-blocking materials like low-E glass, and the shutdown of legacy networks have all made reliable in-building coverage harder to achieve in many developed markets.

Data from Ookla’s Cell Analytics platform highlights the scale of the problem. In cities like London and Paris, building-level data reveals large clusters of poor indoor performance, even in areas that draw large numbers of people and appear well-served on public maps. In many cases, users experience degraded 5G coverage and fallback to low-band spectrum that offers limited capacity, leading to a poorer quality of experience. This disconnect between perception and reality underscores several important points:

  • Traditional coverage maps often present an overly optimistic view of network performance—especially indoors—based on computer-modeled predictions rather than the actual signal conditions experienced by end users.
  • Crowdsourced data reveals large pockets of poor in-building coverage in major global cities.
  • These blind spots can lead to misaligned investments and missed opportunities to improve service where it’s most needed.
  • Without building-level insights, policymakers and operators lack the visibility required to close the indoor coverage gap.

Better indoor outcomes start with a more accurate understanding of what users actually experience. Without that understanding, it’s difficult to allocate funding or resources where they’ll make a difference.

Why Indoor Coverage is a Public Safety Issue

Dropped calls and dead zones inside buildings are more than a nuisance—they can be dangerous. In emergencies, people expect to reach help from anywhere, but many buildings still lack the coverage needed for reliable 911 (or equivalent) service. And as emergency response operations increasingly rely on mobile networks and broadband applications, buildings without reliable service could put lives at risk.

When people in distress cannot quickly reach emergency services, every second counts. A one-minute delay in dispatching help can increase cardiac arrest mortality rates by 1–2% and raise fire damage by up to 20%. FCC modeling during the E911 modernization effort found that improving vertical (z-axis) location accuracy—made possible partly through better indoor mobile coverage—could save thousands of lives each year.

Here’s why indoor connectivity matters for public safety—and what’s standing in the way:

  • Indoor coverage gaps can delay or prevent emergency calls. People expect to have mobile service everywhere—but many buildings don’t deliver it when it matters most.
  • First responders increasingly depend on mobile broadband—like apps, video, and real-time data—which all require strong indoor cellular coverage to work reliably.
  • Buildings that lack coverage can disrupt first responder communication and coordination.
  • Fire and building codes in many areas require indoor coverage for public safety radios (not general mobile service), but enforcement varies widely.

Better indoor coverage also helps 911 responders find people faster—especially in multi-story buildings. Today’s emergency systems use more advanced location technology, like device-based hybrid (DBH) methods, which combine GPS, Wi-Fi, and barometric sensors. These signals can now estimate not just your location on a map, but also what floor you’re on. As of April 2025, the FCC requires carriers to provide this vertical accuracy—within about 10 feet (or one floor)—for 80% of wireless 911 calls.

That level of precision can save critical time. If first responders know exactly where to go, they can reach people faster—often shaving a full minute off response times. In serious emergencies like cardiac arrests, where every second matters, that minute could save a life.

In-building coverage should be treated with the same urgency as other public safety infrastructure. Lives may depend on the ability to communicate from inside a building—whether by call, text, or other mobile tools.

New Models for Indoor Connectivity: The Rise of Shared Infrastructure

A new funding model is taking hold across the industry, with more venue owners now willing to foot the bill for in-building deployments as part of broader efforts to improve tenant experiences and stay competitive. With operators focused on outdoor network coverage and typically investing in custom in-building solutions only for the highest-profile venues (like stadiums), many building owners are realizing they’ll need to take the lead if they want better indoor coverage.

One solution gaining traction is the neutral host model, where a single shared infrastructure supports multiple mobile operators within a building. Instead of each carrier deploying its own system, a neutral host handles the design, installation, and operation—reducing cost and complexity for everyone involved. Key benefits of shared deployments include:

  • Neutral hosts design, build, and operate infrastructure that supports multiple MNOs through a single system.
  • Shared systems eliminate the inefficiencies (physical equipment and cost duplications) of carrier-by-carrier installations.
  • The model is particularly effective in transit systems, stadiums, airports, and other high-traffic venues where all operators need coverage and there are significant space constraints.
  • Participation often hinges on securing an anchor tenant—an MNO willing to be the first onboard.

Neutral host systems reduce complexity while improving results for everyone involved. As demand grows, expect shared infrastructure to become the norm, not the exception.

The Building Owner Equation: What’s the ROI?

Even when building owners recognize the value of strong indoor connectivity, calculating the return on investment isn’t always straightforward. While features like upgraded lobbies or new HVAC systems have clear costs and resale value, cellular deployments can feel abstract by comparison.

Still, connectivity is increasingly a requirement for tenants—not a perk. With hybrid work schedules, hot-desking, and mobile-first workflows, workers now expect reliable coverage throughout the building—from shared lounges to meeting rooms to wherever they can take a call or join a video meeting. If a space can’t support consistent connectivity across both cellular and Wi-Fi, it becomes harder to attract and retain tenants.

As connectivity becomes a baseline expectation in modern workspaces, building owners face growing pressure to deliver. Here’s what that means in practice:

  • Tenants expect strong indoor coverage (both cellular and Wi-Fi) as part of a modern workspace.
  • Poor connectivity can influence leasing decisions and renewal rates.
  • Owners of mid-sized or lower-profile buildings are often underserved by MNOs—and may need to take the lead on providing connectivity.
  • Without benchmarks or transparency, it’s hard to know where a building stands—or how to improve.

Reliable connectivity increasingly factors into occupancy, retention, and tenant satisfaction. For owners, strong mobile coverage is becoming a basic competitive differentiator.

Policy Can Make or Break Progress

Technology alone won’t fix the indoor coverage problem. Regulation and planning play a critical role—and some countries are showing what works. Leading global markets like Singapore, South Korea, and Hong Kong have implemented policies that require mobile-ready infrastructure in new buildings as a condition of zoning approval. This ensures operators have access to deploy equipment without facing prohibitive delays or costs.

South Korea offers one of the most comprehensive policy approaches to indoor mobile coverage anywhere in the world. New building codes require in-building mobile infrastructure—like risers, conduit, power, and equipment rooms—for a wide range of structures, including high-rise buildings (16 floors or taller), large buildings over 1,000 square meters, any building with underground levels, apartment complexes with 500 or more units, and all subway stations.

The Korean government also sets clear coverage requirements. Every mobile operator must provide service at all subway stations and high-speed rail hubs using mid-band 3.5 GHz spectrum. To make sure performance matches expectations, public scorecards put serious weight on indoor results: about half of the testing in South Korea’s national 5G Quality Evaluation takes place inside buildings like malls, hospitals, and campuses. Carriers that underperform can face financial penalties and public callouts. Together, these policies ensure strong indoor coverage is built in from the start—and that operators are held accountable for delivering it.

That kind of clear policy framework offers a model for other markets to follow. For countries like the U.S. and those across Europe, there are several clear policy opportunities to help close the indoor coverage gap:

  • Require cellular-ready infrastructure (ducting, risers, equipment space) in building codes.
  • Expedite permitting for indoor mobile deployments in public buildings like schools and hospitals.
  • Encourage government facilities to adopt 5G and in-building solutions as part of national strategy.
  • Develop transparent coverage certification or ratings to drive competition and investment.
  • Support more flexible use of spectrum for shared or private indoor deployments.

The bottom line is that indoor coverage can’t be an afterthought in policy. Clear requirements and streamlined permitting are essential for creating long-term change.

How Ookla Is Helping Improve Indoor Connectivity

Ookla supports better in-building connectivity through a powerful set of tools that deliver actionable, real-world insights. These solutions help operators, regulators, and property owners understand performance at the building level—revealing where indoor coverage falls short and where investment is most needed. Here’s how each group is using Ookla’s data to drive better outcomes:

  • Operators use Cell Analytics and Speedtest Intelligence® to identify coverage gaps, prioritize in-building upgrades, optimize spectrum deployment, and validate improvements.
  • Regulators and policymakers rely on Ookla data to support evidence-based planning, improve public reporting, and track progress over time.
  • Building owners use Speedtest results and building-level insights to assess tenant experiences, benchmark performance, and guide connectivity investments.

Ookla’s insights into indoor connectivity continue to play a key role in helping the industry move beyond outdated assumptions and improve mobile performance where people really need it.

Looking Ahead: Closing the Indoor Coverage Gap

Indoor coverage is no longer a secondary concern. As more mobile activity happens inside buildings, strong indoor performance is now essential—for everything from emergency response to tenant satisfaction. Yet this critical area still suffers from outdated assumptions, inconsistent data, and underinvestment.

Fixing the problem requires a coordinated approach—one that brings together network operators, property owners, infrastructure providers, policymakers, and data partners. With better visibility through tools like Cell Analytics and Speedtest Intelligence, there’s a real opportunity to target improvements where indoor connectivity continues to fall short.

To explore these topics in more detail, watch our full webinar on-demand. And stay tuned—more in-building connectivity research and insights are coming soon!

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.

| July 8, 2025

Airport Internet Isn’t Always Ready for Takeoff: A Global Look at Wi-Fi and Mobile Performance

Whether you’re streaming a show before boarding or trying to jump on a quick video call, airport internet can make or break your travel experience. But how well do major airports actually deliver the speeds travelers need—especially when thousands of devices are competing for signal at once?

To find out, we analyzed Speedtest® Intelligence data from 48 major airports around the world. We compared median mobile and Wi-Fi speeds at each airport against the FCC’s fixed broadband speed benchmark of 100 Mbps download and 20 Mbps upload—a widely recognized standard for high-quality internet. It’s the baseline we used to assess which airports are keeping up with modern connectivity demands—and which ones aren’t.

While this article highlights a few key takeaways from our analysis, the full report includes complete results for all 48 airports—along with regional comparisons, a look at the real-world challenges of airport connectivity, and insights from operators like Boingo on how networks are being designed and optimized.

Key Takeaways from the Report

  • Only three airports met the FCC benchmark on both Wi-Fi and mobile: Phoenix Sky Harbor (U.S.), Hangzhou Xiaoshan (China), and Toronto Pearson (Canada) each delivered median speeds of at least 100 Mbps download and 20 Mbps upload on both Wi-Fi and mobile. 
  • More airports met the benchmark on mobile than Wi-Fi: While 21 airports qualified on mobile, only 12 reached the same threshold on Wi-Fi—highlighting a performance gap between the two connection types.
  • Performance varied significantly by region—and even within regions: No airports in Europe or Latin America met the benchmark on either connection type, while many in North America and China did—especially on mobile. But even in high-performing regions, results weren’t guaranteed, reflecting real differences in infrastructure, spectrum use, and investment.
  • Some airports delivered excellent speeds—others, not even close: Istanbul topped 600 Mbps on mobile, and San Francisco pushed 200 Mbps on Wi‑Fi. Mexico City, on the other hand, fell below 20 Mbps on both—reminding travelers that airport internet quality can vary wildly across airports.
  • 5G performance varied widely across airports: Some global airports, like Istanbul, delivered median 5G download speeds approaching 1 Gbps. Others—like Indira Gandhi International in Delhi—barely cleared 20 Mbps, illustrating just how uneven 5G performance can be from airport to airport.

These findings only scratch the surface. The complete report explores what contributes to performance differences across airports—including structural and environmental challenges, spectrum congestion, and infrastructure limitations. It also includes full tables showing Wi-Fi and mobile speeds for all 48 airports, along with whether each one hit the benchmark of 100/20 Mbps. 

Below, we’ve included a preview of two tables from the report, highlighting a handful of airports that recorded some of the highest median download speeds in Q1 2025. Access the full report for complete results and deeper analysis.

Sample – Airport Wi-Fi Performance: Median Download Speed (Q1 2025)

Sample – Airport Mobile Performance: Median Download Speed (Q1 2025)

Ookla retains ownership of this article including all of the intellectual property rights, data, content graphs and analysis. This article may not be quoted, reproduced, distributed or published for any commercial purpose without prior consent. Members of the press and others using the findings in this article for non-commercial purposes are welcome to publicly share and link to report information with attribution to Ookla.