And it’s critically important, since some of the wisest policy decisions can only be made based on what infrastructure truly exists for consumer use in all parts of our country – if determining that is actually possible, or if we’ll have to settle for “close enough.” NTCA has a group of members, with far more insight than I have, working together to explore possible, accurate and workable solutions. That is why I wanted to share some insights and perspective on this topic from NTCA’s Senior Vice President of Industry Affairs and Business Development below:
WHY IT’S IMPORTANT TO GET BROADBAND MAPS RIGHT – AND WHAT TO DO GIVEN THAT THE MAPS WILL ACTUALLY NEVER, EVER BE RIGHT
Everyone hates broadband maps. Everyone mistrusts or distrusts them. They show services where everyone in a neighborhood knows services aren’t available. They show speeds that customers and competitors alike know can’t be delivered consistently. Why are we at this point where everyone feels that nothing is to be believed? There are several reasons.
First, attempts at developing better maps have been sporadic and scattered. Nearly a decade ago, Congress gave the U.S. Department of Commerce funds to go work with states on mapping – and in many states, this process reportedly went relatively well (although in others, concerns were raised as to completeness and accuracy). The collected, aggregated result was the National Broadband Map. Even today, despite the fact that this map hasn’t been updated since 2014 (and was even finally and formally “decommissioned” late last year), some stakeholders still refer to the National Broadband Map in talking about using information to make decisions about where to direct broadband funding. Those that do so might want to log onto their America Online account and Ask Jeeves for a different search result, however, if they’re looking for more current information on broadband availability.
Since then, some states have continued individualized mapping efforts, and the FCC meanwhile has taken on the responsibility of using broadband providers’ “Form 477” reports to develop an interactive broadband map of its own. The Department of Commerce even got a new infusion of funds to help coordinate development of better mapping information last year. In short, there have been lots of cooks in lots of kitchens all trying to cook up a comprehensive set of mapping data – each, however, with its own set of shortcomings and none of which necessarily corresponds to or coordinates with efforts by other entities.
Second, even as such efforts to map broadband better have been made by the FCC, the Department of Commerce, and various states, the fact is that, in nearly every case, the data ultimately must come from service providers themselves – and, from what we can tell, no one is verifying that data beyond perhaps a cursory review for internal inconsistency or some oddity in reporting between periods. Thus, if Fixed Broadband Provider X consistently reports 25 Mbps service to a crop circle that is two-thirds of the geography of the State of Washington – congrats, you’re on the map, Provider X! If Mobile Broadband Provider Y reports that it has 4G LTE service available nearly everywhere in Minnesota – congrats, you’re on the map, Provider Y! There may be cases in which a state or other entity is able to follow up to confirm independently whether the reports match reality on the ground, but again, to our knowledge, those cases are few and far between at best. So, in the end, the maps say essentially whatever the providers say.
Third, the standards for reporting on mapping vary and make it hard to get at actual availability. Let’s take the FCC’s Form 477 data, for example. To be clear, the FCC’s data probably represent the best, most current data available today on broadband availability. But the standards for reporting hardly inspire outright confidence in the results. Here is the instruction on fixed broadband deployment reporting:
For purposes of this form, fixed broadband connections are available in a census block if the provider does, or could, within a service interval that is typical for that type of connection—that is, without an extraordinary commitment of resources—provision two-way data transmission to and from the Internet with advertised speeds exceeding 200 kbps in at least one direction to end-user premises in the census block.
Let’s unpack that instruction step-by-step. A provider can report fixed broadband as being available throughout a census block if:

1. it does, or could, provision service to at least one end-user premises in the census block;
2. it could do so within a service interval that is typical for that type of connection – that is, without an extraordinary commitment of resources; and
3. the service is advertised at speeds exceeding 200 kbps in at least one direction.
Put another way, if a provider advertises the service and can turn up service in 2 weeks or so to just one location in a geography that could range from just under an acre to several square miles, that’s good enough to be “available” on the Form 477 data and the ensuing map. Notably, it doesn’t matter if the service actually is realized at the advertised speed by the customer once service is turned up – that might be a problem in terms of false advertising that gets the provider in hot water with the Federal Trade Commission or a state attorney general, but when it comes to FCC reporting, that doesn’t seem to be a problem on the face of the reporting instructions. Similar problems, of course, have been reported in terms of advertised versus actually delivered speeds in the mobility context, with members of Congress on both sides of the aisle railing against the alleged inaccuracy of Form 477 data and the contrast between marketing claims and “facts on the ground.”
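To make the looseness of that test concrete, here is a minimal sketch of the reporting logic as code. The data model and field names are entirely hypothetical – this is not the actual Form 477 schema – but the test itself mirrors the instruction quoted above: one self-reported, advertised-speed claim to one location marks the whole block as served.

```python
from dataclasses import dataclass

# Hypothetical data model; field names are illustrative, not the real
# Form 477 schema.
@dataclass
class ProviderReport:
    census_block: str
    advertised_kbps: int           # advertised speed, best direction
    can_provision_typically: bool  # provider's own judgment that service
                                   # could be turned up without an
                                   # "extraordinary commitment of resources"

def block_is_served(reports: list[ProviderReport], block: str) -> bool:
    """A block counts as 'served' if ANY provider claims it could deliver
    advertised speeds exceeding 200 kbps to AT LEAST ONE location there.
    Actual delivered speed, and how many locations are actually served,
    never enter the test."""
    return any(
        r.census_block == block
        and r.advertised_kbps > 200
        and r.can_provision_typically
        for r in reports
    )

# One optimistic self-report is enough to paint the whole block as served:
reports = [ProviderReport("530330001001", advertised_kbps=25_000,
                          can_provision_typically=True)]
print(block_is_served(reports, "530330001001"))  # True
```

Note what is absent from the function: no verification step, no count of premises actually served, no check that the advertised speed is ever achieved. That absence is precisely the problem the article describes.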
Fourth, it’s important to note that granularity does not equal accuracy. What’s meant by this? Well, lots of people often focus on the first of the 3 factors noted above in the fixed broadband deployment reporting instructions – the fact that service to only one location in a census block results in an entire geography being rendered as “served.” These entities suggest that if this could just be fixed, this would solve everything. While an understandable source of frustration, a singular focus on this factor is tantamount to patching a pipe with multiple leaks in only one place. Getting down to the road segment or an address or a geocode will certainly help in reducing potential “false positives” due specifically to geographic “overstatement,” but it won’t solve any of the other concerns noted above – that providers are self-reporting coverage without any in-depth independent validation, that providers can report places where they could in theory turn up service but actually haven’t done so, and that they might be advertising and thus reporting speeds that they can’t deliver.
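The “granularity does not equal accuracy” point can be illustrated with a toy calculation. All the numbers here are made up for illustration: assume one provider actually serves 1 of 40 locations in a block.

```python
# Toy comparison of block-level vs. address-level reporting.
# Assumption (made up): the provider actually serves 1 of 40 locations.
locations_in_block = 40
actually_served_locations = 1

# Block-level rule: one served location marks the whole block as served.
shown_served_block_level = locations_in_block
# Address-level rule: only the locations the provider reports are shown.
shown_served_address_level = actually_served_locations

print(shown_served_block_level - shown_served_address_level)  # 39
# 39 fewer false positives from geographic overstatement -- but if the
# provider's self-report itself overstates speed or coverage, address-level
# data inherits that error unchanged. Granularity is not accuracy.
```

Greater granularity fixes only the geographic-overstatement leak; the self-reporting, verification, and advertised-speed leaks remain at any resolution.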
Fifth, the reality is that every map is going to be outdated the moment it’s published. Even if maps were turned around at remarkable speed after reporting from providers or gathering by other means, the best maps would still almost certainly be months behind by the time they are released to the public. Providers keep building – and retreating. Networks get deployed in areas where they previously weren’t, speeds get upgraded on networks already in place, and providers also at times go out of business or exit areas they once served. Thus, any map is destined for temporal inaccuracy in places no matter what we do.
What’s the answer then?
Well, this isn’t to say that we should throw up our hands and do nothing. For example, we absolutely should try to get more granular in terms of data. Allowing customers to look up their address in a database and get more accurate information about broadband availability is extremely important. The issue, however, is to strike a balance – as noted above, getting granular down to a microscopic level of coverage still won’t solve all concerns about accurately identifying availability. Especially considering all of the other concerns, there is almost certainly a diminishing return on getting too granular in terms of burdens on providers and even the agencies that need to administer this information.
But, then, what to do if granularity alone won’t instill unshakable confidence in where broadband is and is not? The process for improvement should start from scoping the problems to solve – and here’s where it gets tricky. What are the other problems? As noted above, they include at least: (1) potential mismatches between advertised and actual speeds; (2) wiggle room between what in theory could be delivered sometime soon and what is available now; (3) multiple cooks (federal and state agencies) in the kitchen; (4) a lack of independent verification of self-reported data; and (5) timing concerns in terms of what’s shown versus what’s happened since the data were gathered.
Unfortunately, we aren’t going to solve many of these problems. If we’re being honest, we simply aren’t. To take the issues noted above one at a time: (1) while the threat of false advertising enforcement may deter providers, no one is ever going to be verifying ex ante the extent to which a broadband provider’s advertised speed on a map matches its actual speeds; (3) practically speaking, no single agency is going to be given the “keys to the kingdom” on broadband mapping anytime soon; (4) related to the first point, there isn’t enough money for independent verification of every data point every provider submits, or even perhaps a significant sampling of that data; and (5) there’s no map that’s ever going to capture “real time” what broadband networks are being built or upgraded (or decommissioned). (You’ll notice this list skips the second issue above – the wiggle room that exists between what could be delivered theoretically in two weeks and what is actually available right now. More thought is needed on how to approach this issue, striking a balance between an actual ability to deliver service quickly and a subjective judgment about the possibility of delivering service.)
This then charts out what might be the best option among many bad ones – essentially, we should get the maps as good as we can, and then make sure someone vets them at least before they are used to make policy decisions. What does this mean in practice? It certainly means moving toward a more granular map that gives consumers, stakeholders, and policymakers a better depiction of where broadband is and is not, all while recognizing that the map still won’t be perfect. This means that we can and should improve on census block-based reporting, but we shouldn’t go so far as to impose massive new burdens on providers or the agencies that track this stuff because the map still won’t be entirely right. In the end, however, it also means we absolutely need a challenge process. Assuming we’re not going to have a single agency gather all the data, assuming we’re not going to have that agency vet all the data before it’s used to make decisions on things like where funding should and should not go, and assuming we’re not going to magically develop a real-time mapping capability for customers being turned up with better service (or losing service), there is no way to make good decisions without a challenge process.
In short, we need to treat the improved map as informative but not dispositive. We need agencies to use something like the Form 477 data (or their own data sources) only as a baseline for funding decisions, but then to permit challenges to show where the mapping baseline data are inaccurate and to adjust the data to reflect actual conditions on the ground. Without a challenge process, we’re always going to be placing blind and absolute trust in imperfect and imprecise tools. Without a challenge process, we’re always going to be at risk of funding networks in areas where broadband already exists and denying funding for broadband in areas where broadband is absent. If we want to do better by our consumers and if we want to make more effective use of resources to further broadband deployment, the combination of a better (but still imperfect) map and a meaningful challenge process offers the only roadmap for real success. It’s time to stop pretending that a perfect broadband map is achievable, and to start thinking about practical solutions – like a combination of improved maps and challenge processes – that will work in and better reflect the real world.
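The “baseline plus challenges” idea described above can be sketched as a simple data pipeline: start from the self-reported baseline, then override only the entries where a validated challenge proved the baseline wrong. The block identifiers and data shapes here are invented for illustration.

```python
# Hedged sketch of a challenge process as a data adjustment step.
# Baseline: self-reported availability (block -> served?); IDs are made up.
baseline = {"27001": True, "27002": True, "27003": False}

# Validated challenges (e.g., from consumers or competitors) override the
# baseline where the self-report was shown to be wrong on the ground.
validated_challenges = {"27002": False}  # reported served, proven unserved

# The baseline is informative but not dispositive: challenges win.
adjusted = {**baseline, **validated_challenges}

# Funding decisions run against the adjusted data, not the raw self-reports.
eligible_for_funding = [b for b, served in adjusted.items() if not served]
print(sorted(eligible_for_funding))  # ['27002', '27003']
```

The design point is that the raw map is never consumed directly for funding decisions; it only seeds a record that validated, on-the-ground evidence can correct.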