Data Congestion and New Pricing Models

Every network operator, with the exception of Sprint/Clearwire, is moving to usage-based pricing models for broadband data and away from all-you-can-eat data. If you have an iPhone, you still have access to unlimited data, but if you opt to tether it to a notebook or other external device, your all-you-can-eat pricing goes away. Why is this happening?

You can blame this on data hogs, the 3 to 4% of the wireless broadband population that accounts for nearly 50% of all wireless broadband usage, but it is also because wireless bandwidth is shared bandwidth. Unless it is well managed, data hogs end up with most of the bandwidth and the rest of us with very little. Data usage has increased more than 5,000% over the last two years and there is no sign of a let-up. A large percentage of wireless broadband traffic (some say up to 40%) is video, which is increasingly popular and requires a lot of bandwidth.

On a wireless network, bandwidth is measured on a cell site or cell sector basis. Typical cell sites are built to serve three 120-degree areas. Each of these areas is referred to as a sector, and the total bandwidth available within each sector depends on the technology used (2.5G, 3G, and now 4G), the amount of spectrum available for that cell sector, and the backhaul capacity available to carry data between the cell site and the network core. Other factors affect the total data speed available as well, including how far the device is from the cell tower: the farther away a device is, out toward what is called the cell edge, the less data speed is available to it.
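To make the bottleneck idea concrete, here is a toy Python sketch. The three-sector split comes from the description above, but the function name and the capacity figures are my own illustrative inventions, not real network numbers: a sector's usable capacity is whichever is smaller, its radio capacity or its share of the site's backhaul.

```python
def sector_capacity_mbps(radio_mbps, site_backhaul_mbps, sectors=3):
    """Effective per-sector capacity is set by the tighter of two
    bottlenecks: the over-the-air capacity of the sector's radio, or
    this sector's share of the backhaul serving the whole cell site."""
    return min(radio_mbps, site_backhaul_mbps / sectors)

# A sector radio good for 20 Mbps behind a 45 Mbps site backhaul
# shared three ways is effectively a 15 Mbps sector.
sector_capacity_mbps(20.0, 45.0)
```

Adding spectrum helps only the first term; upgrading backhaul helps only the second, which is why operators have to invest in both.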

There are a number of ways to manage the available bandwidth within a cell sector. In early systems, each customer had full access to all of the bandwidth in a given sector. With newer systems, customers can be assigned a peak allowable data speed, which, in essence, is a way to cap their data usage since less data can be downloaded per minute at 1 Mbps than at 5 Mbps. In addition to employing newer technologies to increase data speeds and capacities, network operators are also using new antenna technologies that “aim” the signal toward a specific user, as well as multiple-antenna techniques (MIMO) that gain some additional capacity and increase the available speed at the edge of the cell.
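The arithmetic behind using a speed cap as a usage cap is simple; a quick Python sketch (illustrative only):

```python
def megabytes_per_minute(speed_mbps):
    """How much data a capped link can move in one minute:
    megabits/second * 60 seconds, divided by 8 bits per byte."""
    return speed_mbps * 60 / 8

# A 1 Mbps cap allows 7.5 MB per minute; a 5 Mbps cap allows 37.5 MB.
# The lower the peak speed, the less data a user can pull down over time.
megabytes_per_minute(1.0)   # 7.5
megabytes_per_minute(5.0)   # 37.5
```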

The bottom line is that while new technologies, new antenna types, and things such as data compression can increase the total available bandwidth in a given cell sector, the demand for data services is growing faster than the technology can provide. The trick, then, is to use all of the technology tools that are available along with management tools, including pricing. If we look at the cell sector as a service area and everyone has an unlimited data plan, the more people who enter that sector and request service, the slower the system will be for everyone. Generally, if everyone using broadband data within a cell sector is taking care of email and surfing the web, the resultant speed reductions won’t be very noticeable. However, if several of the users are receiving and/or sending large amounts of data or video, the speed decreases will be noticeable to everyone.
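One common way a scheduler can split a shared sector is max-min fairness: light users get everything they ask for, and whatever is left is divided evenly among the heavy users. A minimal Python sketch, with made-up capacity and demand numbers purely for illustration:

```python
def max_min_fair(capacity_mbps, demands_mbps):
    """Max-min fair split of a shared sector: satisfy the smallest
    demands first, then divide what remains among the heavier users."""
    alloc = [0.0] * len(demands_mbps)
    remaining_capacity = capacity_mbps
    users_left = len(demands_mbps)
    # Serve users in order of increasing demand.
    for i in sorted(range(len(demands_mbps)), key=lambda i: demands_mbps[i]):
        fair_share = remaining_capacity / users_left
        alloc[i] = min(demands_mbps[i], fair_share)
        remaining_capacity -= alloc[i]
        users_left -= 1
    return alloc

# A 10 Mbps sector with three email/web users (0.1 Mbps each) and two
# video users who each want 8 Mbps: the light users are unaffected, but
# each video user gets roughly 4.85 Mbps instead of the 8 Mbps requested.
max_min_fair(10.0, [0.1, 0.1, 0.1, 8.0, 8.0])
```

This matches the observation above: a sector full of email and web traffic degrades gracefully, but a few video users quickly eat everything that is left.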

Managing wireless bandwidth is part science and part art. A network operator’s goal is to provide as uniform a user experience as possible to all of its customers and to serve as many customers as possible most of the time. We have all heard the stories about AT&T’s network and its reportedly poor 3G service in major metro areas because of the high demand placed on the network by iPhone customers who have become heavy data users. There are only a few ways to increase the capacity of a network: you can add more spectrum at each site, if any spectrum is available; you can build more sites, which typically takes several years of planning, permitting, and building, so it is not an overnight fix; you can deploy femtocells, which are similar to Wi-Fi access points but work with the wide-area network, to improve in-building coverage; and you can off-load users from the wide-area network onto Wi-Fi, as T-Mobile does and as AT&T is doing in Times Square in New York.

But even with all of these options, there will still be data abusers who are only concerned about their own ability to use as much data as they want and don’t care whether they have an impact on other users of the network. This attitude is interesting because when these same users operate in a wired environment and pay for unlimited data access, they are still limited by the speed tier they pay for. Typical DSL and cable providers charge different amounts for different speed levels, which equates to the amount of data available to a user in a given period of time. Cable modem service is also a shared service, like wireless: the more customers using cable broadband in a given area, the slower the data rate for all of them, but not to the extent that access is totally denied to anyone. With wireless, it is possible that the last ones into a cell sector won’t have service at all.

The final method for managing bandwidth and capacity usage is what we are seeing from network operators today: they are doing away with all-you-can-eat data plans in favor of plans based on usage. One network, Clear/Sprint, recently stated that it will not move away from its all-you-can-eat pricing at the moment, which is understandable since it has more spectrum than the other operators and does not begin to have their customer base or number of devices in service. In fact, Sprint just launched its first 3G/4G smartphone within the past few weeks.

Pricing is moving toward usage-based models, and typical plans are for 250 MB or 5 GB of data per month. They are still based on a flat rate up to the maximum allowed, after which an overage price kicks in. When LTE is launched, there will probably be other pricing models as well. LTE enables operators to use what is called quality of service (QoS), which means data speeds can be varied based on usage. For example, we might see pricing based on 2 Mbps service, 5 Mbps service, or, for those who want the best, say, 10 Mbps service. The rate you pay would depend on the data speed you choose, much more like the wired and cable models we are accustomed to. However, I also expect to see monthly caps on data amounts and perhaps another pricing step invoked for large file downloads that might work something like this: if you want to download 20 MB of data at 2 p.m., you might have to pay an extra $5 for that download, but if you defer the download until 2 a.m., when network loading is much lighter, the download would be free.
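The cap-plus-overage math described above can be sketched in a few lines of Python. The prices here are hypothetical, chosen only to illustrate the shape of such a plan, not any carrier's actual rates:

```python
def monthly_bill(used_gb, base_price=60.0, cap_gb=5.0, overage_per_gb=10.0):
    """Flat rate up to the monthly cap, then a per-GB overage charge.
    All prices are hypothetical and purely illustrative."""
    overage_gb = max(0.0, used_gb - cap_gb)
    return base_price + overage_gb * overage_per_gb

monthly_bill(3.0)   # under the cap: just the flat rate, 60.0
monthly_bill(7.0)   # 2 GB over: 60 + 2 * 10 = 80.0
```

A time-of-day surcharge like the 2 p.m. versus 2 a.m. example would simply add another term that depends on when the download happens.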

The idea of Net Neutrality flies in the face of network management by the operators. In the strict sense of the term, Net Neutrality would require the operators to provide the same level of service to everyone, which would allow data hogs to continue to limit access to broadband data services for the rest of us. At the end of the day, it is important that broadband users who are not conversant with the differences between wired and wireless access come to realize that wireless, regardless of the technology, is shared bandwidth and, more importantly, that it is shared within cell site sectors.

All of us want the fastest possible access to broadband services and we all want to be able to use these services no matter where we are. In order to do that, we must understand that network operators have to be able to manage their networks efficiently and with an even hand. New build-outs and technology help, but they need other tools to make sure there is bandwidth available for as many customers as possible. Regulating usage through pricing is the only other tool network operators have to help them continue to provide us with the services we have come to expect.

Andrew M. Seybold

3 Comments on “Data Congestion and New Pricing Models”

  1. In the data- and algorithm-driven Carrier world, one could also maintain profiles for each user, and cap usage by user as well as at peak hours. Folks would better understand “2.5G only, at peak hours” than “even though you pay a lot you’ll only get crawl sometimes”. This could also be carefully ratcheted in real time, to preserve access for 20 kbps voice traffic by incrementally reducing the video users during peak access times. Another interesting variable is the mobility of the big data users. I’d venture a guess that most of the big pipe smokers are stationary, which presents other opportunities, particularly in conjunction with smart antennas, for optimization.

    I love it when capitalism gets around to the cost instead of just the price.

  2. Peter–you are correct–all of what you are suggesting could be done, but I am not sure that network operators want to be in the business of tracking individual customers’ data usage–the billing method appears to be a cleaner way to me. And I agree that most of the very heavy users do so when they are stationary–making it even harder on wireless networks since if a number of data hogs are in the same cell sector and don’t move around, then that cell sector is vulnerable to blocking and poor service.

  3. DanC says:

    Maybe I’m missing something here. I thought Net Neutrality was about making sure the carriers don’t discriminate against content providers (sort of like Equal Access). I would think “offering the same service to all” could be claimed with a price plan that distinguishes based on traffic. No one is being denied service, they’re just being required to pay for it. I would think the FCC would recognize and embrace these kinds of pricing plans as a market-based solution to an allocation problem.
