GoogleNet
Thursday, October 15, 2009
As I began thinking about the issues with network capacity, I realized how detrimental the FCC's Net Neutrality rules would be to companies such as Google. But interestingly enough, Google is one of the companies pushing for Net Neutrality. Why is this? Let me explain.
After listening to FCC Chairman Julius Genachowski's speech at the Brookings Institution a couple of weeks ago, which described how the FCC will be publishing a Notice of Proposed Rulemaking that will embrace its six "key" net neutrality concerns, I began to get nervous about the outcome of this rulemaking for wired Internet providers, wireless operators, and the public safety community. In his second speech, delivered at the CTIA show in San Diego last week, he did say that the FCC understands that the wireless Internet is different from the wired Internet and, therefore, the rules should reflect these differences. While this may have come as good news to many within the wireless community, I have to admit that I am not at all sure how the ruling will turn out. The comments are yet to be received and there is sure to be a lot of lobbying before the Commissioners, who are, after all, attorneys and not technology experts, come to their conclusions.
My concerns are that those in the position to make the rules that will affect "The Internet"--wired and wireless--do not understand the differences between wired and wireless access. Just as importantly, there are also implications for wired Internet Service Providers (ISPs) that may not be readily apparent and even more implications for wireless network operators.
What Is Net Neutrality?
This question will be answered differently by different people, but my understanding is that Net Neutrality is designed to provide a level playing field for all who want to use the Internet. My neighbor and I have the same access "rights" as everyone else, and the rulemaking is intended to protect our rights. However, in concept, Net Neutrality seems to imply unlimited and unfettered access to the Internet regardless of the means of access.
Before we go any further, we really need to look at the three pieces of the Internet. The first piece is the access, or the on and off ramps provided by ISPs: a telco offering DSL or fiber access, a Wi-Fi network, or a wide-area wireless network. The next piece is the back-end system made up of thousands of miles of wire, fiber, and microwave that serve all of these on and off ramps. The final piece is the connection between the Internet and the website. Most of the larger websites have multiple access routes to and from the Internet with multiple servers, sometimes spread out around the world, that are scaled to handle the traffic directed to their site.
When I look at any wired or wireless network, I look to see where the choke points are. The connection from the user to the Internet is provided by an ISP that controls the capacity and speed provided to the customer and is, therefore, the first choke point in the end-to-end communications circuit that provides access to information on the Internet.
For example, an 802.11g router in a coffee shop theoretically provides data speeds of up to 54 Mbps and sufficient capacity to handle multiple customers at the same time. However, the connection from the router to the Internet is probably a T-1 line with a maximum data speed of 1.544 Mbps. In this case, the T-1 line is the choke point. Regardless of how much speed is available via the Wi-Fi router and regardless of how many people are using it to access the Internet, the maximum available throughput is only 1.544 Mbps.
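The coffee-shop example boils down to a simple rule: end-to-end throughput can never exceed the slowest link in the path. A minimal sketch of that rule, using the illustrative numbers above (the patron count is an assumption for the example):

```python
def end_to_end_throughput(link_speeds_mbps):
    """The effective throughput of a path is the speed of its slowest link."""
    return min(link_speeds_mbps)

# Coffee-shop path: 54 Mbps Wi-Fi air link feeding a 1.544 Mbps T-1 backhaul.
path = [54.0, 1.544]
shared = end_to_end_throughput(path)
print(shared)            # 1.544 -- the T-1 is the choke point

# With several patrons active at once, each gets at best an equal share:
patrons = 4
print(shared / patrons)  # 0.386 Mbps apiece
```

The same calculation applies to any of the choke points discussed here; only the link speeds change.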
Each of the three pieces of the Internet puzzle can be viewed as a choke point. We have all heard of websites becoming overloaded with traffic or suffering from a denial of service attack. In this case, the website itself is the choke point. And we have all experienced varying speeds in our own access to the Internet. Using cable as an example, customers share the cable's bandwidth with all of their neighbors. The number of people using the cable at the same time to access the Internet, and what they are using it for (email, chat, searches, audio and video downloads and uploads), affect the capacity and speed available to the others who are sharing the same bandwidth. The cable can become a choke point when there is a lot of activity. If you have a DSL line and you pay for 3 Mbps down and 1 Mbps up, you might think you will always be able to connect at a data speed of 3 Mbps. However, all of the DSL circuits for a given area are aggregated and connected to the Internet through the ISP's system. Depending on usage at any particular time, the DSL connection or the ISP can be a choke point.
These speed and capacity issues can be managed to some degree. The first type of management is through data speed and capacity price plans. My DSL service provider offers three different speeds at three different prices. With the lowest tier, my on and off ramp will be slower than if I opt for the higher, more expensive tier. But my ISP can only manage its own on and off ramps; it cannot manage the data once it leaves its system and is routed onto the main Internet. You might think of this as an on ramp to a freeway that has metering lights to regulate the flow of traffic and give everyone a chance to merge onto the main highway. But if the road is already congested and the traffic is not moving ahead of you, even when your turn comes to merge, you cannot do so quickly and easily, so you have to join the already crowded lanes and move forward more slowly.
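The metering-light analogy corresponds roughly to rate limiting at the on ramp, often done with a token bucket: the ISP admits traffic at a steady rate while allowing short bursts. A hypothetical sketch (the rate and burst values are illustrative, not any real provider's settings):

```python
class TokenBucket:
    """Simple token-bucket rate limiter: the ISP's 'metering light'."""

    def __init__(self, rate_mbps, burst_megabits):
        self.rate = rate_mbps        # tokens (megabits) refilled per second
        self.burst = burst_megabits  # maximum bucket depth in megabits
        self.tokens = burst_megabits
        self.last = 0.0

    def allow(self, now, megabits):
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if megabits <= self.tokens:
            self.tokens -= megabits
            return True   # traffic merges onto the highway
        return False      # traffic waits at the metering light

bucket = TokenBucket(rate_mbps=3.0, burst_megabits=8.0)
print(bucket.allow(0.0, 8.0))  # True: the burst allowance covers it
print(bucket.allow(0.1, 8.0))  # False: only 0.3 megabits refilled so far
```

As the analogy suggests, this only regulates the merge; it does nothing about congestion on the highway itself.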
Many of my concerns have to do with managing the information flow onto and off of the Internet, which, using pricing, data limits, or a combination of the two, can be managed to the degree that all customers are treated the same. For example, if a cable company offers data rates of 3 Mbps and during peak times this rate drops to 1 Mbps or lower for one subscriber in a given area, it should drop for all of the other subscribers as well; this, to me, is managed Net Neutrality. The first one on a system should not be able to hog all of the bandwidth, leaving little or none for the other customers. The way I interpret the FCC Chairman's Net Neutrality statements, it might become more difficult for any ISP to manage its customers' traffic and make sure there is fair and equitable distribution. Several cable companies have already been taken to task because they have used speed and capacity as ways to slow down data hogs on their networks so the rest of their customers can receive a decent level of service.
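The idea that the first one on should not hog all the bandwidth is, in networking terms, max-min fairness: no one gets more than they ask for, and capacity freed up by light users is redistributed among the heavy ones. A sketch of that allocation, with illustrative numbers (the user names and demands are invented for the example):

```python
def max_min_share(capacity_mbps, demands_mbps):
    """Split capacity so light users get what they ask for and
    the remainder is shared evenly among heavier users (max-min fairness)."""
    alloc = {user: 0.0 for user in demands_mbps}
    remaining = dict(demands_mbps)
    cap = capacity_mbps
    while remaining and cap > 1e-9:
        share = cap / len(remaining)
        satisfied = [u for u, d in remaining.items() if d <= share]
        if not satisfied:
            # Everyone left wants more than an equal share: split evenly.
            for u in remaining:
                alloc[u] += share
            break
        for u in satisfied:
            alloc[u] += remaining[u]
            cap -= remaining.pop(u)
    return alloc

# A 3 Mbps node shared by one heavy user and two light users:
print(max_min_share(3.0, {"hog": 10.0, "light1": 0.5, "light2": 0.5}))
# {'hog': 2.0, 'light1': 0.5, 'light2': 0.5}
```

The light users are unaffected by the hog, and the hog still gets everything left over, which is roughly the behavior the cable companies were attempting when they were taken to task.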
Predictions about running out of Internet capacity are real and should be a concern to all. Google stated last year that 40% of all Internet traffic is now video, the advent of the iPhone has increased the amount of Internet traffic that is wireless, and the curve chart that shows capacity versus traffic is indicating that capacity will fall behind demand in 2012. Many people tell me this won't happen because those who build the digital roads that are the Internet (not the on and off ramps) will continue to expand the capacity. I am not at all sure this is a correct assumption since, for the most part, they have been relegated to dumb pipes or bit haulers and there is very little incentive for them to build more capacity when the return on investment is so low. The Internet itself is not managed per se. It is a complex network of networks, routing packets of data in many different directions around the world. If demand exceeds capacity, there is nothing anyone will be able to do when it comes to managing who retains full access and whose access becomes limited for periods of time. Managing access is one thing, but if you cannot manage the main arteries that make up the Internet, there will be issues for all of us. If the FCC limits the degree to which ISPs can regulate their own access to the Internet, the situation could get worse more quickly.
There are a number of differences between wired and wireless access to the Internet. DSL, cable, modems, and other wired access devices are used only to move data onto and off of the Internet. Wireless networks are used to carry voice, SMS, MMS, and other forms of information in addition to moving data onto and off of the Internet. Managing wireless bandwidth is not only about managing access to the Internet, it is also about managing bandwidth for all of the other services and making sure that all wireless customers in a given area have as much access as possible. You may remember that when we were basically a wired voice telephone world, circuit overloads were commonplace on Mother's Day and other holidays. Today there are still circuit overloads, but they are occurring at cell sites or within cell sectors and they affect most forms of wireless communications.
For the most part, our current digital voice technologies use different portions of spectrum than broadband data, but as we move into a more 3G and 4G environment, at some point, voice will become data bytes and will travel in the same spectrum as data. This, too, will increase the traffic on the broadband portion of our networks, adding a new dimension to the issue of Net Neutrality. Voice packets, unlike data packets, must arrive within milliseconds of each other at the recipient's end so the conversation can be put together and sound like voice, so voice services will have to have priority over data services. Yes, I know that voice packets don't take up much bandwidth. However, if you have a large number of voice users within a single cell sector, you may, in fact, have to throttle back data capacity and manage not only the data portion but all of the various methods of communications that share the same bandwidth. This will make network management more complex and more necessary.
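Giving voice priority over data is, at its simplest, strict-priority scheduling: whenever the link is free, any queued voice packet transmits before any queued data packet. A minimal sketch (the packet labels are hypothetical):

```python
import heapq

VOICE, DATA = 0, 1  # lower number = higher priority

class PriorityScheduler:
    """Strict-priority queue: voice always dequeues before data."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # arrival counter preserves FIFO order within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = PriorityScheduler()
sched.enqueue(DATA, "web-1")
sched.enqueue(VOICE, "rtp-1")
sched.enqueue(DATA, "web-2")
sched.enqueue(VOICE, "rtp-2")
print([sched.dequeue() for _ in range(4)])
# ['rtp-1', 'rtp-2', 'web-1', 'web-2'] -- voice drains first, then data
```

This is exactly the kind of traffic discrimination that an absolute reading of Net Neutrality would appear to prohibit, even though it is what keeps a call sounding like a call.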
Some seem to believe that we will have enough spectrum, or that we already have enough spectrum, so meeting the Net Neutrality rules won't be an issue. But just like wired access, wireless access must be managed, and it must be managed better. There are more choke points for wireless access to the Internet: the data capacity of a given cell sector, the backhaul from that cell site to the wireless operator's network, and the connection between the wireless network and the Internet. There is a lot that must be managed, and wireless network operators must be allowed to use different types of tools than their wired counterparts. If not, the Internet as we know it today will not survive into the future.
This brings me to "GoogleNet," the title of this Commentary. I have to admit that my mind sometimes takes a leap to another level, and in this case it certainly did. Google owns a LOT of dark fiber: fiber that is in place but unused, lying in the ground just waiting to be lit up. This fiber is not merely more on and off ramps, it is empty highways. A decade ago, there was an overabundance of fiber and Google started its buying spree. Today it does not take much to imagine a parallel network that could be assembled from this fiber and put into operation fairly quickly. Imagine a new Internet that is basically owned by Google and where Google makes the rules. We could choose to stay on the existing, overcrowded Internet, with its Net Neutrality rules hindering network management, or to pay a few extra dollars a month or put up with some advertisements to access GoogleNet and the world it chooses to make available to us.
If we no longer permit ISPs and wireless operators to manage the traffic on their systems, and allow anyone and everyone to do as they please, perhaps this scenario is not too farfetched. Perhaps this is even the real end game in the upcoming Net Neutrality fight.
In closing, I'd like to point out that if Net Neutrality, as defined by the FCC Chairman, requires networks to be completely open and accessible to everyone, a Police Department using broadband services on a commercial wireless network would not be entitled to priority access even during a time of a major incident. No wonder the Public Safety Community wants its own network!
Andrew M. Seybold