Spectrum Series Working Paper
By Kevin Werbach*
Almost everything you think you know about spectrum is wrong.
For nearly a century, radio frequency spectrum has been treated as a scarce resource that the government must parcel out through exclusive licenses. Spectrum licensing brought us radio, television, cellular telephones and vital public safety services. Along the way, the licensing model hardened into an unquestioned paradigm, shaping the way we think about wireless communications. We simply can’t imagine doing anything else.
The assumptions underlying the dominant paradigm for spectrum management no longer hold. Today’s digital technologies are smart enough to distinguish between signals, allowing users to share the airwaves without exclusive licensing. Instead of treating spectrum as a scarce physical resource, we could make it available to all as a commons, an approach known as “open spectrum.” Open spectrum would allow for more efficient and creative use of the precious resource of the airwaves. It could enable innovative services, reduce prices, foster competition, create new business opportunities, and bring our communications policies in line with our democratic ideals.
Despite its radical implications, open spectrum can coexist with traditional exclusive licensing. There are two mechanisms to facilitate spectrum sharing: unlicensed parks and underlay. The first involves familiar allocated frequency bands, but with no user given the exclusive right to transmit. A very limited set of frequencies has already been designated for unlicensed consumer devices, such as cordless phones and wireless local area networks, but more is needed. The second approach allows unlicensed users to coexist in licensed bands, by making their signals invisible and non-intrusive to other users. Both open spectrum approaches have great value, with the specifics depending on how technology and markets develop. Both should be encouraged. The risks are minimal, while the potential benefits are extraordinary.
If the US Government wants to put in place the most pro-innovation, pro-investment, deregulatory, and democratic spectrum policy regime, it should do everything possible to promote open spectrum. Specifically, Congress and the FCC should take the following four steps:
• Develop rules to foster more effective cooperation among unlicensed users
• Set aside more spectrum for unlicensed uses
• Eliminate restrictions on non-intrusive underlay techniques across licensed bands
• Promote experimentation and research in unlicensed wireless technology
We can glimpse the possibilities of open spectrum in existing unlicensed bands. While most frequencies are licensed exclusively, a handful are open for anyone to transmit within technical parameters such as power limits.  The unlicensed bands are limited, congested, and devoid of any interference protection. Indeed, the most widely used, at 2.4 GHz, is so filled with devices such as microwave ovens, cordless telephones, and baby monitors that it is known as the “junk band.” Yet this is the site of the most explosive phenomenon in the wireless world: WiFi.
WiFi (IEEE 802.11) is a protocol for unlicensed wireless local area networks, allowing high-speed data connections anywhere within a few hundred feet of an access point. WiFi deployments are growing at fantastic rates, doubling in the last year. A market that did not exist three years ago now generates well over a billion dollars annually, continuing to expand despite a severe technology recession. Thousands of public access points are already in operation.
Yet WiFi shows only a fraction of open spectrum’s potential.
Open spectrum is neither science fiction nor wishful thinking about human nature. Its ideas are rooted in well-established engineering techniques and mainstream economics, and its viability has been proven in mass-market implementations involving millions of users. It is time to question our long-held assumptions, and explore new policy approaches that could generate tremendous benefits for the American people.
For all its benefits, open spectrum represents a challenge to traditional ways of thinking, which is bound to provoke opposition. Incumbents comfortable with the status quo will seek to preserve the inflated values of their government-issued licenses. Licensed users express concerns that open spectrum techniques will generate interference that harms their businesses or important public services. Some economists who advocate turning spectrum into private property have been unwilling to see that open spectrum is even more market-based than the approach they favor.
These and other groups argue for actions that would hamstring open spectrum. Their concerns deserve a hearing, but they are ultimately unfounded. Spectrum policy is full of mistaken assumptions that have guided decision-making for too long.
Open spectrum explodes the following myths:
1) Wireless spectrum is scarce
Spectrum appears scarce only because our current regulatory regime puts draconian limitations on its use. If multiple users were allowed to dynamically share frequency bands, and to employ cooperative techniques to improve efficiency, spectrum could be as abundant as the air in the sky or the water in the ocean.
2) Auctions are the best mechanism to put spectrum into the marketplace
It has become conventional wisdom that auctions are the only efficient mechanisms for spectrum assignment, because they leverage market forces. Auctions have many benefits, but they force service providers to pay high up-front costs and grant what amounts to a monopoly over certain frequencies. Allowing companies to compete through innovation while sharing the spectrum as a common resource is in many cases a superior approach.
3) Massive capital investment is needed to exploit the spectrum
Licensed service providers such as cellular telephone operators and television broadcasters must build out expensive distribution networks before they can deliver services to customers. Often, they must also pay to obtain the spectrum itself in auctions. These huge capital expenditures must be recovered through service fees. In an unlicensed environment, by contrast, access to the airwaves is free, and the most significant expense—the intelligent radios—is paid directly by end-users.
4) The future of wireless lies in third-generation (3G) systems
3G represents a useful advance in cellular technology, but it is hardly a panacea. Spectrum and build-out costs for 3G will be enormous. Many of the wireless data services identified with 3G could be more efficiently delivered through short-range and meshed unlicensed technologies, with wide-area 3G service reserved for situations where those alternatives aren’t available.
5) Wireless technologies are not viable solutions to the last-mile bottleneck
The last mile does pose special challenges for wireless systems. However, these challenges may be overcome through unlicensed systems that use long-range communications, wideband underlay or meshed architectures. With cable and telephone wires into the home controlled by dominant incumbents, and enormous capital required to extend fiber to every home, open spectrum represents the best hope for a facilities-based broadband alternative.
Most wireless frequency bands are licensed, meaning that the government gives an entity such as a radio broadcaster or the military the exclusive right to transmit on those frequencies.  That license comes with restrictions on geography, power output, technical characteristics and/or service offerings. Transmission in the band by any other party is prohibited as “harmful interference.” This regime is considered necessary because the alternative would be a “tragedy of the commons”: a chaotic cacophony in which no one could communicate reliably.
The tragedy of the commons idea resonates with our intuitions. After all, too many sheep grazing in the same meadow will use up all the grass. Too many cars on a highway at the same time will cause traffic jams and collisions. Why should spectrum be any different?
Spectrum is different. It is different because it is inherently non-physical, and because technologies developed in recent years make it practical to avoid the tragedy of the commons. What these technologies have in common is that they allow more than one user to occupy the same range of frequencies at the same time, obviating the need for exclusive licensing. 
“Open spectrum” is an umbrella term for such approaches.  As used here, open spectrum includes established unlicensed wireless technologies such as WiFi. It would be a mistake, however, to conclude that the existence of WiFi proves no further action is needed to facilitate open spectrum. WiFi was designed for short-range data communication and the limitations of current spectrum rules. It therefore still requires wired “backhaul” connections to the public Internet. Moreover, current unlicensed bands and technical standards are not optimized for efficient spectrum sharing. Enlightened policies will allow the emergence of open spectrum systems that are self-contained, and can handle a range of services and environments. A true open spectrum environment would allow the same degree of openness, flexibility, and scalability for communication that the Internet itself promotes for applications and content.
There are two ways to implement open spectrum technologies. The first is to designate specific bands for unlicensed devices, with general rules to foster coexistence among users. This is the approach that allowed WiFi to flourish in the 2.4 GHz and 5 GHz bands. The second mechanism is to “underlay” unlicensed technologies in existing bands, without disturbing licensed uses. This approach, epitomized by the ultra-wideband technology the FCC authorized earlier this year, effectively manufactures new capacity by increasing spectrum efficiency. Underlay can be achieved either by using an extremely weak signal or by employing agile radios able to identify and move around competing transmissions. 
Both unlicensed bands and underlay have their place. Eventually, underlay approaches will be more significant, because they can work across the entire spectrum rather than requiring the creation of designated “parks.” Someday, if underlay is successful enough, we may not need licensed bands at all, but that day is well in the future. The important point today is to allow both unlicensed bands and underlay to develop based on technological capabilities and market demand. That involves four steps: removing limitations in existing rules, creating additional unlicensed bands, establishing rules to facilitate additional forms of underlay, and funding research into next-generation technologies.
Open spectrum is actually a simple concept. It requires no flights of fancy about the laws of physics. It sounds strange because, as the examples above suggest, we are accustomed to thinking of the radio spectrum as a scarce physical entity, like land. Charts showing the partitioning of the spectrum and auctions for geographically defined rights to slices of the airwaves reinforce the physicality of spectrum. We can’t see or touch the radio spectrum, so we envision it as something solid and familiar.
This is a mirage. There is no “aether” over which wireless signals travel; there are only the signals themselves, transmitters and receivers. What we call “spectrum” is simply a convenient way to describe the electromagnetic carrying capacity for the signals. Moreover, the spectrum isn’t nearly as congested as we imagine. Run a spectrum analyzer across the range of usable radio frequencies, and the vast majority of what you’ll hear is silence. Even in bands licensed for popular applications such as cellular telephones and broadcast television, most frequencies are unused most of the time in any given location. This is the case because historical spectrum allocations assume dumb devices that have a hard time distinguishing among signals, thus requiring wide bands with large separation.
With today’s technology, the better metaphor for wireless is not land, but oceans.  Boats traverse the seas. There is a risk those boats will collide with one another. The oceans, however, are huge relative to the volume of shipping traffic, and the pilots of each boat will maneuver to avoid any impending collision (i.e. ships “look and listen” before setting course). To ensure safe navigation, we have general rules defining shipping lanes, and a combination of laws and etiquette defining how boats should behave relative to one another. A regulatory regime that parceled out the oceans to different companies, so as to facilitate safe shipping, would be overkill.  It would sharply reduce the number of boats that could use the seas simultaneously, raising prices in the process.
The same is true with spectrum. Allowing users to share spectrum, subject to rules that ensure they do so efficiently, would be far more effective than turning more spectrum over to private owners.
If the idea that many users can coexist in the same spectrum sounds counter-intuitive, another analogy should help. Wireless communication in the radio-frequency spectrum is fundamentally similar to wireless communication in the acoustic spectrum, otherwise known as speech.
Imagine a group of people in a room. Experience tells us that everyone can carry on a conversation with his or her neighbor simultaneously, even with music playing in the background, so long as people speak at a normal volume. If someone starts yelling, he or she will drown out other speakers, who will be forced to speak louder themselves in order to be heard. Eventually, some portion of the room simply won’t be able to communicate over the background noise, and each additional person who starts yelling will reduce the total number of conversations.
We could call that a “tragedy of the commons.” We could enact laws giving only some individuals the right to speak during defined times, ensuring they can shout as loud as they want without interference. But that would clearly be an unnecessary solution with significant negative consequences.
Think back to the situation where everyone is speaking in a normal tone of voice. What allows so many conversations to occur simultaneously is that the people talking are modulating their communications in an appropriate way, and the people listening are able to distinguish one conversation from another. It’s the intelligence at the ends of the conversation, not the integrity of the signal, that allows for more efficient communication. The same is true in the radio frequencies. Intelligent devices can distinguish among more simultaneous transmissions than simple ones. The more sophisticated and agile the system, the more the overall carrying capacity of the spectrum increases.
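The way intelligent receivers pull one transmission out of many can be sketched concretely. Below is a toy Python illustration, loosely modeled on the code-division techniques mentioned later in this paper: two “speakers” transmit at once in the same band, each spreading its bit with a different code, and the channel simply adds the signals together. A receiver that knows a speaker’s code can recover that speaker’s message by correlation. The specific codes and signal representation are illustrative assumptions, not a description of any particular deployed system.

```python
# Two transmitters share the same channel simultaneously. Each spreads its
# bit (+1 or -1) with its own code; the codes are orthogonal (dot product 0).
CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

def spread(bit, code):
    """Represent one bit as a sequence of chips using the speaker's code."""
    return [bit * c for c in code]

def channel(*signals):
    """Radio waves pass through each other: the channel just adds them."""
    return [sum(chips) for chips in zip(*signals)]

def despread(received, code):
    """Correlate the mixed signal with a known code to recover that bit."""
    score = sum(r * c for r, c in zip(received, code))
    return 1 if score > 0 else -1

# Both speakers talk at once; each listener still hears its own speaker.
mixed = channel(spread(1, CODE_A), spread(-1, CODE_B))
assert despread(mixed, CODE_A) == 1    # recovers speaker A's bit
assert despread(mixed, CODE_B) == -1   # recovers speaker B's bit
```

The point of the sketch is that nothing in the shared medium keeps the two transmissions apart; the separation happens entirely in the intelligence at the edges.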
One might protest that the speaking analogy breaks down when people want to communicate across the room. Here too, there is no reason many conversations can’t occur simultaneously. By listening carefully, people can pick out individual speakers. Moreover, imagine that the people in the room could pass messages back and forth on pieces of paper. By cooperating to relay the communications, they would significantly increase the number of conversations, especially over long distances. Relaying and other cooperative techniques can serve the same function in the wireless world. 
As this analogy points out, the term “interference” is problematic. Radio waves at the relevant frequencies do not bounce off one another. They pass through each other cleanly, like the intersecting ripples from two stones thrown into a pond.  The overlapping signals simply make it harder for a receiver to distinguish one from another.
“Interference” is thus highly contingent on real-world factors. Again, this shouldn’t be surprising. Put two television sets next to one another, and you may get a sharp picture on one but a fuzzy image on the other. The difference is that one set has a better tuner. Do we register “interference” when it shows up on one set, or both? Should the most poorly designed set define the requirements for everyone else? What if there is no set in the room but a hypothetical set with certain characteristics might experience a degraded picture there? Under current spectrum policy, such hypothetical “interference” prevents frequency sharing.
Whatever rules we set will influence behavior. If “interference” is defined with reference to a dumb receiver, vendors will try to save money and make receivers as dumb as possible. If, on the other hand, manufacturers have no guarantee of spectrum exclusivity, they will have the opposite incentive. They will build devices robust enough to deal with a variety of situations, bounded by the overall technical rules for use of the spectrum band. Those building transmitters or delivering services over the spectrum will face similar incentives depending on the way the rules are defined. The point is not that the most unrestricted environment is always the best. It’s that our current system, without justification, assumes an exclusive licensing regime is the only viable answer.
There are three primary techniques for magnifying the efficiency of wireless devices in a shared environment: spread spectrum, cooperative networking and software-defined radio. These can all be used in licensed bands, though they reach their full potential in an unlicensed environment.
In a spread-spectrum system, wireless communications are digitized and chopped up into pieces, which are spread across a range of frequencies.  If the receiver knows where to look, it can piece the message back together on the other end. Spread spectrum means that an individual frequency only carries a small part of each communication, so it’s only occupied for a tiny slice of time. In the unlikely event that another message is occupying that slot, only that small portion of the signal must be re-sent. Spread spectrum was invented in the 1940s, and has been used extensively for military and other applications that require robustness and resistance to jamming or eavesdropping (because only the receiver knows how the signal is spread across the range of frequencies). Many mobile phone services use spread-spectrum today to improve efficiency within licensed bands, but the technique is even more powerful when used for underlay or in unlicensed bands.
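A minimal Python sketch of the frequency-hopping variant of spread spectrum may help. The channel count, seed, and one-character-per-hop framing are illustrative assumptions for clarity, not parameters of any real system; the essential idea is that sender and receiver share a pseudorandom hop pattern, so each fragment of the message occupies any given frequency only briefly, and only a party that knows the pattern can follow the signal.

```python
import random

CHANNELS = 79  # illustrative: a band divided into 79 narrow hop channels

def hop_sequence(seed, length):
    """Pseudorandom hop pattern shared by sender and receiver."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(length)]

def transmit(message, seed):
    """Spread the message: one fragment per hop, each on a different channel."""
    hops = hop_sequence(seed, len(message))
    return list(zip(hops, message))  # (channel, fragment) bursts

def receive(bursts, seed):
    """A receiver that knows the seed reassembles the fragments in order;
    an eavesdropper without the seed sees only scattered bursts."""
    hops = hop_sequence(seed, len(bursts))
    return "".join(frag for (chan, frag), expected in zip(bursts, hops)
                   if chan == expected)

bursts = transmit("open spectrum", seed=42)
assert receive(bursts, seed=42) == "open spectrum"
```

If another transmission happens to collide on one hop, only that one fragment needs retransmission, which is why the technique degrades gracefully under sharing.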
The capacity of a wireless system is influenced by its architecture. A broadcast television network allows for many receivers but only one transmitter. By contrast, cellular telephone systems use a network of cells and towers to allow people to both call and receive calls. We may assume that the cellular hub-and-spoke architecture is the best we can do, but it isn’t. If the end-user devices cooperate with one another, such as by relaying signals of other users, the system can be more efficient.  With cooperation, adding users increases supply as well as demand. One form of cooperative network is the mesh architecture, where every transmitter also serves as a relay. A user need only be able to communicate with another user, rather than with a central tower, to send a signal anywhere on the network. Some cooperative networks can be deployed ad hoc, meaning that new nodes can be added anywhere and they will automatically become part of the network.
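The relay idea can be illustrated with a short Python sketch. The coordinates, range limit, and breadth-first routing are simplifying assumptions (real mesh protocols are far more sophisticated); the sketch simply shows how two nodes too far apart to communicate directly become reachable the moment a third user joins between them, so that adding users adds capacity.

```python
from collections import deque

RANGE = 1.0  # maximum direct radio range, in illustrative units

def neighbors(nodes, a):
    """Nodes a given node can reach directly (within radio range)."""
    ax, ay = nodes[a]
    return [b for b, (bx, by) in nodes.items()
            if b != a and (ax - bx) ** 2 + (ay - by) ** 2 <= RANGE ** 2]

def route(nodes, src, dst):
    """Breadth-first search for a chain of relays from src to dst."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(nodes, path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no relay chain exists

# A and C are too far apart to talk directly...
nodes = {"A": (0, 0), "C": (1.8, 0)}
assert route(nodes, "A", "C") is None
# ...but when user B joins in between, B relays for both of them.
nodes["B"] = (0.9, 0)
assert route(nodes, "A", "C") == ["A", "B", "C"]
```

This is the ad hoc property in miniature: node B was added anywhere in range and automatically became part of the network.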
Every radio can be tuned to pick up a certain range of frequencies, and it takes some amount of time to change the tuning. Traditionally, these characteristics are fixed in the radio hardware. Thus, for example, the same radio can’t pick up both FM radio and mobile phone transmissions, or both 2.4 GHz and 5 GHz wireless LAN signals. Software-defined radios, by contrast, can tune dynamically over a wider range of frequencies. A software-defined radio can receive or transmit different kinds of wireless transmissions automatically. If it is a so-called “agile radio,” it could adapt to the local environment and seek out open frequencies to communicate. Even in licensed bands, most of the spectrum is empty most of the time. Agile radios could take advantage of that empty space, moving out of the way when another transmission appears.
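The “listen before talking” behavior of an agile radio can be sketched in a few lines of Python. The energy readings and threshold below are simulated, illustrative values, not measurements from any real device; the logic, however, is the core of the approach: sense the energy on each candidate channel, transmit on the quietest one, and vacate the moment a licensed transmission appears there.

```python
THRESHOLD = 0.1  # illustrative: energy level below which a channel counts as open

def pick_channel(readings, threshold=THRESHOLD):
    """Listen before talking: sense energy on each channel and choose the
    quietest one that falls below the threshold. Returns None if every
    channel is occupied."""
    quiet = [ch for ch, level in readings.items() if level < threshold]
    return min(quiet, key=lambda ch: readings[ch]) if quiet else None

# Simulated energy levels: channels 1 and 3 carry licensed transmissions.
env = {1: 0.9, 2: 0.02, 3: 0.7, 4: 0.05}
assert pick_channel(env) == 2

# When a transmission appears on channel 2, the agile radio moves out of
# the way and re-selects from the remaining open space.
env[2] = 0.8
assert pick_channel(env) == 4
```

In effect, the radio treats the licensed bands’ unused time and space as capacity, while yielding instantly to the license holder.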
All of these areas are the subject of extensive academic research and corporate R&D. Nonetheless, the licensed spectrum model has been the dominant paradigm for so long that there is a surprising amount we simply don’t know about how radios work. For example, we don’t know as a theoretical matter what the maximum capacity is of a geographically defined system filled with randomly distributed radios.
We do know that many of our intuitions are wrong. Research has shown that many factors we believe should decrease the capacity of a system—adding more transmitters, creating more alternative paths for signals to travel, or putting receivers in motion, for example—can actually increase capacity.  This occurs because the more data a smart receiver has about the surrounding environment, the better it can do in distinguishing the desired signal.
The commercial viability of any system using these techniques will depend on business conditions. That is one reason government policies should advance both designated unlicensed bands and underlay approaches that coexist in licensed bands. Under any scenario, though, open spectrum is not a fantasy, but a concept based on proven techniques. It is time for our policies to catch up with the state of technology.
To take advantage of the fantastic potential of open spectrum, we must change our spectrum policies. With a few exceptions, existing laws and regulations are rooted in historical anachronisms.
Since the passage of the Federal Radio Act in 1927 and the Communication Act in 1934, virtually everything about wireless has changed. What began as a technology for ship-to-shore communication became the foundation of the radio, broadcast television, satellite and cellular telephone industries, as well as supporting private radio services, public safety communications, military communications, wireless data networking and a host of other applications. The amount of spectrum considered usable has increased dramatically, as more sophisticated devices have been developed. Analog services are giving way to digital, allowing for additional features and efficiency.
Everything has changed except for one very important thing: We still regulate the radio spectrum based on the technology of the 1920s.
Spectrum licensing arose in the 1920s as a response to the technology of the day. Radio receivers of the period were primitive: they couldn’t distinguish well between different transmissions, so the only way for multiple users to share the spectrum was to divide it up. (Calls for regulation had begun as early as 1912, when nearby ships failed to respond to the Titanic’s distress calls.) By licensing spectrum to broadcasters, with wide separation between bands, the government could ensure that receivers could identify which signal was which.
The exclusive licensing model was almost certainly the right approach when it was developed. It has been in place for so long, during which there has been so much commercial innovation in use of the wireless spectrum, that we take it for granted. When you think about it, though, our approach to spectrum is the exception rather than the rule. We shrug at intense government regulation of communications over the airwaves that would be unconstitutional in other media. After all, wireless communication is speech. Under the First Amendment, the government faces a high burden in justifying any law that defines who may communicate and who may not. Yet Congress and the FCC routinely determine who may broadcast on certain frequencies, and they regularly shut down those, such as “pirate” radio broadcasters, who fail to observe those rules.
The rationale for limiting speech over the airwaves is that there is no alternative. Spectrum is scarce, so the argument goes, so either some may speak or none will be able to get their message across amid the cacophony of interfering voices.  As discussed above, though, that scarcity is not an immutable property of a physical spectrum resource. It’s a historically and technologically contingent judgment.
Needless to say, much has changed since the 1920s. And indeed, there has been a major shift in the government’s approach to spectrum assignment. Auctions have replaced outright grants, competitive hearings, and lotteries as the tool of choice. Beginning with Ronald Coase’s seminal 1959 article, “The Federal Communications Commission,”  economists have argued persuasively that competitive bidding is the most efficient way to assign scarce licenses among competing users. Starting with the personal communications service (PCS) auctions of 1994, the FCC has raised over $30 billion for the US Treasury and delivered substantial new spectrum to the marketplace in this manner.
Most of the debates around spectrum policy today involve variations of the auction idea. Some parties advocate secondary markets  or moving from licenses to fee simple ownership,  while others propose combining auctions with annual lease fees after the initial license period.  These debates, intense though they may be, occur within the safe confines of the dominant exclusive licensing paradigm. If we have decided to license wireless frequencies, there are important questions about how best to do so. But why take licensing for granted?
Capacity-magnifying techniques such as spread spectrum, cooperative networking and software-defined radio make it possible to see spectrum as something other than a physical resource to be licensed. Portions of the radio spectrum could instead be treated as a commons.
A commons, like the air we breathe and the language we speak, is a shared, renewable resource. It is open to all. It is not completely free or inexhaustible, but it can seem that way if individuals follow rules to prevent over-grazing. A commons is entirely compatible with competitive capitalism. The key is that the marketplace occurs among users of the commons; the commons itself cannot be bought or sold. We have no trouble accepting the automobile and trucking industries, even though they depend on public roads and highways that are free to use and maintained by the government. And we accept that even though anyone can drive on the highway, everyone has to observe speed limits, seatbelt laws, and other safety rules. Those public roads even coexist with private toll roads, but we don’t think that privatizing all the roads would improve the quality of transportation.
A spectrum commons works just like the highways. Government defines the scope of the common resource and sets limited rules to facilitate efficient use. That means setting aside unlicensed frequencies, adopting rules to facilitate new “underlay commons,” setting power limits or other technical standards, and responding to any breakdowns.
The beauty of a spectrum commons is that it creates the right incentives. Exclusive licensing and propertization create spectrum monopolies, which seek to maximize the rents they can collect. Forcing licensees to buy spectrum at auction ensures it goes to those who value it highly, but it forces the winner to recoup its up-front investment, biasing the way it makes use of the spectrum. As noted above, exclusive licensing also encourages receiver manufacturers to make their devices as dumb as possible, while a spectrum commons has the opposite effect. In a commons environment, companies can respond to marketplace demands by tailoring new services, since the costs of entry are minimal.
Arguments about the benefits of open spectrum have in the past been largely theoretical. Techniques such as spread-spectrum were widely employed, but primarily in licensed bands or in military applications. Academic research showed the benefits of a spectrum commons. Without mass-market commercial examples, though, few were convinced the idea could fly in the real world.
Such real-world validation arrived in the form of WiFi and related wireless local area network (LAN) technologies. WiFi is a marketing and certification term promulgated by the Wireless Ethernet Compatibility Alliance, an industry trade group. It refers to the 802.11b and 802.11a wireless Ethernet standards defined by the Institute of Electrical and Electronics Engineers (IEEE). 802.11b, which was the first to take off commercially, operates in the 2.4 GHz Industrial, Scientific and Medical (ISM) band and delivers data speeds up to 11 megabits per second. 802.11a operates in the 5 GHz U-NII band and offers connections up to 54 megabits per second. Standards work in this area is ongoing, with proposed standards including 802.11g, which delivers higher-speed connections in the 2.4 GHz band, and 802.11e, which adds quality-of-service mechanisms to support high-quality voice and video delivery.
The IEEE issued the final 802.11b standard in September 1999. The first mass-market commercial implementation, Apple’s AirPort technology, went on the market that year. Since then, the market has grown rapidly, with expected sales of some 10 million PC/laptop adapter cards this year. Vendors such as Cisco, Linksys, D-Link, Netgear and Proxim are doing a brisk business selling access points for home networks, adding value to residential broadband connections. On the enterprise side, wireless LAN deployments doubled last year, with more than one million access points now in use in 700,000 companies, according to the Yankee Group.  Cahners In-Stat sees the WiFi hardware market generating over $5 billion in 2005, and that doesn’t even include service revenues. 
Though WiFi was originally developed for corporate LANs, it has garnered attention for two applications: hotspots and community access points. Hotspots are wireless access points deployed in high-traffic locations such as hotels, airports, and cafes. Typically, the facilities owner contracts with a company that installs the necessary equipment and Internet connection, with the revenue split between them. Sometimes the service provider keeps all the revenue, with the facilities owner benefiting from additional traffic the access point generates. End-users usually pay a per-minute or monthly access fee to connect to the Internet through the hotspot.
Over 4,000 hotspots have been deployed to date.
Community access points are similar to hotspots, but they are made freely available to anyone in the area. Typically, community access points are established by individuals or volunteer groups such as BAWUG in the San Francisco Bay Area and NYCWireless in New York City.
Hotspots and community access points have generated significant media attention, and for good reason. However, they are only one element of a WiFi market that includes several other major applications.
Major corporations are deploying WiFi networks across their corporate campuses to provide ubiquitous connectivity for their employees; universities are doing the same for their students and faculty. Unlike the consumer access points, these deployments typically have beefed up security and reliability. WiFi is one of the few remaining growth areas in the depressed data networking sector, a fact not lost on vendors such as Cisco. Leading information technology services companies such as IBM have developed expertise integrating and installing these corporate networks.
Manufacturers such as Boeing are using WiFi to network their factories and warehouses. Such environments don’t lend themselves to wired connections. The ability to track inventory and access internal corporate documents from anywhere can generate substantial cost savings and efficiency benefits for these companies, who take advantage of the maturity and low cost of WiFi equipment thanks to its consumer applications.
Several companies are linking together the scattered hotspots through roaming arrangements, creating nationwide virtual networks. Examples include Boingo, Joltage, Wayport, and NetNearU. Some of these companies encourage individuals and small businesses to establish new access points, offering to share revenues from wireless access with them. A New York Times report earlier this year stated that several major technology and communications companies including Intel, Microsoft and Cingular Wireless were evaluating creating a nationwide WiFi roaming network, known as Project Rainbow. 
All this activity has taken place in an already-crowded unlicensed band, without any protection against interference from other users. WiFi is an existence proof for the validity of the open spectrum argument.
WiFi is not alone. Several other unlicensed wireless data technologies are either commercially available or nearly so. Each has technical characteristics that lend themselves to particular market opportunities, though there are many areas of potential overlap. The beauty of an unlicensed environment is that hardware vendors and service providers need not go through a gatekeeper such as a cellular carrier to gain access to spectrum. If the technology works, and there is a market for it, the equipment can simply be deployed.
WiFi is a particular protocol designed for local-area network applications. Several companies are trying to marry the cost economies of standards-based 2.4 GHz radios with proprietary software and hardware to support additional capabilities. There are also competing standards to WiFi, including HomeRF and the European HiperLAN2, but these have generally lost out to WiFi in market adoption and will likely fade away.
As described above, ultra-wideband (UWB) systems use such low power that they can underlay beneath existing licensed spectrum bands. Because of the power limitations, current UWB implementations have limited range, but they offer significant capacity. Vendors such as Time Domain and XtremeSpectrum are building chipsets to deliver 100 Mbps or more over short distances. After a long and bitter fight, the FCC authorized UWB underlay for the first time in February. The FCC’s initial rules put strict limits on UWB systems, but the Commission committed to reviewing and potentially loosening the restrictions if interference fears do not materialize.
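The power levels at issue are easy to quantify. As a rough, hedged illustration (the -41.3 dBm/MHz figure is the widely reported Part 15 emission limit for the 3.1–10.6 GHz UWB band; treat the exact numbers as approximate):

```python
# Rough illustration of why UWB underlay is plausible: spreading energy
# across gigahertz of bandwidth keeps the total radiated power tiny.
import math

def total_power_dbm(psd_dbm_per_mhz: float, bandwidth_mhz: float) -> float:
    """Integrate a flat power spectral density over a bandwidth, in dBm."""
    return psd_dbm_per_mhz + 10 * math.log10(bandwidth_mhz)

def dbm_to_milliwatts(dbm: float) -> float:
    return 10 ** (dbm / 10)

# Widely reported Part 15 UWB limit: -41.3 dBm/MHz across 3.1-10.6 GHz.
psd_limit = -41.3
bandwidth = 10600 - 3100  # MHz

total = total_power_dbm(psd_limit, bandwidth)
print(f"Total EIRP: {total:.1f} dBm = {dbm_to_milliwatts(total):.2f} mW")
# Roughly half a milliwatt -- orders of magnitude below a cell phone's output,
# which is why a licensed receiver sees UWB emissions as background noise.
```

The arithmetic shows why underlay and exclusive licensing can coexist: the aggregate power is so small that it sits at or below the noise floor licensed receivers already tolerate.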
Bluetooth and Other Personal Area Network Technologies
WiFi works in a local area, but many wireless applications need only a range of a few feet. Personal-area networking (PAN) encompasses situations such as transmitting between a mobile phone and a headset, sending data between a phone and a personal digital assistant, and printing from a laptop to a printer in the same room. For these scenarios, WiFi may be overkill in terms of power requirements and chip costs. Bluetooth is an unlicensed ad hoc 2.4 GHz standard for such applications. It has been slow to roll out, leading to speculation it would lose to WiFi. However, it now looks as though Bluetooth will find a niche, primarily replacing wires in short-range situations.
Unlicensed Metropolitan-Area Networking
At the other extreme from PANs are metropolitan-area networks (MANs), which cover entire neighborhoods or cities, though often designed primarily for business connections. The IEEE is developing standards, 802.16, for wireless MANs using the 10 GHz - 66 GHz spectrum. Though these are initially targeted at licensed spectrum, the same concepts could be applied in an unlicensed environment. Motorola offers a proprietary unlicensed system called Canopy that is designed for the metropolitan area. Canopy, or some variant of it, might become the basis for the 802.16 standard in unlicensed bands.
The success of WiFi shows that spectrum sharing works in the real world. Without heavy-handed control by government or by service providers who have incentives to maximize only their own welfare, an entire industry has emerged. That industry has developed with no legal protections against competing uses. Despite repeated warnings of a “meltdown,” only isolated anecdotal cases of congestion among WiFi users have been reported. Companies such as Intel and Microsoft are devoting substantial resources to these technologies, which they would be unlikely to do if they were seriously concerned about a tragedy of the commons.
Moreover, wireless LAN technology is evolving and diversifying rapidly. Vendors are beginning to deliver hybrid 802.11a/b chipsets, and devices that add software intelligence to WiFi are coming on the market. Innovation in the WiFi world follows the computer industry curve of rapid, continuous improvement in price and performance.
Limitations in WiFi devices are being addressed through market forces. For example, first-generation WiFi equipment has a relatively weak built-in security mechanism known as Wired Equivalent Privacy (WEP). For users concerned about security, such as enterprises, third parties and hardware vendors quickly developed supplemental security solutions that integrated with standards-based WiFi deployments. Meanwhile, an enhanced authentication standard, 802.1x, was recently ratified by the IEEE.
A key to the success of WiFi is that it uses a different business model than traditional telecommunications and broadband services. Because the network grows incrementally with every new access point and every device capable of receiving WiFi signals, there is no need for incentives to convince a monopoly service provider to build out expensive infrastructure. No one needs to predict what the killer applications of the technology will be, because users will find them on their own. With a licensed service, the network operator must invest in delivering services in the hope that customers will pay enough to recoup that investment. With WiFi, services grow bottom-up through market forces.
There are many things WiFi cannot do. For example, as a short-range LAN technology, it can’t provide universal coverage over a large area, and it isn’t designed for mobile scenarios such as connecting from a car. For these applications, WiFi gracefully coexists with licensed services. Vendors such as Nokia are building equipment that supports both WiFi and licensed wide-area cellular services, allowing users to switch automatically to the best network for their current needs. Licensed mobile operators are beginning to enter the WiFi hotspot market themselves.
Unlicensed technologies could play an important role in residential broadband adoption. Today, incumbent cable and local telephone companies dominate the residential broadband market with cable modem and digital subscriber line (DSL) services. These companies control the two primary wires into homes. To deploy high-speed services, they must upgrade their networks, which requires significant investment. Most incumbent service providers now charge $45 to $50 per month for broadband connections. They frequently place significant limitations on the services, including highly asymmetric bandwidth, prohibitions on home servers, prohibitions on virtual private network connections, and limits on streaming video usage.
The operators claim these restrictions are necessary for their broadband offerings to be economically viable, even though equivalent services in other countries are priced significantly lower. Though subscribership is increasing and new technologies are reducing the costs of broadband infrastructure, many companies have actually increased their prices during the past year, as competition dried up.
The fundamental problems in the residential broadband market are the same as in wireless. Service providers must build expensive networks and define the services for which they think users will pay, then charge high rates to recover their costs. Most cable modem and DSL providers market their services as providing faster Web surfing than dial-up access. Many end-users simply don’t find this compelling, especially at $50 per month. Unlike the open WiFi market, there is no room for innovators to roll out new service offerings or better technology, because everything must go through the network owner.
As noted above, standard WiFi technology provides only short-range connections, to a range of approximately 300 feet. This is insufficient for most residential broadband deployments. To deliver broadband to a home, the home must connect to a high-speed Internet trunk, which can be shared among many customers. Having a fast WiFi connection in a house doesn’t substitute for DSL or a cable modem, because the wired connection is still necessary to reach the public Internet.
Despite these limitations, there are several approaches that could allow unlicensed wireless devices to deliver last-mile broadband service. Companies such as Nokia (with its Rooftop system), MeshNetworks and SkyPilot have created systems that use a meshed architecture. In other words, rather than connecting to a central hub, each device can send information to every other device it can see. Information can be routed through the network using many different paths, depending on capacity, line of sight, and other characteristics. The mesh approach gets around limitations that hobbled previous fixed-wireless systems in the last mile. Other companies such as Etherlinx and Motorola have created proprietary technologies on top of WiFi radios to allow significantly increased range in traditional point-to-multipoint deployments. Motorola claims its Canopy technology can serve up to 1,200 subscribers from a single access point at a range of up to two miles, operating in the unlicensed 5 GHz band. Unlicensed wireless connections could also serve as “tails” at the end of existing phone, cable, or fiber infrastructure in residential neighborhoods.
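The routing idea behind these meshed systems can be sketched in a few lines. This is a hypothetical toy model, not any vendor’s actual protocol: each node knows which neighbors it can reach and a rough link cost, and traffic follows the best available path, rerouting automatically when a node disappears.

```python
# Toy mesh-routing sketch: lowest-cost path over a link-quality-weighted
# graph (Dijkstra's algorithm). Topology and costs are invented.
import heapq

def best_path(links, source, dest):
    """links: {node: {neighbor: cost}}. Returns the lowest-cost path
    from source to dest as a list of nodes, or None if unreachable."""
    queue = [(0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None  # no route exists

# Five houses; only E has a wired connection to the Internet trunk.
mesh = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "D": 1},
    "C": {"A": 4, "E": 1},
    "D": {"B": 1, "E": 1},
    "E": {"C": 1, "D": 1},
}
print(best_path(mesh, "A", "E"))      # -> ['A', 'B', 'D', 'E']

# If node D goes offline, traffic from A reroutes through C automatically:
degraded = {n: {m: c for m, c in nbrs.items() if m != "D"}
            for n, nbrs in mesh.items() if n != "D"}
print(best_path(degraded, "A", "E"))  # -> ['A', 'C', 'E']
```

The self-healing behavior in the last two lines is what lets a mesh get by with far fewer wired backhaul points than a traditional point-to-multipoint deployment.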
All these configurations have their limitations. As with any wireless service, connection quality depends on physical geography and the local spectral environment. As a result, it’s unlikely unlicensed technologies would represent the majority of broadband connections in the near future. Even if they take a small share of the market, however, wireless last-mile systems would foster significant competition and innovation. Wired broadband providers would have to improve their offerings or lower costs to compete.
Unlicensed wireless technologies will impact the broadband market even if they aren’t used as the primary connection method. Large numbers of WiFi access points are being deployed for home networking. Users install these devices in their homes to share broadband connections among several computers, share peripherals such as printers, or give themselves untethered Internet access anywhere in their house. Digital consumer electronics devices are beginning to incorporate WiFi connections as well. For example, Moxi Digital, which recently merged with Digeo, a company funded by Microsoft co-founder Paul Allen, incorporates an 802.11a transmitter in its personal media center. This allows the Moxi box to stream high-quality audio and video among TVs and stereos throughout a house. Intel is spearheading a standards effort to allow WiFi to interoperate with FireWire (IEEE 1394), a wired standard popular for digital media applications.
As home networks and related devices proliferate, they will create a “pull” for broadband applications. WiFi hardware is becoming sufficiently cheap for hardware manufacturers to include it without significant effects on device prices. As users buy laptops and consumer electronics hardware with high-speed wireless connections built in, they will find new uses for it, such as sending music files from their computer to their stereo system, or sharing pictures downloaded from the Web. Many of these applications will benefit from broadband connections into the home. Wireless devices will therefore stimulate broadband demand even when the last-mile connection is wired.
Open spectrum is not inevitable. Technologies now available or under development will lay the groundwork for a radically more open and more efficient wireless environment, but without the right policy framework, those technologies may never see the light of day. WiFi, exciting though it may be, cannot simply evolve into the full realization of open spectrum.
Despite the promise of open spectrum, there are many threats to the continued growth of unlicensed wireless. Open spectrum profoundly threatens the status quo. It represents a new form of potential competition for existing wireless services, and for wired services as well. Moreover, it runs counter to conventional assumptions about which policies are truly market-based. Absent a clear understanding of open spectrum’s implications, policymakers may take actions that would prevent it from reaching its potential. The FCC and Congress must ensure that the following threats from incumbent industries do not undermine the future potential of unlicensed technologies:
Requests for Regulatory Protection
Sirius Satellite Radio filed a petition with the FCC earlier this year seeking restrictions on WiFi based on trumped-up concerns about interference with its adjacent licensed satellite transmissions. Though Sirius hadn’t even launched its service and the potential for interference was minimal, it wanted significant limitations and new device requirements placed on the thriving WiFi industry. The Sirius petition was withdrawn after it provoked serious objections. Nonetheless, it gives a sense of how licensed users could seek to hamstring unlicensed alternatives. Wireless operators facing new competition from unlicensed devices may similarly rely on scare tactics and legal maneuvers to prevent unlicensed services from encroaching on their markets.
If the FCC were to give spectrum licensees full ownership rights, it would significantly decrease the likelihood that spectrum would be available for unlicensed uses. Companies that pay for control over frequencies will want to recoup their investments, which means excluding competing users. Even if “band managers” could operate toll-gated frequencies for unlicensed use, the transaction costs involved would be substantial. Worst of all, propertization is a one-way street. Once spectrum becomes private property, converting some of it to unlicensed “parks” or even eliminating restrictions on band sharing could require costly eminent domain proceedings. Giving spectrum licensees greater flexibility or opportunities to engage in secondary market transactions may make sense, but the step from there to further propertization would have significant negative consequences.
Unlicensed wireless data devices must at some point connect into the public Internet. For traditional point-to-multipoint systems such as WiFi, an access point serves as the local hub and connects to a wired data connection such as a T-1 line to deliver traffic to the Internet. Bringing data from a local point of presence to a central aggregation point is known as “backhaul.” It typically involves facilities of incumbent local exchange carriers, because their networks span virtually every city. Because of the lack of competition, backhaul is expensive. Moreover, if telephone companies see unlicensed wireless devices as competitive, they may seek to prevent them from connecting into their networks. An advantage of meshed networks and systems that combine short-range unlicensed “tails” with long-range unlicensed “backbones” is that they cut down on the need for wired backhaul connections. Until such alternatives are widely available, the government should reject rule changes that would make it easier for telephone companies to discriminate in provision of wireless backhaul, and should police against anti-competitive behavior.
At the same time, policymakers should take affirmative steps to facilitate open spectrum. Most current unlicensed wireless services, including WiFi, operate in the 2.4 GHz and 5 GHz unlicensed bands. These bands are relatively narrow, at high frequencies that limit their propagation, and subject to many established competing uses. Though unlicensed devices can coexist in seemingly crowded spectrum, their ability to do so is not absolute. Moreover, WiFi’s software protocols don’t have the adaptive and cooperative characteristics of truly scalable unlicensed networks. Current FCC rules have done a reasonable job of setting conditions that allow for innovation and market growth, but more is needed.
• Develop rules to foster more effective cooperation among unlicensed users
• Set aside more spectrum for unlicensed uses
• Eliminate restrictions on non-intrusive underlay techniques across licensed bands
• Promote experimentation and research in unlicensed wireless technology
All of these elements are important. WiFi, other unlicensed technologies in designated bands, and underlay are all part of the answer. Furthermore, the mix will change over time. Existing unlicensed bands are delivering value today. However, newer approaches designed from the ground up for open spectrum will be the long-term winners. The only way to allow market forces to determine the best solutions is to give alternative approaches a chance. By announcing its intention to move forward with a comprehensive open spectrum agenda, the FCC can give each of these approaches that chance.
The first step is to enhance existing unlicensed bands, which were not designed with open spectrum in mind. The FCC should work with the private sector and the technical community to identify minimal requirements to facilitate efficient spectrum sharing. In the near term, this could include service rules for the 5 GHz band to allow for continued growth of wireless data networking applications. These should not pre-determine technology or applications, but could include general requirements such as mandating that devices be capable of two-way packet-switched communications. The FCC should also identify restrictions in its existing rules, such as outmoded prohibitions on repeaters, that could be removed to allow for greater spectrum sharing.
In the future, as it establishes new unlicensed bands and eliminates underlay restrictions, the FCC could define additional “rules of the road,” either as requirements or as advisory “best practices.” For example, companies could be encouraged to build devices that modulate their output based on actual conditions, or that repeat traffic for other users, allowing for meshed architectures.
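The kind of “modulate output based on actual conditions” behavior contemplated here can be illustrated with a toy power-control rule. All the names and thresholds below are hypothetical; the point is only that a smart device can use the minimum power that keeps its own link working, leaving headroom for its neighbors:

```python
# Toy adaptive power control: transmit at the lowest power that still
# closes the link. Sensitivity, margin, and cap values are invented
# for illustration, not drawn from any actual rule or standard.
def choose_tx_power(path_loss_db: float,
                    rx_sensitivity_dbm: float = -85.0,
                    margin_db: float = 10.0,
                    max_power_dbm: float = 20.0) -> float:
    """Return the minimum transmit power (in dBm) that leaves margin_db
    of headroom at the receiver, capped at a regulatory maximum."""
    needed = rx_sensitivity_dbm + margin_db + path_loss_db
    return min(needed, max_power_dbm)

# A nearby neighbor needs far less power than a distant one:
print(choose_tx_power(path_loss_db=70.0))   # -5.0 dBm for a close link
print(choose_tx_power(path_loss_db=100.0))  # 20.0 dBm (capped) for a far link
```

A band full of devices behaving this way generates far less aggregate interference than one where every device transmits at the legal maximum, which is why cooperation rules can expand effective capacity without expanding spectrum.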
Whatever rules are adopted should be developed in consultation with industry representatives and technical experts to ensure they do not over- or under-specify standards. Reasonable accommodations should be made for uses of spectrum other than data networking, including scientific activity such as radio astronomy. Different rules may apply to particular bands or techniques. Whatever decisions are made will need to be reviewed periodically as conditions evolve.
Improving existing unlicensed bands isn’t enough. Most are so narrow and congested that their utility for open spectrum is limited. Furthermore, the high frequency of the most prominent unlicensed bands limits signal propagation. Lower-frequency spectrum that penetrates weather, tree cover, and walls would provide significant advantages for services such as last-mile broadband connectivity.
The FCC should identify additional spectrum bands that can be designated for use as unlicensed “parks,” with a particular focus on frequencies below 2 GHz where propagation is best. The FCC will need to consult with other relevant agencies such as the Department of Defense, Federal Aviation Administration, and Department of Commerce; technical and scientific organizations such as the National Academy of Sciences and Institute of Electrical and Electronics Engineers; and the private sector. Furthermore, there are many possible sources for additional unlicensed spectrum. The 5 GHz unlicensed band could be expanded relatively easily, a move that would also help bring the US into line with spectrum allocations elsewhere in the world.
The FCC took a major step forward with its February approval of ultra-wideband. The Commission wisely rejected overblown fears about interference, relying on technical data and prudent restrictions on UWB deployment. However, the Commission’s initial rules still put unnecessarily severe limits on where and how UWB can be used. Assuming that experience shows the fears about interference to be unfounded, the FCC should loosen its restrictions without delay.
The FCC should look at other ways to facilitate underlay of unlicensed communications in existing spectrum bands. Underlay can be achieved either through weak signals or adaptive, agile receivers. As technology advances, the FCC could consider a rule allowing underlay in certain bands, so long as devices check the local environment before transmitting and vacate a frequency within a certain number of milliseconds if a licensed service appears there. Underlay could also be used as a transition mechanism in bands where there are limited numbers of incumbents. Those incumbents could be allowed to remain in the band, but without the current guarantees against interference.
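The “check, transmit, vacate” rule described here can be sketched as a simple per-slot decision loop. The sensing model and detection threshold below are placeholders, not an actual FCC requirement:

```python
# Toy sense-and-vacate underlay loop: transmit only when the channel is
# quiet, and vacate the moment a licensed signal appears. The threshold
# value is invented for illustration.
def run_device(channel_readings_dbm, detect_threshold_dbm=-62.0):
    """channel_readings_dbm: sensed power, one reading per time slot.
    Returns the device's action ('transmit' or 'vacate') for each slot."""
    actions = []
    for reading in channel_readings_dbm:
        if reading > detect_threshold_dbm:
            actions.append("vacate")    # licensed signal detected: stop at once
        else:
            actions.append("transmit")  # channel clear: safe to underlay
    return actions

# The channel is quiet, then a licensed user appears in the fourth slot:
readings = [-90.0, -88.0, -91.0, -40.0, -42.0]
print(run_device(readings))
# -> ['transmit', 'transmit', 'transmit', 'vacate', 'vacate']
```

Because the decision is made fresh every slot, the licensed incumbent regains a clean channel within one sensing interval, which is the property a millisecond-scale vacate deadline would formalize.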
The government should seek out additional mechanisms to encourage the development and deployment of unlicensed devices. These could include liberalizing rules for experimental licenses, funding research projects, and using government procurement power to drive adoption of WiFi or other technologies. The Defense Advanced Research Projects Agency (DARPA) of the Department of Defense has a distinguished history of supporting cutting-edge research in data networking, including the packet-switching technology that led to the Internet. DARPA has long funded research into meshed wireless networking, ultra-wideband, and software-defined radio, because of their military applications. These efforts should be continued, and every effort made to ensure smooth transfer of the resulting technologies to civilian applications.
The FCC and other relevant agencies should review their rules to identify unnecessary restrictions that keep unlicensed devices out of existing programs. For example, the FCC doesn’t allow the use of Schools and Libraries subsidies for unlicensed networking devices, because they do not involve a communications “service.” Of course, government should not try to pick winners among competitors in the marketplace, but rather work in tandem with the private sector to ensure innovative technologies can reach their potential.
Alongside these steps, the FCC and Congress should continue their broader efforts to foster investment and competition in communications. Open spectrum will flourish in a growing market.
The forthcoming return of analog television spectrum provides an opportunity to put some of these policies into practice. Congress has directed the FCC to auction the 700 MHz spectrum now occupied by broadcast channels 60-69, though the auction has been delayed several times. Because of its propagation characteristics, the 700 MHz spectrum could make an excellent unlicensed wireless park, a scenario that simply could not be contemplated when the original plans for return of that spectrum were drawn up. Congress should take advantage of the opportunity and designate some or all of the 700 MHz spectrum for unlicensed devices. As a transitional mechanism, the FCC could allow only underlay uses that do not intrude on incumbent licensees.
We are living under a faulty set of assumptions about spectrum. Licensing may have been the only viable approach in the 1920s, but it certainly isn’t in the first years of the 21st century. We take it for granted that companies must pay for exclusive rights to spectrum, and that once they do, they must invest in significant infrastructure buildout to deliver services. We also take for granted a pervasive level of regulation on how spectrum is used, which would be intolerable for any other medium so connected to speech. We assume that market forces, if introduced into the wireless world at all, must be applied to choices among monopolists rather than free competition. We make these assumptions because we can’t imagine the world being otherwise.
Open spectrum forces us to rethink all of our assumptions about wireless communication. By making more efficient use of the spectrum we have, it can effectively remove the capacity constraints that limit current wireless voice and data services. By opening up space for innovation, it could lead to the development of new applications and services. It could provide an alternative pipe into the home for broadband connectivity. And it could allow many more speakers access to the public resource of the airwaves.
Today, we stand at a crucial point. Our policies could fritter away open spectrum’s historic opportunity, either through inaction or harmful limits on new technologies. Or we could listen to what the market and technology are telling us. Computers have made wireless devices vastly smarter than they were in the past. It’s time for our policies to become smarter as well. Promoting open spectrum is the most democratic, deregulatory, pro-investment, and innovation-friendly move the US Government could make.
* Kevin Werbach is a technology consultant, author, and founder of the Supernova Group. He has served as the Editor of Release 1.0: Esther Dyson’s Monthly Report, and as Counsel for New Technology Policy at the Federal Communications Commission.
 The most prominent unlicensed bands are in the 900 MHz range, the 2.4 GHz range and the 5 GHz range.
 Some bands are licensed for shared use, meaning that more than one entity is permitted to transmit. In such cases, though, other users are still prohibited from using the spectrum.
 See, e.g., Kevin Werbach, “Open Spectrum: The Paradise of the Commons,” Release 1.0, November 2001, available from the author or at http://www.release1-0.com; Yochai Benkler, “Open Spectrum Policy: Building the Commons in Physical Infrastructure,” presentation at the New America Foundation conference “Saving the Information Commons,” May 10, 2002, available at http://www.newamerica.net/Download_Docs/pdfs/Doc_File_122_1.pdf; Yochai Benkler, “Overcoming Agoraphobia: Building the Commons of the Digitally Networked Environment,” 11 Harvard Journal of Law and Technology 287 (1998), available at http://www.law.nyu.edu/benklery/agoraphobia.pdf.
 The phrase is far from perfect. “Open” is a notoriously vague term in the computer world, and unlicensed wireless systems don’t necessarily have the same characteristics as open source software or open technical standards. The word “spectrum” evokes the physical image of discrete frequency bands that open spectrum seeks to overcome. Nevertheless, open spectrum is the most widely used term for what we are describing, so we employ it here.
 Underlay is similar in some ways to low-power radio stations that can broadcast without harming high-power commercial stations in the same band. However, because it uses intelligent devices and modulation techniques, open spectrum underlay is even less likely to have any noticeable effect on licensed users.
 Among the 68 channels reserved nationwide for broadcast television, an average of only 13 channels per market are actually in use. The remaining “white space” is set aside to ensure antiquated analog receivers can distinguish among the channels.
 Futurist George Gilder made a similar analogy in his pioneering series of “telecosm” articles in the early 1990s. See George Gilder, “Auctioning the Airways,” Forbes.
 The key is the relative absence of scarcity. Some ocean resources, such as certain fish stocks, were once abundant but have become scarce due to overfishing. In those cases, economic and regulatory mechanisms to allocate the scarce resource make sense. With spectrum, we are far away from this point of scarcity.
 Technologist David Reed calls this “cooperation gain.” See David P. Reed, comments in FCC Docket No. ET 02-135.
 I first heard this analogy used by David Reed.
 This paper uses the term “spread spectrum” in a general sense. An even broader term is “wideband,” which covers any method that employs a wide channel.
 See Reed, supra note 9; Tim Shepard, “Decentralized Channel Management in Scalable Multihop Spread-Spectrum Packet Radio Networks,” MIT dissertation (1995), available at ftp://ftp.lcs.mit.edu/pub/lcs-pubs/tr.outbox/MIT-LCS-TR-670.ps.gz (describing one scalable architecture).
 See Reed, supra note 9, at 13-18.
 See Red Lion Broadcasting Co., Inc. v. Federal Communications Commission, 395 U.S. 367 (1969).
 Ronald Coase, “The Federal Communications Commission,” 2 Journal of Law and Economics 1, 17-35 (1959).
 See Comments of 37 Concerned Economists, FCC Docket No. WT 02-230.
 See Thomas Hazlett, “The Wireless Craze, The Unlimited Bandwidth Myth, The Spectrum Auction Faux Pas, and the Punchline to Ronald Coase’s ‘Big Joke’: An Essay on Airwave Allocation Policy,” AEI-Brookings Joint Center for Regulatory Studies Working Paper 01-02, January 2001; Arthur De Vany, “A Property System for Market Allocation of the Electromagnetic Spectrum: A Legal-Economic-Engineering Study,” 21 Stanford Law Review 1499 (1969); see also Jora Minasian, “Property Rights in Radiation: An Alternative Approach to Radio Frequency Allocation” 18 Journal of Law and Economics 221 (1975).
 See Reply Comments of New
 WECA originally used WiFi to designate 802.11b, and WiFi5 for the higher-speed 802.11a standard. To avoid customer confusion, the group has changed its policy and will now issue the WiFi compatibility mark to devices using both standards.
 These are theoretical peak speeds. Under real-world conditions, 802.11b delivers several megabits per second and 802.11a delivers several tens of megabits. By comparison, wireline broadband connections over cable or copper phone lines (DSL) typically deliver data speeds of 1 megabit per second or less.
 See Stephen Lawson, “Wireless LAN Use Growing Fast,” InfoWorld.
 See “Wi-Fi: It’s Fast, It’s Here -- and It Works,” BusinessWeek; “… Expands Wi-Fi Access in Starbucks,” Wall Street Journal.
 John Markoff, “Talks Weigh Big Project on Wireless Internet Link,” New York Times.
 See Revision of Part 15 of the Commission’s Rules Regarding Ultra-Wideband Transmission Systems, ET Docket No. 98-153, First Report and Order, FCC 02-48.