Industry View: Rod Alferness

Believe it or not, at Bell Labs, that byword for cutting-edge research, a few people work hard at squeezing more data through copper wires. Copper. The stuff that used to carry Morse code and telephone conversations along the seabed before glass took over. Copper, the metal that still carries the Internet for that last mile into most houses and offices – but not for very long, according to most pundits.

That would include Rod Alferness, the chief scientist at Bell Labs in Murray Hill, New Jersey. Until four years ago Alferness was senior VP for optical research at this research arm of Alcatel-Lucent. In his present position, he is responsible for its long-term strategy. And there, too, optical technology looms large.

“I’ve been in optical research going back to 1976. I gave up my personal research lab in 2001, when I became chief technology officer (CTO) of the optical business of Lucent, Bell Labs' parent company. And up to that year, in the work that I had led, optical really drove things. We’ve always known: fiber will be the ultimate transmission medium. So one of the things I had to do as a CTO was to convince the optical-business people that the future was in optical networks.”

Dispersion

Around that time an important problem with the transmission of information on a light beam had been solved: dispersion. This effect makes a short pulse of energy spread out as it travels through an optical fiber. Because of that, pulses – originally the way data was encoded for transmission over a fiber – have to start their journey well separated from each other. In other words, dispersion limits data throughput. However, a combination of low-dispersion fibers and devices that modify the light passing through them so as to counteract the spreading could hold dispersion at bay.
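
To get a feel for the numbers, here is a back-of-the-envelope sketch in Python; the fiber and source figures are illustrative textbook values, not numbers from the interview:

    # Back-of-the-envelope dispersion limit (illustrative numbers, not from
    # the interview). Chromatic dispersion broadens a pulse by roughly
    #   delta_t = D * L * delta_lambda,
    # and a common rule of thumb is to keep that below half the bit period.

    D = 17.0            # ps/(nm*km), standard single-mode fiber near 1550 nm
    delta_lambda = 0.1  # nm, spectral width of the modulated source (assumed)
    bit_rate = 10e9     # bits/s, i.e. 10 Gbps

    bit_period_ps = 1e12 / bit_rate          # 100 ps per bit at 10 Gbps
    max_broadening_ps = 0.5 * bit_period_ps  # tolerate half a bit period

    max_length_km = max_broadening_ps / (D * delta_lambda)
    print(f"Dispersion-limited reach: {max_length_km:.0f} km")  # ~29 km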

“Then came the optical fiber amplifier,” Alferness recalls. With that all-optical device, light could arrive by fiber, be amplified, and leave again by fiber, without having to be converted into electrical current and back. This made it possible to encode information on each of multiple channels at different wavelengths, increasing the total information capacity of the fiber. Wavelength division multiplexing, or WDM, was born.
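
The arithmetic behind WDM is simple multiplication; a quick sketch in Python, with an assumed channel count and per-channel rate purely for illustration:

    # WDM capacity is roughly channels x per-channel rate (illustrative
    # figures, not from the interview): an amplifier band carrying 80
    # channels at 10 Gbps each yields 800 Gbps through a single fiber.
    channels = 80          # e.g. one channel per 50 GHz slot (assumed)
    rate_per_channel = 10  # Gbps per wavelength
    print(f"Aggregate fiber capacity: {channels * rate_per_channel} Gbps")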

“We recognized that this would allow you to network full wavelengths and also have a way of switching information based on those wavelengths,” Alferness says. This was the next step in optical networking – semi-static, point-to-point switching. “In those days, switching was all electronic. We had a vision of optical networking, of doing some of the switching in the optical domain, on the basis of wavelength. You would have a connection of 10 Gbps; some of it would go to San Francisco, another part to Chicago. You would build an optical switch that would have the signal for Chicago drop off at some point. And it wouldn’t even be permanent; such a switch could be reconfigured – they're called reconfigurable optical add/drop multiplexers (ROADMs) – but that would take time, say 10 milliseconds.”
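
Conceptually, a ROADM behaves like a rewritable table from wavelength to action; a minimal Python sketch, with hypothetical wavelengths and destinations:

    # Minimal model of a ROADM node (wavelengths and cities hypothetical):
    # each wavelength channel is either dropped locally or passed through,
    # and the table can be rewritten (on millisecond timescales) to
    # re-route traffic.
    drop_table = {
        1550.12: "drop",  # nm -> this channel terminates here (Chicago)
        1550.92: "pass",  # nm -> this channel continues (San Francisco)
    }

    def route(wavelength_nm):
        """Return what this node does with a given wavelength channel."""
        return drop_table.get(wavelength_nm, "pass")

    drop_table[1550.92] = "drop"  # reconfiguration: ~10 ms in hardware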

Vision

“A number of companies were trying to do that. Then the Internet bubble burst, and those activities stopped. It was the right vision, but it would come a few years later. Now we have metropolitan networks that are wavelength-routed – built by Alcatel-Lucent, for example, for Verizon. When I left the business unit and went back to research as senior VP for optical research, in 2004, it was a year or two too soon...”

Since then, the growth of data volume has picked up again, and optical networks have taken full advantage of a very fortunate property of fiber networks with optical amplifiers: their capacity depends mostly on how fast data can be put in and taken out at the endpoints.

“The beauty is, if you put new multiplexers in, for 40 Gbps, the amplifiers can still handle it, and so can the wavelength switches, the ROADMs. You simply need to upgrade the transmitters and receivers. So in 2005 data growth was really happening, transport was selling, and maximum realizable bit rates were about 40 Gbps per wavelength channel. We started doing research on 100 Gbps per channel.”

As data volumes grow, more and more domains are becoming suitable for fiber technology: metropolitan networks and even the last leg of any data connection, to a data center in a building, or to a small computer network in a home. But each time fiber penetrates a new level of the network hierarchy, the complexity of the combined glass network increases.

“In new construction, builders are often putting fiber in now. These are typically 25 megabits per second (Mbps) connections; in Japan they’re even doing 100 Mbps. You could deploy multiple fibers from the network's local ‘central office’, one to each home. But that’s not very cost-efficient: in that central office you’d have to have one box at the end of each fiber. So usually it’s a ‘passive optical network’: you send the signal to a splitter, and each house receives a weaker version of the full signal. All houses get all of the data, and protocols make sure a node can only access the data destined for it.”
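
The cost of that sharing can be quantified: a passive 1:N splitter divides the optical power N ways, a loss of at least 10·log10(N) decibels. A short Python illustration, assuming a common 1:32 split:

    # Power budget of a passive splitter (textbook relation, split ratio
    # assumed): spreading one signal over N homes costs at least
    # 10*log10(N) dB, since every home receives 1/N of the launched power.
    import math

    homes = 32  # a common split ratio (assumed)
    splitter_loss_db = 10 * math.log10(homes)
    print(f"1:{homes} split -> {splitter_loss_db:.1f} dB minimum loss")  # ~15 dB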

Switching the Data Stream

For anyone used to viewing the Internet as a maze where data packets are constantly bounced around from one switch to the next, each one recalculating the optimal route to the destination, all this has a slightly straitjacketed feel to it. Any optical switches involved can be reconfigured only slowly, if at all. And at many nodes the optical signals must be temporarily converted to electrical and back, to read the address and correctly route the data packets. How long will we have to wait for flexible, fast, all-optical switching and routing?

“That is indeed the next step: switching a WDM packet stream. You can do it, but then you have to put something active in the box.”

One idea under consideration for this purpose is a variation on the demultiplexers that are already used to statically switch light beams with different colors into different fibers. It is called an arrayed waveguide grating coupler. Its central component is a grating, which has roughly the same effect as a prism: it changes the direction of a light beam, depending on its wavelength. “So, say light at one wavelength comes out at port 1, and light at another wavelength comes out at port 2. If I now have a device that can change the color of the light, I can decide which port it will go to.”
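
In other words, the grating's wavelength-to-port mapping is fixed, and the routing decision moves into the choice of wavelength. A toy Python sketch, with a hypothetical port map:

    # Toy model of an arrayed waveguide grating (port map hypothetical):
    # the grating's wavelength-to-port mapping is fixed, so the routing
    # decision is made by choosing which wavelength the packet rides on.
    PORT_OF = {1550.12: 1, 1550.92: 2}  # nm -> output port

    def output_port(wavelength_nm):
        return PORT_OF[wavelength_nm]

    # To send a packet out of port 2, convert its carrier to 1550.92 nm.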

Changing the color of the light may be done at the source, by a tunable laser, when the data are still in the form of an electrical signal. Electronics would read the address and make sure the light onto which the packets are encoded has the correct wavelength.

Much more useful would be a device that changes the color of the light as it travels through an optical network. This can be achieved through a wavelength converter, a device into which two light beams are introduced, one with the data and another not carrying any information, but with the desired new frequency. The two beams interact in such a way that the “clean” light beam is modulated with the information from the original carrier. “If I can change the color in about a nanosecond, I can use that to switch a stream of packets of, say, 100 nanoseconds' duration.”
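
Those two timescales imply that the conversion itself wastes little capacity; a quick check in Python using the quoted figures:

    # Quick check of the quoted timescales: a ~1 ns wavelength-conversion
    # time between ~100 ns packets costs about 1% of the link's time.
    switch_time_ns = 1
    packet_time_ns = 100
    overhead = switch_time_ns / (switch_time_ns + packet_time_ns)
    print(f"Switching overhead: {overhead:.1%}")  # ~1%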

Such switching times are indeed possible. An important achievement was the building of converters and grating couplers out of indium phosphide. “That means we can combine a high-speed converter and a grating coupler on one chip.”

The fact that indium phosphide is a semiconductor with properties that make it suitable for both electronics and photonics is crucial here, because the wavelength converter must still be told, by data in electrical form, which wavelength to convert the signal to. “We will have to read the address from the header and process it electronically at first. That means we have to put the address information on the light beam at a much lower bit rate than the data itself. But that's not such a problem. The valued property of this kind of switch is that you keep the data all-optical.”

The Next Decade

The next ten years will see progressive application of optical switching, Alferness believes. “We expect it in the backbone, possibly in metro networks, where data rates of up to 100 Gbps will have to be switched. A great deal of work on that is being done, including at Bell Labs, some of it funded by DARPA. A completely all-optical approach for the whole connection, from transmitter, via routers, to receiver, is probably not realistic – not in the next ten years.”

The reason is that connections into individual homes or businesses won't reach the high transmission rates at which all-optical switching becomes necessary. On the other hand, Alferness does think that, in the next decade, the increase in capacity of fiber cables will not be able to keep up with the increase in demand. “In the past ten years, with WDM and the fiber amplifier, we have achieved a factor of 100 in capacity increase for the long haul. We could do that very cost-effectively, by increasing the number of wavelength channels in the fiber. In the next ten years, we anticipate demand will go up by another factor of 100. For present cables to keep up, we are looking at data rates beyond 100 Gbps, very probably a terabit per second. We will be severely challenged to deliver that.”
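
Rough Python arithmetic shows why per-channel rates must carry most of that growth; the channel count and starting rate below are illustrative assumptions, not figures from the interview:

    # What a further factor of 100 means in round numbers (illustrative):
    # ~80 channels x 40 Gbps is ~3.2 Tbps per fiber; a 100x jump asks for
    # ~320 Tbps, which channel count alone cannot plausibly deliver.
    channels_now, rate_now = 80, 40         # Gbps per channel (assumed)
    capacity_now = channels_now * rate_now  # 3200 Gbps
    target = 100 * capacity_now             # 320,000 Gbps = 320 Tbps
    rate_needed = target / channels_now     # same channel count
    print(f"Needed per-channel rate: {rate_needed:.0f} Gbps")  # 4 Tbps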

To increase bandwidth, the number of wavelength channels could conceivably be increased further. New coding algorithms, some of them developed for wireless Internet connections, could help to increase the capacity of each wavelength channel. But room for improvement in several areas does not add up to a jump of two orders of magnitude. “In the end, if you need more capacity, you can lay more cable,” says Alferness. “But that kind of improvement is much more expensive than the kind we've had up till now.”
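
One way such coding raises per-channel capacity is by packing more bits into each transmitted symbol, as coherent systems do. A Python sketch with illustrative parameters (a 28-gigabaud symbol rate, QPSK modulation, two polarizations – assumptions, not figures from the interview):

    # Per-channel rate under multilevel modulation (textbook relation,
    # numbers illustrative):
    #   rate = symbol_rate * bits_per_symbol * polarizations
    symbol_rate_gbaud = 28  # a typical coherent symbol rate (assumed)
    bits_per_symbol = 2     # QPSK carries 2 bits per symbol
    polarizations = 2       # dual polarization doubles the rate

    rate_gbps = symbol_rate_gbaud * bits_per_symbol * polarizations
    print(f"Per-channel rate: {rate_gbps} Gbps")  # 112 Gbps, i.e. '100G'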

Video Traffic

Whether a vast increase in the number of cables becomes necessary will also depend on the nature of the demand for data transport. “Much of the traffic right now is not generated in real time. It’s video traffic. Maybe you can store lots of information at nodes. Friends in the memory business tell me that, according to the current paradigm, in the next year the amount of video stored will increase by a factor of 100.”

“By caching data and scheduling the transport, the peak traffic you would have to design your network for would not grow as fast. But even that is not certain. It is also possible that the more caching people decide to do, the more demands are placed on the network.”

Contact Data

Rod C. Alferness was chief scientist at Bell Labs in Murray Hill, New Jersey. He is now the Richard A. Auhll Professor and Dean of the College of Engineering at the University of California, Santa Barbara.
Website: http://engineering.ucsb.edu
Office of the Dean, College of Engineering
1030 Harold Frank Hall
University of California
Santa Barbara, CA 93106-5130