Evolution from 10G to 100G for a Metro Network

Introduction


With data traffic volume increasing by around 40% each year, the prevailing 10 Gb/s optical networks are quickly becoming saturated. While long-haul research tries to avert the ‘capacity crunch’ with novel fibers, advanced optical components and sophisticated digital signal processing, the technology transition for metro networks, enterprises and datacenters has to be smooth yet swift. For the roadmap to 100G metro networks, economic viability is of paramount importance, together with greater space, power and bandwidth efficiency. It is essential that the upgrade take advantage of the current infrastructure with minimal disruption to existing services and remain flexible enough to accommodate newer equipment on demand.

Overlaying flourishing 10 Gb/s services with additional co-propagating 10 Gb/s channels on different colors, or wavelengths, is already common practice. Network operators eager to increase capacity have found that overlaying 40 Gb/s channels onto available fiber, rather than leasing more dark fiber, is the most pragmatic approach. With the advent of 100G technology, interest is now shifting from 40G to 100G installations. This paper provides a primer on 100G technology developments and examines the important parameters and prerequisites supporting practicable 100G metro network solutions. An example of complementing existing 10 Gb/s services with new 100 Gb/s WDM channels is also discussed.

Direct or Coherent Detection?


The transmit signal strength is limited by laser heat dissipation and power consumption. Consequently, significant research and development efforts have been undertaken to improve the sensitivity of the receiver. Two technology alternatives exist: direct detection or coherent detection.

Predominantly suited to trans-oceanic submarine or terrestrial long-haul applications, the performance of coherent detection is undeniably superior to that of direct detection. In coherent detection, the opto-electric conversion process is linear. The phase information embedded in the optical signal is thus preserved, permitting straightforward electronic compensation of linear fiber effects, including chromatic dispersion (CD) and polarization mode dispersion (PMD). However, the hardware required to perform coherent detection is considerably more elaborate and comprises a local oscillator, a 90° hybrid module necessary to discriminate the phase quadratures of the received optical signal, and four balanced photodiodes to detect the signal from a single polarization, as seen in Fig. 2.
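The key distinction above can be illustrated with a minimal sketch: a photodiode alone responds to optical power and discards phase, whereas mixing the field with a local oscillator recovers the complex field linearly. The amplitude, phase and ideal (noiseless, phase-aligned) local oscillator below are illustrative assumptions, not a model of any particular receiver.

```python
import cmath

# Illustrative transmitted optical field carrying a phase-encoded symbol
# (the amplitude and phase values are arbitrary examples).
A, phi = 1.0, cmath.pi / 4
field = A * cmath.exp(1j * phi)

# Direct detection: a photodiode responds to optical power |E|^2,
# so the phase information is lost in the square-law conversion.
direct_power = abs(field) ** 2        # equals A**2; phi has disappeared

# Coherent detection: beating the signal against a local oscillator in a
# 90-degree hybrid yields in-phase (I) and quadrature (Q) components,
# recovering the full complex field -- amplitude and phase -- linearly.
lo = 1.0                              # ideal local oscillator, phase 0
mixed = field * lo
I, Q = mixed.real, mixed.imag
recovered_phi = cmath.phase(complex(I, Q))   # equals phi: phase preserved
```

Because the recovered field is linear in the transmitted one, linear impairments such as CD and PMD can subsequently be inverted in the digital domain.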

A direct detection technique, on the other hand, requires only a delay interferometer and two single photodiodes, translating into a device of much lower cost and complexity. The longer transmission distances (several thousand kilometers) enabled by a coherent approach are overkill for deployments with much shorter reaches of a few hundred kilometers.
The direct detection technique is thus seen as an attractive alternative for metro networks, enterprise and datacenters where economic viability plays a vital role.
Taking the minimalist approach, direct detection works without additional equipment in most cases. When the application range approaches the limit of direct detection technology, the simple addition of dispersion compensation and amplification may be used to augment signal detection and extend reach.
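A rough estimate shows where that limit lies and how dispersion compensation extends it. The numbers below are illustrative assumptions, not vendor specifications: standard single-mode fiber disperses roughly 17 ps/(nm·km) at 1550 nm, and the receiver dispersion tolerance and DCM value are example figures.

```python
# Back-of-the-envelope dispersion-limited reach for direct detection.
D = 17.0                 # ps/(nm*km), typical SMF chromatic dispersion
tolerance_ps_nm = 1000.0 # assumed receiver dispersion tolerance

# Without compensation, reach is capped where accumulated dispersion
# D * L reaches the receiver tolerance.
max_reach_km = tolerance_ps_nm / D          # roughly 59 km here

# A dispersion-compensating module (DCM) with negative dispersion cancels
# part of the accumulated dispersion, bringing a longer span back within
# the receiver's tolerance.
span_km = 120
dcm_ps_nm = -1700.0      # assumed DCM value
residual_ps_nm = D * span_km + dcm_ps_nm    # 340 ps/nm, within tolerance
```

This is why the simple addition of dispersion compensation (plus amplification to cover the added insertion loss) is often all that is needed to push a direct-detection link past its native reach.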

Complementing the existing 10 Gb/s system with 100 Gb/s upgrades


The phenomenal growth in new applications such as cloud services, telemedicine and video on demand has driven an unprecedented surge in data rates across the entire optical network. Core networks have deployed DWDM to meet this bandwidth explosion. DWDM is not deployed in metro networks exactly as in core networks, since over metro distances impairments tend to be much less severe. A DWDM-based architecture cost-effectively fulfills the bandwidth boost requirements with the flexibility to continuously scale according to evolving market needs. Thus the question for service providers is not whether to deploy DWDM metro networks, but how and when.
A typical metro network scenario portrays an existing infrastructure often relying on multiple 10 Gb/s or 40 Gb/s services multiplexed onto a single fiber pair. Complementing these existing services with higher data rate services requires careful network planning. The optimal network solution is constructed using the building blocks presented in this paper and the architecture depends on the particular geographic location, traffic and expected network growth.

For example, consider a legacy system with fully functioning DWDM 10G services on a 100 GHz frequency grid. The 10G services from the line card are aggregated with the aid of a 100 GHz DWDM multiplexer, as shown in Fig. 7 (upper part).
To maximize bandwidth utilization and avoid additional cost, one may opt to add one or more 100G services into the same fiber via the same DWDM multiplexer, deploying the new services on the unused wavelengths. However, due to the distance limitation of currently available 100G transceivers, extending the transmission reach to several hundred kilometers calls for the network architecture depicted in Fig. 7 (lower part).

Since the 100G services are more susceptible to dispersion, they require extra dispersion compensation and an optical power boost. Thus an extra 100 GHz DWDM multiplexer is first used to combine all the 100G services, followed by a combined dispersion compensation and amplification stage. The grouped 10G and 100G services can then be bundled together with the help of a 50/100 GHz interleaver. Such an architecture makes bridging distances greater than 100 km possible, given that the existing 10G channels already support that distance. Nonetheless, the exact distance depends on the amplifier gain, the amount of dispersion compensated and the transceiver performance. To further guarantee a decent transmission distance for this system, FEC can be enabled in the 100G switches (transponders) or transceivers.
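The amplification stage must cover not only the fiber loss but also the insertion losses of the added components. A minimal link-budget sketch, using assumed illustrative loss figures (typical SMF attenuation of about 0.2 dB/km at 1550 nm; the DCM, multiplexer and interleaver losses are example values, not datasheet numbers):

```python
# Back-of-the-envelope link budget for the combined dispersion-compensation
# and amplification stage of the 100G branch.
fiber_loss_db_per_km = 0.2   # typical SMF attenuation at 1550 nm
span_km = 150
dcm_loss_db = 4.0            # assumed DCM insertion loss
mux_loss_db = 3.5            # assumed 100 GHz DWDM multiplexer loss
interleaver_loss_db = 1.5    # assumed 50/100 GHz interleaver loss

total_loss_db = (fiber_loss_db_per_km * span_km
                 + dcm_loss_db + mux_loss_db + interleaver_loss_db)
# The amplifier gain must at least offset this loss so that the received
# power stays within the 100G transceiver's sensitivity range.
```

With these figures the stage would need to make up roughly 39 dB, which motivates placing a single shared amplifier behind the multiplexer rather than amplifying each 100G channel individually.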

Additionally, this architecture conveniently supports the ‘pay-as-you-grow’ model for service providers. If and when the bandwidth is exhausted, the existing legacy 10G channels may be seamlessly interchanged with 100G services. The same remaining components can even be reused to extend the data rate up to 2.4 Tb/s.
This scenario would require 24 differently colored 100 Gb/s DWDM CFP transceivers deployed together with the already existing 48-channel 100 GHz DWDM multiplexer and interleaver, as shown in Fig. 8. All the 100G services are first multiplexed together so that a single dispersion compensation and amplification stage suffices. Such a network architecture provides higher density and the flexibility to reuse existing infrastructure while remaining cost-effective.
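The 2.4 Tb/s figure quoted above follows directly from the channel count, as a quick sanity check:

```python
# Capacity check for the fully upgraded system of Fig. 8.
channels = 24        # 100 Gb/s DWDM CFP transceivers
rate_gbps = 100      # line rate per channel

total_tbps = channels * rate_gbps / 1000   # 2.4 Tb/s aggregate capacity
```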

Conclusion


The rapidly increasing traffic demand will soon force operators to face the age-old problem: lease more dark fiber or find a way to increase the capacity of existing fibers. With the growing maturity of 100G transceiver technology and rising interest in installing 100G products, the migration of current network architectures to 100G systems is inevitable and only a matter of time.