Each year, the bandwidth carried by telecommunications networks increases by approximately 30 percent. To keep pace, the interconnects on which these networks are built will need to become much smarter and more capable before long, BT’s Andrew Lord said during his keynote at Hot Interconnects earlier this week.
Lord, a senior optical research executive at the British telecommunications giant, describes fiber as “a massive 21st century project for the planet, in the same way that copper, railway, water, gas and electricity networks were massive projects 100 years ago.”
But as fiber becomes more and more ubiquitous, branching out to homes and businesses, it’s becoming a bigger headache for telecom operators as they grapple with growing demands for bandwidth year after year.
“Our capacity is growing at 30% per year and we’ve seen that over the last 20 years,” Lord said. “I see no reason why it should slow down, and I see many reasons why it could speed up.”
That hasn’t been such a big deal, at least for the past two decades, thanks to technologies such as wavelength division multiplexing (WDM), which have allowed optical engineers to pack more wavelengths – colors of light – into a single pair of fibers. The problem is that we are approaching the limits of this technology as it is deployed today, Lord explained.
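The arithmetic behind WDM is straightforward: total capacity is the number of wavelengths the usable band can hold times the rate each one carries. The figures below are illustrative round numbers for a fixed-grid C-band system, not BT's deployment data:

```python
# Back-of-the-envelope WDM capacity; all figures are illustrative.
C_BAND_GHZ = 4_800        # usable C-band spectrum, roughly 4.8 THz
CHANNEL_SPACING_GHZ = 50  # a common fixed-grid channel spacing

channels = C_BAND_GHZ // CHANNEL_SPACING_GHZ  # wavelengths per fiber pair
capacity_gbps = channels * 100                # assuming 100G per wavelength

print(channels, capacity_gbps)  # 96 channels, 9600 Gbps (~9.6 Tbps)
```

At 30 percent annual growth, a link saturates a figure like that in only a few years, which is the squeeze Lord is describing.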
Smarter interconnects as a stopgap
According to Lord, network operators can circumvent this limitation by taking advantage of the fact that interconnects have become much smarter in recent generations.
“A transceiver, or a laser, or a plug-in card, or a line card is capable of so much more than it was,” he said. “It’s capable of pushing its limits. You can run it at 400Gbps or 600 or 500 and you can flex it.”
In modern optical networks, much of the fiber’s capacity is set aside as headroom.
You can think of it as a kind of highway flanked by deep ravines on each side. Stay in your lane and you’ll reach your destination, but swerve even a little and you’ll plummet to your death. Obviously, that’s not ideal, as it leaves no room for error. So the highway has shoulders along its sides to provide some leeway, and the wider the shoulder, the greater the margin of safety.
The same kind of logic applies to how data is transmitted over fiber optics. Service providers make an informed estimate of the impact of environmental factors and equipment age on connection reliability over time and incorporate a generous safety margin.
This is actually one of the ways optics vendors can flatter their performance numbers in testing: by trimming the spectral margin to a degree you’ll never find in a production network.
By coupling smart, coherent optical transceivers with AI/ML algorithms, Lord says, service providers can begin to dynamically scale headroom to open up more bandwidth or compensate for growing errors.
“We can really pull back on that margin because we get real-time information about our fiber loss, fiber performance, dispersion, nonlinearity, or transceiver parameters,” he said.
For example, a brand-new network could be operated with a small margin to achieve the highest capacity, but as equipment ages or environmental factors change, the margin can be increased to account for additional variances and losses.
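The flexing Lord describes boils down to a rate-selection decision: given a measured link quality and a chosen safety margin, pick the fastest transceiver mode that still fits. This is a minimal sketch of that idea; the mode table and SNR thresholds are made-up illustrative values, not real transceiver specifications:

```python
# Hypothetical margin-aware rate selection. The required-SNR figures
# below are invented for illustration, not vendor specifications.
MODES_GBPS_TO_REQUIRED_SNR_DB = {
    200: 9.0,
    300: 13.0,
    400: 16.5,
    500: 19.5,
    600: 22.5,
}

def select_rate(measured_snr_db: float, margin_db: float) -> int:
    """Return the fastest mode whose required SNR fits under the margin."""
    usable = measured_snr_db - margin_db
    feasible = [rate for rate, snr in MODES_GBPS_TO_REQUIRED_SNR_DB.items()
                if snr <= usable]
    return max(feasible, default=0)

# A new, healthy link can run with a thin margin at a higher rate...
print(select_rate(measured_snr_db=21.0, margin_db=1.0))  # 500
# ...while an aged link gets a fatter margin and a safer, slower mode.
print(select_rate(measured_snr_db=21.0, margin_db=4.0))  # 400
```

The AI/ML piece Lord alludes to would sit upstream of a decision like this, supplying the real-time SNR estimate and deciding how thin the margin can safely go.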
Going back to the freeway analogy, it would be a bit like opening the shoulder to rush-hour traffic, as long as weather conditions are clear and accidents are unlikely.
“How much capacity potential is unlocked? I think it’s vast,” he said. “I think many, many links in our networks have several dBs of spectral headroom that translate into doubling and quadrupling capability.”
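Shannon's capacity formula gives a feel for why a few dB matter, though the SNR figures here are illustrative rather than measured link data. Reclaiming 6 dB of margin lifts the theoretical spectral efficiency substantially; in practice operators jump between discrete modulation formats, such as QPSK at 2 bits per symbol versus 16QAM at 4, which is where the doubling comes from:

```python
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon limit per unit bandwidth: C/B = log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

held_back = spectral_efficiency(10.0)  # link run with margin in reserve
reclaimed = spectral_efficiency(16.0)  # same link, 6 dB of margin reclaimed

print(round(held_back, 2), round(reclaimed, 2))  # 3.46 vs 5.35 bit/s/Hz
```

Real coherent systems operate below the Shannon limit, but the direction of the trade is the same: every dB of margin handed back to the traffic buys spectral efficiency.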
However, all of this monitoring requires a full ecosystem of smart interconnect technologies that can power AI models and provide operators with actionable insights.
Data is what you make of it
Until recently, Lord says, the problem wasn’t whether you could glean information about your transceivers, but rather what you were doing with all that data.
“In the past, we had optical networks that generated large amounts of data about their performance, which we just threw away simply because there was nowhere to store it,” he said. “That’s changing because of AI.”
However, Lord doesn’t expect service providers to get to grips with AI-controlled networks anytime soon. “It would be a very brave operator to hand over all the reliability of network governance and management to an AI machine,” he said.
Instead, he sees an opportunity to use digital twins to simulate the network in real time and experiment with new configurations in a safe way before implementing them in production.
This is not an easy task, he notes. “A lot of the issues you’re having are with imperfect bends and imperfect installation, so there’s a lot of things we need to worry about that aren’t at the application layer,” and these need to be taken into account when constructing a digital twin.
However, once in place, this data could be used in conjunction with pattern-matching algorithms to glean insights into more than network performance and reliability.
While shrinking spectrum headroom may buy network operators time, at some point capacity demands will again reach a tipping point, Lord explained.
A relatively simple option would be to increase the number of fiber pairs used on each span.
“Maybe I just put in a lot of fiber. Honestly, that’s probably a very viable solution. Fiber is cost effective. You can install massive ducts filled with 1,000 fibers or more,” Lord said.
The caveat, of course, is that more fiber requires more transceivers, bigger and more power-hungry equipment, and people to manage it all.
The capacity you can cram into a single fiber is highly dependent on how far the data needs to travel. “If you want to travel a short distance, across your entire fiber spectrum, you might be able to get half a petabit. If you want to travel a long distance, it will be much less than that,” he explained, adding that at a 30 percent year-over-year increase in traffic, those numbers will start to look really small very quickly.
An alternative is to cram more spectrum bands into the fiber itself. This, says Lord, will greatly increase the capacity of a single pair of fibers at the expense of greater complexity.
“What does this mean for interconnects? Well, all of a sudden you have fiber coming in with five times more wavelengths than before,” he said. “It’s something the interconnect community needs to understand.”
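Lord's "five times more wavelengths" is roughly what you get from opening the S and L bands alongside today's C band. The band edges below are the standard ITU wavelength ranges; the comparison is approximate, since usable capacity per nanometer varies between bands:

```python
# Standard ITU optical band wavelength ranges, in nanometers.
BANDS_NM = {
    "O": (1260, 1360),
    "E": (1360, 1460),
    "S": (1460, 1530),
    "C": (1530, 1565),  # today's workhorse band
    "L": (1565, 1625),
}

def width_nm(band: str) -> int:
    low, high = BANDS_NM[band]
    return high - low

c_only = width_nm("C")                                  # 35 nm
multiband = sum(width_nm(b) for b in ("S", "C", "L"))   # 165 nm

print(round(multiband / c_only, 1))  # ~4.7x the spectrum of C band alone
```

Every extra band, though, needs its own amplifiers and transceivers, which is the complexity cost Lord flags for the interconnect community.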
PON on steroids
While this may solve some of the complexity of the core network, Lord says there are still opportunities for innovation when it comes to getting fiber to the growing number of homes and businesses.
One promising technology is transceivers like Infinera’s XR Optics, which passively split a 400 Gbps optical signal into multiple smaller optical data streams.
“It’s kind of like a super PON [passive optical network] on steroids,” Lord said in reference to a common optical technology used in consumer fiber deployments.
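The point-to-multipoint idea can be sketched as a simple allocation problem: a hub signal is carved into fixed-size digital subcarriers, and each endpoint tunes to just the subset it needs. The 25 Gbps subcarrier granularity and the allocation logic here are an illustrative sketch, not Infinera's actual design:

```python
# Hypothetical sketch of point-to-multipoint subcarrier allocation.
# Subcarrier size and the demand figures are illustrative only.
SUBCARRIER_GBPS = 25

def allocate(hub_gbps: int, demands_gbps: list) -> list:
    """Subcarriers per endpoint; raise if demand exceeds hub capacity."""
    counts = [-(-demand // SUBCARRIER_GBPS)  # ceiling division
              for demand in demands_gbps]
    if sum(counts) * SUBCARRIER_GBPS > hub_gbps:
        raise ValueError("aggregate demand exceeds hub capacity")
    return counts

# A 400G hub serving endpoints that need 100, 50 and 25 Gbps.
print(allocate(400, [100, 50, 25]))  # [4, 2, 1] subcarriers each
```

The appeal is that the split happens passively in the optical domain, so no powered aggregation switch sits between the hub and the endpoints.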
However, the technology is not without its challenges, he notes. One of the most significant is that it blurs the separation between the physical layer and the IP layer.
According to Lord, this will require appliances to start handling a lot of the physical layer processing to ensure traffic from an endpoint is routed accordingly. The benefit, however, is a substantial reduction in power consumption.
The more you can integrate and co-package these things, putting your optics right next to your electronics, the more energy you’ll save.
Ultimately, Lord paints a picture in which innovations in interconnect technology will be critical to the long-term success of the telecommunications industry. ®