What's so special about 512K?


Around 1990 the Internet Engineering Task Force (IETF) was alerted to a looming problem: long before the Internet was a commercial reality, it looked like we would hit two really solid walls if we wanted the Internet to scale into a global communications system.

The first problem was that the Internet Protocol's 32-bit binary address was just too small. It looked likely that we were going to run out of addresses in the mid '90s.

The second problem was that the Internet's routing system was growing at an uncontrolled rate. Instead of using re-purposed low-cost IBM XT machinery as routers, as had been the case until then, we would soon need phenomenally expensive high-end supercomputers, because the routing system was growing faster than Moore's Law and we were running out of cheap routing grunt.

We "solved" the address crunch by accident. The mainstream effort was to design a successor protocol to IPv4 with really, really big address fields. We did that by 1995, and it's called IPv6. However, as well as IPv6, out of that effort came a stop-gap solution in the form of low-impact, semi-transparent address sharing: Network Address Translation (NAT). These days IPv6 is still "some time away": we still see only some 3% of the Internet's user base able to use IPv6, and the take-up is still quite patchy. In the meantime, however, NATs were adopted at a surprising rate, and today NATs drive the Internet. While hard numbers are hard to come by, we suspect that almost the entire client population of the Internet sits behind at least one level of NAT, and many sit behind two or more. IPv6 is now 20 years old and still not here, and we have run out of IPv4 addresses. This was the envisaged "Internet disaster" scenario, but NATs have kept the Internet working despite it.
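To illustrate the basic mechanism, here is a minimal sketch of the binding table a NAT maintains. The addresses and ports are invented for illustration; a real NAT does this rewriting in the packet forwarding path, with timers, protocol tracking, and much else besides.

```python
# Minimal sketch of a NAT binding table: private (addr, port) pairs are
# rewritten to a single shared public address with a unique public port.
# All addresses and ports here are invented for illustration.

PUBLIC_ADDR = "192.0.2.1"     # the one public IPv4 address the NAT owns
bindings = {}                 # (private_addr, private_port) -> public_port
reverse = {}                  # public_port -> (private_addr, private_port)
next_port = 40000

def translate_outbound(private_addr, private_port):
    """Rewrite an outbound flow to the shared public address."""
    global next_port
    key = (private_addr, private_port)
    if key not in bindings:
        bindings[key] = next_port
        reverse[next_port] = key
        next_port += 1
    return PUBLIC_ADDR, bindings[key]

def translate_inbound(public_port):
    """Map return traffic back to the internal host, if a binding exists."""
    return reverse.get(public_port)

print(translate_outbound("10.0.0.5", 51515))   # ('192.0.2.1', 40000)
print(translate_inbound(40000))                # ('10.0.0.5', 51515)
```

This is how one public address can front an arbitrarily large private network, which is exactly why NATs defused the address crunch.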

What about that routing problem? In 1993 we introduced another stop-gap measure, similar to NATs: the so-called "classless" routing protocols. This held back the onslaught of the routing explosion for a couple of years. Then we changed the way addresses were allocated. The introduction of the regional Internet registry (RIR) framework achieved one major outcome which is little talked about, but perhaps more effective than any other measure we took. The RIR system turned addresses from completely free resources that were allocated on a cemetery plot basis (once and forever) into a leased resource, where continued rights to use an address were associated with an annual payment. I would argue that this monetization of addresses pushed the evolving Internet industry from address profligacy to address conservatism. The changes were pervasive. Home consumers were given one IP address, and the customer's CPE included a built-in NAT to allow the home network to grow to arbitrary size without using more of those expensive public addresses. Interestingly enough, the monetization level per IP address was not exactly high in financial terms, but the results were significant. We shifted over to a model of address allocation that supported route aggregation by network providers, and over the ensuing years the growth trajectory of the routing system was significantly lower, in relative terms, than that of the early 1990s.
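As a concrete example of the aggregation that classless (CIDR) addressing makes possible, the sketch below uses Python's standard ipaddress module to collapse a provider's contiguous customer prefixes into a single covering route. The prefixes themselves are invented for illustration:

```python
import ipaddress

# Four contiguous /26 customer prefixes drawn from a provider block.
# The prefixes are invented for illustration.
customer_prefixes = [
    ipaddress.ip_network("198.51.100.0/26"),
    ipaddress.ip_network("198.51.100.64/26"),
    ipaddress.ip_network("198.51.100.128/26"),
    ipaddress.ip_network("198.51.100.192/26"),
]

# Under classless (CIDR) routing these collapse into one announcement,
# so the global table carries 1 entry instead of 4.
aggregates = list(ipaddress.collapse_addresses(customer_prefixes))
print(aggregates)   # [IPv4Network('198.51.100.0/24')]
```

When customers take their addresses from their provider's block, the provider announces just the covering aggregate into BGP, and the global routing table never sees the individual customer prefixes.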

While the Internet's growth has continued, the routing and addressing systems have not grown at the same rate. In the case of routing, this is due largely to the widespread deployment of NATs and provider-based addressing, and the ubiquitous use of classless routing protocols.

This story is visible in a plot of the past 20 years of routing table growth. If you look closely at this time series you can see the Internet boom and bust of the late '90s and the onslaught of the mobile Internet in the past 10 years of growth.

[Figure: BGP Routing Table Size, 1988 – 2014]

However, today's question is: is routing growing faster than Moore's Law?

If the answer is “yes” then routing gets more expensive. If “no” then the unit costs of routing will fall.

My presentation at NANOG 60 at the start of this year looked at the routing system and tried to forecast the next five years of routing table growth. Nothing in those projection numbers gives any cause for concern. The growth rate of the IPv4 Internet is linear, at an approximate rate of 40,000 – 50,000 additional entries per year. This is a growth rate that is well below Moore's Law. Yes, we cross threshold points from time to time, but we cross them at a leisurely pace. Recently, the IPv4 Internet passed 512K routing entries, and I've seen a few comments in the press about how the Internet collapsed, fell down, had a bad hair day, or similar variants, but they seem to be overblown. In fact this 512K size is not a "real" threshold.
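To put rough numbers on that leisurely pace, here is a back-of-the-envelope projection, assuming the linear trend above simply continues (the starting size and growth rate are the figures quoted in the previous paragraph):

```python
# Back-of-the-envelope projection of IPv4 routing table growth,
# assuming the linear trend described above continues.
current_size = 512_000          # approximate table size in 2014
growth_per_year = 45_000        # midpoint of the 40K-50K entries/year range

for threshold in (768_000, 1_024_000, 2_048_000):
    years = (threshold - current_size) / growth_per_year
    print(f"{threshold:>9,} entries in about {years:4.1f} years")
```

On these assumptions the next power-of-two threshold at 1M entries is more than a decade away, which is ample notice for anyone planning equipment lifecycles.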

Let me explain…

The conventional way to get a router operating at a reasonably high packet throughput rate is to use banks of very high speed TCAM ("ternary content addressable memory") on the line cards. The destination address field of the incoming packet is presented to the TCAM unit, and a very short time later out comes what is, in effect, an identifier of the outbound interface queue to which the router needs to send the packet. This is then used as input to the switching engine, which takes the packet and switches it over to the outbound interface queue.
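In software terms, what the TCAM performs in a single parallel lookup is a longest-prefix match. The sketch below is a functional model only: a real TCAM matches all entries simultaneously in hardware, while here we emulate the result by scanning. The prefixes and interface names are invented for illustration.

```python
import ipaddress

# A toy FIB: prefix -> outbound interface queue. In a TCAM all entries
# are matched in parallel and the longest matching prefix wins; here we
# emulate that result in software.
fib = {
    ipaddress.ip_network("0.0.0.0/0"):        "if0",   # default route
    ipaddress.ip_network("203.0.113.0/24"):   "if1",
    ipaddress.ip_network("203.0.113.128/25"): "if2",
}

def lookup(dst):
    """Longest-prefix match: the most specific covering prefix wins."""
    addr = ipaddress.ip_address(dst)
    best = max((p for p in fib if addr in p), key=lambda p: p.prefixlen)
    return fib[best]

print(lookup("203.0.113.200"))   # if2  (matched by the /25)
print(lookup("203.0.113.5"))     # if1  (matched by the /24)
print(lookup("192.0.2.1"))       # if0  (default route)
```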

TCAM is expensive, and high speed TCAM is (evidently) extremely expensive. Router vendors don't over-provision more than "necessary", as this makes their products more expensive or reduces their margins. They tend to build equipment with headroom for around five to ten years of anticipated growth.

Many vendors look at conservative projections of routing table size and use these as part of the design parameters for their equipment. It should not be surprising, then, to see currently shipping routing equipment fitted with linecards carrying 2M TCAM slots, some four times the current size of the Internet's IPv4 routing table.

Of course it's not just IPv4: there's IPv6 as well. While the growth factors are higher in relative terms, the absolute numbers are still far smaller, so the projected numbers for the IPv6 routing table size are not as large as the IPv4 numbers. However, IPv6 addresses are 128 bits long, compared to IPv4's 32 bits, so conventionally an IPv6 prefix in TCAM requires the equivalent of 4 IPv4 TCAM slots per routing entry.
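Putting those two observations together gives a rough sense of the headroom. This is a sketch calculation using the 2M-slot linecard and 4-slots-per-IPv6-entry figures from above; the IPv6 table size used here is an illustrative assumption, not a measured figure:

```python
# Rough TCAM budget arithmetic, using the figures quoted above:
# a 2M-slot linecard, 1 slot per IPv4 route, 4 slots per IPv6 route.
TCAM_SLOTS = 2_000_000
ipv4_routes = 512_000     # approximate IPv4 table size
ipv6_routes = 20_000      # illustrative IPv6 table size (assumption)

slots_used = ipv4_routes * 1 + ipv6_routes * 4
print(f"slots used: {slots_used:,} of {TCAM_SLOTS:,} "
      f"({slots_used / TCAM_SLOTS:.0%})")
# -> slots used: 592,000 of 2,000,000 (30%)
```

Even with IPv6 counted at four slots per entry, today's combined table fills well under half of a current-generation linecard.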

Yes, you could expand TCAM sizes to, say, 20M or more, but this would add to the router's power and heat budget as well as its capital cost, and I suspect it would make its set of line cards look a lot like power-hungry, overheating, expensive bloatware. But if you wait five to ten years, Moore's Law will come to your assistance and you will be able to build 20M TCAMs for the power and price of, say, 2M TCAMs today. Most vendors provision the TCAMs in their linecards to ride around five to ten years ahead of the curve. If you have ten-year-old equipment you may well need a TCAM transplant to cope with today's FIB size.
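The arithmetic behind that wait is straightforward, assuming capacity per unit cost doubles every 18 to 24 months in the usual Moore's Law formulation:

```python
import math

# Years needed for a 10x improvement (2M -> 20M TCAM slots at the same
# power and price), assuming capacity per unit cost doubles every
# 18-24 months -- the usual Moore's Law formulation.
target_factor = 20 / 2          # 10x
for doubling_years in (1.5, 2.0):
    years = math.log2(target_factor) * doubling_years
    print(f"doubling every {doubling_years} years -> "
          f"{years:.1f} years for 10x")
# -> about 5.0 and 6.6 years respectively
```

That is exactly the five-to-ten-year horizon the vendors design to.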

Yes, you could use FIB compression to reduce the TCAM demands, but so far the cost of TCAM has been low enough that the added computational overhead of performing FIB compression is basically a dud proposition. But I suppose it's reassuring, to some extent, to believe that if things get really, really tough then FIB compression could be used!
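For a sense of what FIB compression involves, here is a minimal sketch of its simplest form: a more-specific prefix can be dropped from the FIB whenever it shares a next hop with its closest covering prefix, since the covering entry already forwards those packets correctly. The prefixes and next hops are invented, and production schemes are considerably more thorough than this:

```python
import ipaddress

# Minimal FIB compression sketch: a more-specific entry is redundant
# when its closest covering prefix already has the same next hop.
# Prefixes and next hops are invented for illustration.
fib = {
    ipaddress.ip_network("203.0.113.0/24"):   "nh1",
    ipaddress.ip_network("203.0.113.0/25"):   "nh1",   # redundant: /24 covers it
    ipaddress.ip_network("203.0.113.128/25"): "nh2",   # kept: next hop differs
}

def compress(fib):
    compressed = {}
    for prefix, nexthop in fib.items():
        # Find the closest (most specific) covering prefix, if any.
        covers = [p for p in fib if p != prefix and prefix.subnet_of(p)]
        parent = max(covers, key=lambda p: p.prefixlen, default=None)
        if parent is None or fib[parent] != nexthop:
            compressed[prefix] = nexthop
    return compressed

for prefix, nh in compress(fib).items():
    print(prefix, "->", nh)
# 203.0.113.0/24 -> nh1
# 203.0.113.128/25 -> nh2
```

The saving depends entirely on how often more-specifics share a next hop with their covering aggregate, which is why the payoff has so far not justified the extra computation.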

What does all this mean? Well, it means that older routing equipment in the Internet will, in time, inevitably fall over.

Nothing in the Internet's routing table gives me cause for concern, and absolutely nothing I see is telling me to hit the alarm button; there really is not much to see here. And while that is not exactly a newsworthy story, for a few tens of thousands of network operators out there, and for the vendors of equipment who service those network operators, the fact that BGP growth is steady and highly predictable is actually a really, really good story!