NZNOG 2015: Virtual routers have a niche, but it IS a niche

Tim Nagy from Juniper presented on the virtual router model: taking commodity hardware (PCs) and running a virtualized router image on it, to provide the same kind of functionality a real dedicated hardware router would, but hopefully in a more scalable manner for some people.

Tim made a good case that this is a viable use of the technology, but you have to be very conscious of the niches it fits into.

If you have more than 100Gb/s of capacity to forward, the hardware to do this in a PC case becomes quite expensive, as do the licences for the virtual router products.
Most of these products can only do ‘wire speed’ forwarding if you exclude complicated requirements: multicast, VLANs, DPI and link aggregation would all make them run considerably under line speed.
You need to think about Total Cost of Ownership (TCO) all the time in the ISP business, and there are certainly cases where the TCO of a virtual router is lower, but this is about your cost of operations in the wide, not the hardware platform you choose to run on.
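To make the ‘TCO in the wide’ point concrete, here is a minimal sketch of what goes into such a comparison. Every dollar figure below is invented for illustration and not from the talk; the point is simply that licences, power and operations over the life of the box matter as much as the purchase price of the platform.

```python
# Entirely hypothetical TCO comparison: the shape of the sum is the point,
# not the specific dollar figures, which are made up for illustration.

def five_year_tco(capex, annual_licence, annual_power, annual_ops):
    # Up-front purchase, plus five years of recurring costs.
    return capex + 5 * (annual_licence + annual_power + annual_ops)

hardware_router = five_year_tco(capex=150_000, annual_licence=0,      annual_power=4_000, annual_ops=20_000)
virtual_router  = five_year_tco(capex=15_000,  annual_licence=25_000, annual_power=2_000, annual_ops=30_000)

print(f"dedicated hardware: ${hardware_router:,} over 5 years")
print(f"virtual router:     ${virtual_router:,} over 5 years")
```

With these particular made-up numbers the dedicated box actually comes out cheaper, because the recurring licence and operations costs dominate; change the assumptions and the answer flips, which is exactly why the comparison has to be made over the whole operation rather than the hardware alone.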
Tim sees a good fit for people with a scenario like four 10G connections into two 100G upstreams, or who want to run a lab configuration and test things.

He also observed that improvements in CPU clock speed have basically levelled out since 2004: we have significant improvements in network speeds, but we now have to scale across multiple cores to get more CPU ‘grunt’ because clock speed hasn’t continued to improve. This is a corollary of Moore’s law: we are getting denser on-chip technology, but not necessarily faster clocks. The improvements in coding and technology which were making exponential improvements in forwarding data structures have also levelled out: we now only expect incremental improvements in the data model. This really means we’re looking at bigger memory footprints and more CPUs in clustered, distributed solutions to the problem, not faster single-CPU boxes.
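The scaling point is easy to see with some back-of-the-envelope packet-rate arithmetic. The sketch below works out the worst-case packet rate for the 4 x 10G scenario; the per-core forwarding figure is my own assumption (roughly the order commonly quoted for software forwarding), not a number from the talk.

```python
# Back-of-the-envelope packet-rate math for the "4 x 10G into 2 x 100G" scenario.
# The per-core forwarding rate is an illustrative assumption, not from the talk.

PREAMBLE_AND_IFG = 8 + 12  # bytes of preamble + inter-frame gap per Ethernet frame

def line_rate_pps(link_gbps: float, frame_bytes: int = 64) -> float:
    """Worst-case packets per second on a link at a given frame size."""
    bits_on_wire = (frame_bytes + PREAMBLE_AND_IFG) * 8
    return link_gbps * 1e9 / bits_on_wire

aggregate_gbps = 4 * 10                     # four 10G customer-facing ports
pps_needed = line_rate_pps(aggregate_gbps)  # ~59.5 Mpps at 64-byte frames

# Hypothetical software forwarding budget of ~10 Mpps per core:
per_core_pps = 10e6
cores_needed = pps_needed / per_core_pps

print(f"{pps_needed/1e6:.1f} Mpps worst case -> ~{cores_needed:.0f} cores "
      f"at {per_core_pps/1e6:.0f} Mpps/core")
```

Since single-core clock speed is no longer climbing, the only way to buy more of that packet budget is more cores and more memory, which is exactly the trend Tim described.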

A rather scary problem is ‘Average Revenue Per User’, or ARPU. ARPU has been flat for some time now, if not actually decreasing, but bandwidth expectations have continued to grow and show no sign of flattening out. It’s a very odd digital economy when you can’t get more revenue per customer, but you have to give them ‘more’ of the basic service! Welcome to the strange economics of the Internet.
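The squeeze is easy to illustrate with a trivial calculation: hold ARPU constant, grow per-customer bandwidth, and watch the revenue earned per delivered megabit fall. The growth rate and dollar figures below are assumptions of mine, not figures from the talk.

```python
# Illustrative only: flat ARPU plus growing bandwidth erodes revenue per Mbps.
# All figures are hypothetical.

arpu_per_month = 60.0    # assumed flat ARPU, $/customer/month
avg_mbps = 5.0           # assumed average delivered bandwidth today
annual_bw_growth = 0.30  # assume demand grows ~30% a year

for year in range(6):
    revenue_per_mbps = arpu_per_month / avg_mbps
    print(f"year {year}: {avg_mbps:5.1f} Mbps/customer -> "
          f"${revenue_per_mbps:5.2f} per Mbps delivered")
    avg_mbps *= 1 + annual_bw_growth
```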

Tim ended on a note which might be useful for anyone with a large field deployment of customer premises equipment (CPE) running much the same systems. This model can be used to manage an entire customer base via ‘dumb’ edge connections and a ‘smart’ virtualized core. There is a fashion in these things, swinging between smart edge and dumb edge, but one thing which could be better in this model is the problem of DDoS: normally, when you try to mitigate a DDoS attack, you are on the wrong side of the narrow link being congested: your service line. Moving the smart rules into the core would move the ‘defense’ against DDoS into the core and actually help protect the customer link.
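A toy model makes the ‘wrong side of the narrow link’ point clear: if the attack is only dropped at the CPE it has already filled the access link, whereas dropping it in the well-provisioned core leaves the link free for legitimate traffic. The link sizes, attack volume and proportional-drop assumption below are all hypothetical simplifications of mine.

```python
# Toy model of why DDoS filtering belongs upstream of the bottleneck.
# All figures are hypothetical, chosen only to illustrate the argument.

customer_link_mbps = 100.0    # the narrow access link we want to protect
legit_traffic_mbps = 30.0     # the customer's real traffic
attack_traffic_mbps = 5000.0  # volumetric attack aimed at that customer

def surviving_legit_traffic(filter_in_core: bool) -> float:
    """Rough share of legitimate traffic that still fits down the access link."""
    if filter_in_core:
        # Attack dropped in the (well-provisioned) core, before the bottleneck.
        offered = legit_traffic_mbps
    else:
        # Filtering at the CPE is too late: attack + legit already share the link.
        offered = legit_traffic_mbps + attack_traffic_mbps
    # Assume the congested link drops traffic proportionally (a simplification).
    delivered_fraction = min(1.0, customer_link_mbps / offered)
    return legit_traffic_mbps * delivered_fraction

print(f"filter at CPE : ~{surviving_legit_traffic(False):.1f} Mbps of legit traffic survives")
print(f"filter in core: ~{surviving_legit_traffic(True):.1f} Mbps of legit traffic survives")
```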

Martin Levy (Cloudflare, ex Hurricane Electric) asked a very pointed question about the long-term effects on router vendors’ share prices if ‘big iron’ is going out of the picture and commodity hardware is the answer. Tim said (and it’s only his personal opinion, so don’t sell Juniper stock on it!) that it’s not a path of free choice: the changes in technology informing this space are outside Juniper’s control, and it has to reflect the realities of what customers want and what technology is capable of offering, just like any other vendor. If it’s true that we’re moving from dedicated hardware boxes to a commodity hardware plus licenced software solution, that’s going to be an enormous revolution in the industry.