Nordunet 2012 – Conference Report

Geoff Huston

I had the honour to be invited to present at the 2012 Nordunet Conference in Oslo in September. This post contains my personal impressions from the conference.

In many ways the national research network agenda has not changed all that much over the years. There is still the same degree of post-Internet uncertainty and a certain amount of "what are we doing?" being asked. The original rationale, namely providing services to the national academic and research enterprise of a type and on a scale that was not available from conventional industry telecommunication providers, is not exactly a clearly sustainable proposition these days. In terms of commodity IP services the commercial supplier market, with its range of fixed and mobile services and its scale of operation, is now easily capable of meeting the research sector's conventional service needs. So where now is the unique role for the NRENs?

The presentation that, for me, was the most telling in this area was by the CEO of NorduNet, who described his view of the coming 10 years, in which he confidently expected the regional research backbone network to increasingly operate with conventional industry-based inputs and to operate much like other service providers, albeit with a highly specialised clientele. The expectation was that over time this would become a conventional business operation, with customers paying for services.

The pursuit of network speed records continues, and with it the effort to deploy ever faster circuits. A 100Gbps production circuit between Oslo and Trondheim was reported to the Nordunet Conference as the first in the Nordic countries, and possibly the first such high-speed production circuit in Europe.

One of the prominent topics at NorduNet that I picked up on was Big Science, in the form of large-scale instruments and large-scale data sets, particularly in the physics and astrophysics areas, along with some of the biogenetic work.

These days we’ve shifted attention from the LHC to the SKA project, which appears to involve data generators, in the form of radio telescopes, located in Western Australia and South Africa, with data processing in Europe and presumably North America. The SKA appears to be indicative of a class of data networking that is unique to the research networks, namely time-critical, loss-tolerant data streams. The general form of this model is a number of constant-rate data generators, where there is a very low signal-to-noise ratio within the data. The objective is to bring these data streams to a single processing point to allow for combined processing that will pull a single signal from the accumulated data stream. The processing is performed in real time, so the purpose of the network is to take the data stream from the instrument and pass it to the collector. The data feeds are constant bit rate at the level of gigabits per second, and the network needs to preserve a constant delay for the data path. This has led to a significant body of work in light path networking, where the role of the network is to maintain a dedicated circuit with constant capacity and constant delay, implemented as a provisioned wavelength on an optical bearer system. There is no concept of sharing or multiplexing, and the data protocol used, UDP, serves as a simple and widely implemented data framing protocol.
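To give a flavour of this model, here is a minimal sketch, in Python, of a constant-rate UDP sender of the kind described above. The collector address, packet size and bit rate are placeholders of my own (and the rate is scaled well below the gigabit regime of the real instruments): the stream is paced to a fixed bit rate, each datagram carries a sequence number so the collector can detect loss, and nothing is ever retransmitted, because the application tolerates loss but not delay variation.

    import socket
    import struct
    import time

    COLLECTOR = ("192.0.2.1", 9000)       # hypothetical collector address
    PACKET_SIZE = 1400                    # bytes per datagram
    BIT_RATE = 10_000_000                 # 10Mbps here; real feeds run at Gbps
    INTERVAL = PACKET_SIZE * 8 / BIT_RATE # seconds between datagrams

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(PACKET_SIZE - 8)      # instrument samples would go here
    seq = 0
    next_send = time.monotonic()

    while True:
        # A 64-bit sequence number lets the collector detect (and ignore) loss
        sock.sendto(struct.pack("!Q", seq) + payload, COLLECTOR)
        seq += 1
        # Pace transmissions to hold a constant bit rate; never retransmit
        next_send += INTERVAL
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)

In the light path model the network beneath this sender is a dedicated wavelength, so the constant sending rate translates directly into constant delay at the collector, with no queueing from cross traffic.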

The next theme is the fascination with the latest in networking, and at this point in time the new thing is SDN and OpenFlow. The best description I have heard of these technologies comes from the US GENI project, which uses them to create experimental sandboxes to expedite research into networking. In this light these technologies were not viewed as an end in and of themselves, but as a way to research various networking scenarios. However, as the GENI program gathered momentum and partners it appears that, to some extent, the experimental platform is changing into the production platform. A presentation from Internet2 described how their network is now an OpenFlow SDN platform. There are all the hallmarks of asserting that this architecture, if this form of meta-toolset that is used to create specific networking topologies can be termed a network architecture in its own right, is now the new architecture of the network itself, as distinct from a structure for experimentation about networks. Some folk see in this model the ability to deliver high bandwidth virtual circuits on demand, and this is fine, but one wonders if this is a robust general-purpose architecture, or whether the set of interdependencies is too fragile. We will see.
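For a sense of what this "meta-toolset" looks like in practice, here is a minimal sketch of an OpenFlow controller application using the Ryu framework (one of several such frameworks; the choice is mine, not anything shown at the conference). It installs a table-miss rule on each switch that connects, so that any traffic the switch cannot match is punted to the controller, which is where a topology-building application would make its forwarding decisions.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Match everything at the lowest priority; send unmatched
            # packets to the controller, which decides how the virtual
            # topology should forward them
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

The point of the sketch is the inversion it illustrates: the forwarding behaviour of the network lives in controller software rather than in the switches, which is precisely what makes it attractive as an experimental sandbox, and precisely what raises the question of fragility when it becomes the production platform.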

This was a relatively conventional conference about topics of interest to the research community in the area of networking. The conversation within this community has slowly evolved from one about how to provide cost-effective networking services to various research programs and applications, to one about the application of networking technologies to the research endeavour itself. There were a number of presentations on high availability cloud services, from the perspectives of service delivery and security, and on collaborative technologies and video infrastructure, all of which are standard fare for the NREN communities these days.

Given my own interests in the topic at present, I was pleased to see a couple of IPv6 presentations – one from myself on dual stack quality, and one from Ole Troan showcasing a new IPv6 resource page (http://6lab.cisco.com). Interestingly, there is a "Users" report that compares the APNIC stats with the Google stats, and the APNIC results are looking VERY good!

There were a couple of DNS presentations, both related to DNSSEC. One was a lightning talk from myself on measuring DNSSEC, and one from Roland van Rijswijk of SURFnet on DNSSEC and the problem of expanding DNS messages and UDP behaviours.
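The core of the message expansion problem is easy to demonstrate: once DNSSEC signatures are attached, responses routinely exceed the classic 512-byte DNS-over-UDP limit, and a client needs EDNS0 to advertise a larger buffer or the server will set the truncation bit and force a retry over TCP. A quick sketch using the dnspython library (the resolver address and zone name are placeholders of mine, not from the talk) makes the size difference visible:

    import dns.flags
    import dns.message
    import dns.query

    RESOLVER = "8.8.8.8"   # placeholder; any recursive resolver will do
    ZONE = "ietf.org"      # placeholder DNSSEC-signed zone

    # Plain UDP query: no EDNS0, so the answer must fit in 512 bytes
    plain = dns.message.make_query(ZONE, "A")
    resp = dns.query.udp(plain, RESOLVER, timeout=5)
    truncated = bool(resp.flags & dns.flags.TC)
    print("plain:  %d bytes, truncated=%s" % (len(resp.to_wire()), truncated))

    # EDNS0 query asking for DNSSEC records and advertising a 4096-byte
    # buffer: the signatures typically push the response well past 512 bytes
    signed = dns.message.make_query(ZONE, "A", use_edns=0, payload=4096,
                                    want_dnssec=True)
    resp = dns.query.udp(signed, RESOLVER, timeout=5)
    print("dnssec: %d bytes" % len(resp.to_wire()))

The operational wrinkle, of course, is that large UDP responses invite IP fragmentation, and middleboxes that mishandle fragments or EDNS0 are a large part of the behavioural problem described in the talk.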

The full Nordunet program is at: https://events.nordu.net/display/ndn2012web/Programme.