Wednesday, January 19, 2011

Thoughts on RON: Resilient Overlay Networks

Authors: David G. Andersen, Hari Balakrishnan, M. Frans Kaashoek, Robert Morris

Venue: Proc. 18th ACM SOSP, Banff, Canada, October 2001


The paper describes the design and implementation of an overlay network that applications can use to route around the underlying default IP routing. A RON is a set of nodes that cooperate to select the best overlay path for an application's traffic, given that application's requirements. An application links against the RON library and uses its functions to send and receive traffic. Each RON node monitors the quality of its connections to every other node in the network and uses that information to route traffic over the best available path.
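The core idea, picking between the direct Internet path and a one-hop detour through another RON node based on measured path quality, can be sketched roughly as follows. This is an illustrative toy, not the paper's actual implementation; the function name, the latency table, and the use of latency as the sole metric (RON also considers loss rate and throughput) are all my assumptions:

```python
# Hypothetical sketch of RON-style path selection. Names and the
# latency-only metric are illustrative, not from the paper.

def best_path(src, dst, nodes, latency):
    """Pick the lowest-latency route: either the direct path or a
    one-hop detour through another RON node.

    `latency` maps (a, b) node pairs to a measured latency, built
    from each node's periodic probes of its peers.
    """
    best = (latency[(src, dst)], [src, dst])  # direct Internet path
    for hop in nodes:
        if hop in (src, dst):
            continue
        cost = latency[(src, hop)] + latency[(hop, dst)]
        if cost < best[0]:
            best = (cost, [src, hop, dst])
    return best

# Example: the direct A->C path is degraded, but A->B->C is faster.
latency = {("A", "C"): 300, ("A", "B"): 20, ("B", "C"): 30}
cost, path = best_path("A", "C", ["A", "B", "C"], latency)
# path == ["A", "B", "C"], cost == 50
```

One of the paper's findings (noted below) is that this single-intermediate-hop search space is usually enough; RON does not need to consider longer overlay paths.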

There is no authentication in RON, so all nodes must implicitly trust one another. RON does, however, let node providers specify complex policies about which traffic to accept, though the lack of authentication constrains how well such policies can be enforced. It also makes it difficult to bill any particular entity for traffic, which matters because RON nodes need to be quite powerful.

The authors found that diverting a route through just one intermediate hop yields a significant performance boost and resolves most path problems, and that routing over RON restored connectivity in less than 20 seconds, much faster than BGP reconvergence.

RON does not scale well, so RONs need to be limited in size to about 50 nodes. The bottleneck is that each node actively probes the path to every other node and maintains a database of the results, so overhead grows quadratically with the size of the network. However, there have been follow-ups to the work that try to improve on this scalability.
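A back-of-the-envelope calculation shows why all-pairs probing caps the network size. The probe size and interval below are illustrative assumptions, not the paper's exact figures; the point is only the quadratic growth:

```python
# Sketch of RON's probing overhead. probe_bytes and interval_s are
# assumed values for illustration, not the paper's measured numbers.

def probe_overhead(n, probe_bytes=69, interval_s=12.0):
    """Total probe traffic for an n-node RON, in bytes/second.

    Each node probes every other node, so the full mesh sends
    n * (n - 1) probes per interval: quadratic in n.
    """
    probes_per_interval = n * (n - 1)
    return probes_per_interval * probe_bytes / interval_s

for n in (10, 50, 100):
    print(n, round(probe_overhead(n)), "B/s")
```

Doubling the network roughly quadruples the probe traffic (and each node's measurement database grows the same way), which is why RONs stay small.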

Tuesday, January 18, 2011

Thoughts on Overlay Networks and the Future of the Internet

Authors: Dave Clark, Bill Lehr, Steve Bauer, Peyman Faratin, Rahul Sami, John Wroclawski

Venue: Communications and Strategies Journal, no. 63, 3rd quarter 2006, p1


The paper provides a good overview of overlays and attempts a formal definition and taxonomy.

An overlay is a set of servers deployed across the Internet that:

  1. Provide infrastructure to one or more applications,
  2. Take responsibility for the forwarding and handling of application data in ways that are different from or in competition with what is part of the basic Internet,
  3. Can be operated in an organized and coherent way by third parties (which may include collections of end-users).

The authors then classify overlays into six categories:

  1. Peer-to-peer, e.g. Napster and Gnutella
  2. CDN, e.g. Akamai
  3. Routing, e.g. RON
  4. Security, e.g. VPNs, Tor, Entropy
  5. Experimental, e.g. PlanetLab, I3
  6. Other, e.g. email, Skype, MBone
The authors assert that overlays do not follow the end-to-end principle: from the IP layer's point of view, overlay servers are simply end-nodes, but from the application's point of view they are infrastructure.

The paper discusses policy issues and the relationship between industry structure and overlays, asking several thought-provoking questions. It then goes into depth discussing the implications of CDN overlays, security overlays, and routing overlays.

One passage I really enjoyed was the description of why BGP is insufficient:
... Broadly speaking, BGP allows each ISP to express its policies for accepting, forwarding, and passing off packets using a variety of control knobs. BGP then performs a distributed computation to determine the "best" path along which packets from each source to each destination should be forwarded. 
This formulation raises two difficulties, one fundamental and one pragmatic. The first of these is that the notion of "best" is in fact insufficient to fully express the routing task. "Best" is a single dimensional concept, but routing is a multi-dimensional problem. Individual ISPs, in making their routing decisions, may choose to optimize a wide variety of properties. Among these might be 1) the cost of passing on a packet; 2) the distribution of traffic among different physical links within their infrastructure to maximize utilization and minimize congestion -  so-called traffic engineering; and 3) performance in some dimension, such as bandwidth available to the traffic or transmission delay across the ISP. Furthermore, because the management of each ISP chooses its own objectives, different ISPs may choose to optimize different quantities, leading to an overall path that captures no simple notion of "best", and rarely if ever is best for the user. 
A second, pragmatic problem with the current internet routing infrastructure is that it has evolved over time from one in which simple technical objectives dominated to one in which ISPs often wish to express complex policy requirements. For this reason the knobs - the methods available within BGP to control routing choices - have also evolved over time, and are presently somewhat haphazard and baroque. This compounds the fundamental problem by making it harder for ISPs to express precisely the policies they desire, even after those policies are known.
The paper overall is an easy, entertaining read and gives a nice overview of the issues surrounding overlays and their use and deployment in the Internet.