Network congestion, the state where nodes receive more data than they can handle, leads to packet losses, increased network delay, and reduced throughput for all data flows passing through a congested node. Despite four decades of research on congestion-control algorithms, there is still no "one-size-fits-all" solution. At least 30 different congestion-control algorithms exist today, and new ones are still proposed regularly. Determining how any one scheme falls short in comparison to the rest, and, most importantly, along which dimensions, is quite difficult. Even performing all pairwise comparisons between schemes is hard, because each algorithm behaves differently depending on the underlying network environment. Moreover, determining under which regime (i.e., combination of network latency, buffer size, bandwidth, etc.) a given scheme fares better or worse than another has proven challenging.
To address the above, we have proposed a congestion-control benchmark platform. In a benchmarking test, network conditions get harsher if the protocol performs well, and milder if it does not. In this way, the benchmark platform can identify the conditions under which a protocol performs optimally; and by creating situations where a protocol does worse, it helps show why that is the case. The paper "Pantheon: the training ground for Internet congestion-control research", published at USENIX ATC '18, took the initial steps towards formally addressing the lack of a platform for evaluating congestion-control algorithms. Pantheon allows congestion-control algorithms to be tested in a reproducible manner and serves as a 'training ground' for newer algorithms. There are, however, many open research challenges that still need to be addressed.
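The adaptive benchmarking idea above can be pictured with a small sketch. The names here (`run_experiment`, the 70% utilization cutoff, the halving/growth factors) are illustrative assumptions, not part of Pantheon or our actual platform:

```python
# Hypothetical sketch of an adaptive benchmarking loop: the emulated
# network gets harsher when the scheme does well, gentler when it fails.

def run_experiment(bandwidth_mbps, rtt_ms):
    """Placeholder for one emulator run; returns achieved throughput (Mbps).

    A toy model: the scheme under test achieves ~80% of link capacity.
    """
    return 0.8 * bandwidth_mbps

def adaptive_benchmark(bandwidth_mbps=100.0, rtt_ms=50.0, rounds=10):
    """Shrink the link when the scheme keeps up; grow it when it cannot."""
    history = []
    for _ in range(rounds):
        achieved = run_experiment(bandwidth_mbps, rtt_ms)
        utilization = achieved / bandwidth_mbps
        history.append((bandwidth_mbps, utilization))
        if utilization >= 0.7:   # performing well -> harsher conditions
            bandwidth_mbps *= 0.5
        else:                    # struggling -> relax conditions
            bandwidth_mbps *= 1.5
    return history
```

The sequence of (bandwidth, utilization) pairs in `history` traces out the boundary where the scheme stops keeping up, which is exactly the regime information the benchmark is after.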
The first goal of our project is to extend the Pantheon to test congestion-control schemes along several dimensions. To address some of the functionality missing from the original Pantheon codebase, we have implemented several features, including the ability to emulate per-flow delay, the ability to run large batch experiments, and the ability to run byte-size-limited flows. The second major goal of the project is determining how to efficiently explore the high-dimensional experiment-configuration space. When evaluating TCP mechanisms like congestion control, it is important to use real-world traffic characteristics to ensure that the performance of the congestion-control algorithm in the emulated environment is as close as possible to its performance in the real Internet. As an initial step towards this goal, we have analyzed passive, anonymized Internet traffic traces captured on a backbone link of a Tier-1 ISP between New York, USA and São Paulo, Brazil. The statistics analyzed include flow inter-arrival time and the flow size distribution, as seen in the figure below.
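The two statistics mentioned above can be computed from a flow log with a few lines of Python. The `(start_time, size)` record format here is an assumption for illustration; real backbone traces require their own parsing:

```python
# Sketch: compute flow inter-arrival times and an empirical flow-size
# distribution from a list of (start_time_s, size_bytes) flow records.

def flow_statistics(flows):
    """flows: unordered list of (start_time_s, size_bytes) tuples."""
    starts = sorted(t for t, _ in flows)
    # Gaps between consecutive flow arrivals.
    inter_arrivals = [b - a for a, b in zip(starts, starts[1:])]
    sizes = sorted(size for _, size in flows)
    # Empirical CDF: fraction of flows at or below each observed size.
    size_cdf = [(s, (i + 1) / len(sizes)) for i, s in enumerate(sizes)]
    return inter_arrivals, size_cdf
```

Fitting distributions to these empirical statistics is what lets the emulated traffic mix mirror the measured one.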
Performing this analysis not only gives us a realistic composition of evaluation parameters for our benchmark; it also confirmed the 'elephant' and 'mice' flow attributes of Internet traffic. Mice flows transfer a small total number of bytes (e.g., less than 10 KB) and are short-lived; elephant flows, on the other hand, are large and long-lived. From the analysis, we saw that while mice flows dominate in terms of the number of flows, elephant flows dominate by volume of data transferred. Because mice flows are so short-lived, they typically complete before congestion control can take effect; in effect, elephant flows are congestion-controlled while mice flows are not, so mice flows may fill up network buffers on-path and negatively impact elephant flows. Towards the objective of fairly sharing on-path network resources between mice and elephant flows, current work involves using the extended platform to evaluate how various congestion-control algorithms handle mice and elephant flows under different network conditions. We plan to extend these experiments to lab-based hardware tests as well as tests in the wild (i.e., in the public Internet). Our findings should help identify which congestion-control algorithms best handle interactions between mice and elephant flows. Additional questions that we hope to answer as part of this ongoing project include:
1. Can we use online learning to sample the high-dimensional experiment-configuration space, and if so, which learning algorithm is appropriate for sampling such a parameter space?
2. In what situations are current congestion-control algorithms fair to one another and when should we expect fairness?
3. What performance metrics should we consider when evaluating congestion control?
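The mice/elephant asymmetry noted earlier (mice dominate flow count, elephants dominate bytes) can be made concrete with a small sketch. The 10 KB cutoff follows the definition above, while the sample numbers in the usage note are invented for illustration:

```python
# Sketch: split flows into mice (< 10 KB total) and elephants, then
# compare their shares of flow count vs. bytes transferred.

MICE_THRESHOLD_BYTES = 10 * 1024  # 10 KB cutoff, as defined above

def mice_elephant_shares(flow_sizes_bytes):
    """flow_sizes_bytes: list of per-flow byte totals."""
    mice = [s for s in flow_sizes_bytes if s < MICE_THRESHOLD_BYTES]
    elephants = [s for s in flow_sizes_bytes if s >= MICE_THRESHOLD_BYTES]
    total_flows = len(flow_sizes_bytes)
    total_bytes = sum(flow_sizes_bytes)
    return {
        "mice_count_share": len(mice) / total_flows,
        "mice_byte_share": sum(mice) / total_bytes,
        "elephant_count_share": len(elephants) / total_flows,
        "elephant_byte_share": sum(elephants) / total_bytes,
    }
```

With, say, 90 mice of 2 KB each and 10 elephants of 50 MB each, mice account for 90% of flows but well under 1% of the bytes, which is the shape of the asymmetry our trace analysis confirmed.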