Resource Reservation Considered Harmful

The main perceived advantage of connection-oriented networks is their ability to provide a guaranteed level of service by reserving bandwidth for particular virtual circuits. Obviously, that function is impossible in connectionless networks (RSVP essentially converts an IP network from connectionless into a hybrid network, thereby introducing all the problems inherent in connection-oriented networks).

Resource reservation does not add any bandwidth; it is essentially a prioritization scheme. As such, it only trades one quality for another: the ability to achieve lossless transmission is bought at the risk of being denied service when insufficient resources are available. If a network is overloaded by 30%, there will be either 30% packet loss or a 30% chance of refusal of service, respectively. In fact, no realistic traffic stream has fixed bandwidth (save for legacy isochronous streams generated by POTS), yet to guarantee zero packet loss a reservation must be made for the entire peak capacity requested. That means the probability of refusal of service by a VC-based network will be much higher than the probability of losing a packet in an equally loaded connectionless network.
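A toy simulation, with entirely made-up numbers, illustrates the arithmetic: a link must reserve each stream's peak rate to guarantee zero loss, while a best-effort link can admit far more of the same variable-rate streams and lose only a small fraction of traffic during coincident bursts.

```python
import random

random.seed(42)

CAPACITY = 100       # hypothetical link capacity, in bandwidth units
PEAK, MEAN = 10, 4   # each stream bursts to 10 units but averages 4

# Peak-rate reservation: a VC network must set aside PEAK units per stream
# to guarantee zero loss, so it can admit at most CAPACITY // PEAK streams.
admitted_vc = CAPACITY // PEAK   # every further request is refused outright

# Best-effort: admit twice as many streams and measure overflow loss.
streams = 20
losses = total = 0
for _ in range(100_000):
    # model each stream as bursting to PEAK with probability MEAN / PEAK
    demand = sum(PEAK for _ in range(streams) if random.random() < MEAN / PEAK)
    total += demand
    losses += max(0, demand - CAPACITY)   # traffic beyond capacity is dropped

print(admitted_vc)               # 10 streams admitted with reservation
print(round(losses / total, 3))  # small fraction of traffic lost at 20 streams
```

Under these assumed figures the reservation network turns away half of the offered streams, while the best-effort network carries all twenty and drops only a few percent of packets.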

Another factor to consider is a network's ability to handle severe overloading. A resource-reservation network can only refuse service; a best-effort network degrades the quality of service gradually until it becomes unacceptable. In other words, the two kinds of network have different modes of degradation - one-step denial of service versus gradual degradation - and it is an engineering rule of thumb that gracefully-degrading systems need far fewer spare resources to accommodate normal usage peaks. A resource-reservation network is also less fair, in the sense that it penalizes short-lived connections while favoring longer-lived connections that are already established, whereas a best-effort network degrades service for all connections equally.

As the reader can see, there are very good reasons to avoid resource reservation altogether and instead to develop transport protocols able to sustain packet loss and to cooperate in congestion control. At least one such protocol for asynchronous reliable delivery (TCP) is widely used and has been shown to be very stable even in massively overloaded networks, so resource-reservation development is now concentrated on real-time audio and video delivery protocols.

In our opinion, such development is seriously misguided, because both audio and video transmissions may be significantly reduced in quality in case of congestion without losing their utility. For audio, a rate reduction can be achieved by reducing sample resolution (bits per sample) and/or sampling frequency. Video signals are even more flexible, as video quality can be reduced by lowering image resolution, frame rate, color depth, or the number of levels of brightness. A suitable transport protocol would feed the requested source rate back to the transmitting application, so as to perform TCP-style cooperative congestion control.
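Such rate feedback could follow the same additive-increase / multiplicative-decrease discipline that TCP uses. A minimal sketch, with hypothetical class name and constants (the kbps bounds and step size are illustrative, not from the original):

```python
# Sketch of TCP-style cooperative congestion control applied to the
# target rate of a media encoder; all constants are assumptions.
class AdaptiveRateController:
    def __init__(self, min_kbps=64, max_kbps=2048, step_kbps=32):
        self.min_kbps, self.max_kbps, self.step = min_kbps, max_kbps, step_kbps
        self.rate = min_kbps          # start conservatively at the floor

    def on_ack(self):
        # no congestion reported: probe for more bandwidth additively
        self.rate = min(self.max_kbps, self.rate + self.step)

    def on_loss(self):
        # congestion signal: back off multiplicatively, as TCP does
        self.rate = max(self.min_kbps, self.rate // 2)

ctl = AdaptiveRateController()
for _ in range(10):                   # ten loss-free feedback rounds
    ctl.on_ack()
print(ctl.rate)                       # 384 kbps after additive growth
ctl.on_loss()
print(ctl.rate)                       # 192 kbps after one backoff
```

The encoder would then map the target rate onto concrete quality knobs - sample resolution and sampling frequency for audio, resolution and frame rate for video.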

There are several techniques to accommodate packet loss in real-time transmission protocols. One such technique is to "stripe" samples between packets, so that the contents of a missing packet can be approximated by interpolation. Another technique is to send critical information in higher-priority packets; such critical information can include low-resolution frames in video, or low-frequency samples in audio. Such high-priority sub-streams would constitute only a small fraction of the total bandwidth, but would preserve the integrity of A/V output in case of serious congestion.
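The striping technique can be sketched as follows, under the simplifying assumptions of two packets per block and linear interpolation (function names are illustrative):

```python
# Stripe consecutive audio samples across two packets; if one packet
# is lost, approximate its samples from the surviving neighbors.
def stripe(samples):
    # even-indexed samples go in one packet, odd-indexed in the other
    return samples[0::2], samples[1::2]

def reconstruct(even, odd_lost_len):
    # the odd packet was lost: rebuild the stream, filling each gap
    # with a linear interpolation of its surviving neighbors
    out = []
    for i, s in enumerate(even):
        out.append(s)
        if i + 1 < len(even):
            out.append((s + even[i + 1]) // 2)   # interpolated sample
        elif len(out) < len(even) + odd_lost_len:
            out.append(s)                        # repeat last sample at the edge
    return out

samples = [0, 10, 20, 30, 40, 50]
even, odd = stripe(samples)           # [0, 20, 40] and [10, 30, 50]
approx = reconstruct(even, len(odd))  # [0, 10, 20, 30, 40, 40]
print(approx)
```

For this smoothly varying signal the interpolated stream matches the original almost exactly; real audio would show mild distortion rather than the gap or glitch a wholly lost packet causes.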

A combination of adaptive-rate transmission and loss-resistant A/V transport obviates any need for resource reservation for real-time streams.

The remaining argument in favor of resource reservation is that certain customers would be willing to pay more for a guaranteed level of service. As we have seen, resource reservation per se does not guarantee a better level of service - in fact, unlike best-effort delivery, it does not guarantee that the service will be available at all. Any guarantee of quality of service rests entirely on the service provider provisioning adequate backbone capacity and restricting priority usage of that capacity by technical or economic means.

In connectionless networks, premium service levels can be provided with simple static priorities. Service providers can ensure sufficient capacity to guarantee a certain quality of service by limiting the rate of high-priority packets each customer generates - for example, by randomly lowering the priority indicators of excess high-priority packets. Such rate-based priority mix controls can also be inserted between service providers at exchange points.
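One plausible realization - a sketch only, with invented class name and parameters - meters high-priority traffic through a token bucket and randomly demotes, rather than drops, packets that exceed the contracted rate:

```python
import random

# Sketch of a rate-based priority mix control: a token bucket meters
# high-priority packets; excess ones are randomly demoted to low priority.
# All names and parameters here are assumptions, not a real device's API.
class PriorityDemoter:
    def __init__(self, rate_pps, burst, demote_prob=0.9):
        self.rate, self.burst = rate_pps, burst
        self.tokens = float(burst)        # bucket starts full
        self.demote_prob = demote_prob

    def refill(self, elapsed_s):
        # tokens accumulate at the contracted high-priority rate
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def classify(self, priority):
        if priority == "high":
            if self.tokens >= 1:
                self.tokens -= 1          # within contract: priority kept
            elif random.random() < self.demote_prob:
                return "low"              # excess: randomly demoted
        return priority

# with demote_prob=1.0 the demotion is deterministic, for illustration
d = PriorityDemoter(rate_pps=10, burst=2, demote_prob=1.0)
out = [d.classify("high") for _ in range(5)]
print(out)   # ['high', 'high', 'low', 'low', 'low']
```

Because excess packets are demoted instead of dropped, a customer exceeding the contract still gets delivery, just at best-effort priority.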

To avoid reordering packets when statistical priority mix controls are used, priority should affect only the drop policy, i.e. the choice of which packet to discard from an overflowing queue. Higher-priority packets should not receive any advantage in queue placement (in any case, such reordering gains little: high-priority TCP streams would fill the queues anyway, driving latency up to the same level as without reordering). A proper gateway implementation would also allow network operators to ensure that at least some fraction of bandwidth always remains available to low-priority traffic.
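A minimal sketch of such a queue (hypothetical class and drop rule): service order is strictly FIFO, and priority is consulted only when the queue overflows, at which point the lowest-priority packet nearest the tail is discarded.

```python
from collections import deque

# Priority affects only the drop policy; it never reorders the queue.
class DropPriorityQueue:
    def __init__(self, limit):
        self.limit = limit
        self.q = deque()                  # (priority, packet) in arrival order

    def enqueue(self, priority, packet):
        self.q.append((priority, packet))
        if len(self.q) > self.limit:
            # victim: lowest priority, ties broken toward the tail
            victim = min(range(len(self.q)),
                         key=lambda i: (self.q[i][0], -i))
            del self.q[victim]

    def dequeue(self):
        # service is FIFO regardless of priority, so no reordering occurs
        return self.q.popleft()[1]

q = DropPriorityQueue(limit=3)
for prio, pkt in [(1, "a"), (0, "b"), (1, "c"), (0, "d")]:
    q.enqueue(prio, pkt)                  # low-priority "d" overflows and is dropped
first = q.dequeue()
print(first)                              # "a" - arrival order preserved
```

Reserving a minimum share for low-priority traffic could be added by exempting the last few low-priority packets from victim selection; that refinement is omitted here for brevity.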

Another approach for restricting priority usage is to count high-priority packets and charge premium rates for generating such packets.

The conclusion, therefore, is that resource reservation is neither necessary nor particularly advantageous for carrying real-time audio and video traffic, or for providing premium service to certain customers. On the other hand, it has numerous drawbacks and limitations: it requires fat-state switching (which was shown above to lack the scalability necessary in a growing global network), and it necessitates overbuilding of networks because of its significantly worse mode of degradation under heavy load. The routing protocols required by resource reservation are also significantly more complicated and therefore much more likely to have faulty implementations. In other words, a network featuring resource reservation would be significantly more expensive and less reliable than a pure best-effort delivery network of the same capacity providing comparable services to its users.