Why QUIC is not a replacement for TCP

Systems Approach Some would say there is a good chance that QUIC is set to replace TCP. This week I want to argue that QUIC is actually solving a different problem than TCP, and so it should be seen as more than just a TCP replacement.

It may well be that for some (or even most) applications QUIC becomes the default transport, but I think that’s because TCP has been pushed into roles it was never originally intended to fill. Let’s take a step back to see why I make that claim.

In 1995, Larry Peterson and I were working on the first edition of Computer Networks: A Systems Approach, and we had reached the point of writing the chapter on transport protocols, which we titled “End-to-End Protocols”.

In those days there were only two prominent transport protocols on the Internet, UDP and TCP, so we gave each of them its own section. Since our book is intended to teach networking principles rather than just the contents of RFCs, we framed the two sections as two different communication paradigms: a simple demultiplexing service (exemplified by UDP) and a reliable byte stream (TCP).

But there was also a third paradigm that Larry argued we needed to cover, for which there wasn’t really a well-known example of an Internet protocol: Remote Procedure Call (RPC). The examples we used to illustrate RPC in 1995 seem quaint now: SunRPC and a homegrown example from Larry’s research at the time on the x-kernel. These days, there are many options for RPC implementations running over IP, with gRPC being one of the best known examples.

Why did we feel the need for a whole section on RPC, when most other networking books covered only TCP and UDP? For one thing, RPC was one of the key research areas in the distributed systems community at the time, with the 1984 paper by Birrell and Nelson driving a generation of RPC-related projects. And, from our point of view, a reliable byte stream is not the right abstraction for RPC.

The core of RPC is a request/response paradigm: the client sends a bunch of arguments to the server, the server performs some computation on those arguments, and then it returns the results. Yes, a reliable byte stream might help get all the arguments and results across the network correctly, but RPC is more than that.

Leaving aside the problem of serializing arguments for transmission over a network (which we also cover later in the book), RPC isn’t really about transferring a stream of bytes; it’s about sending a message and getting a response. So it looks a bit more like a datagram service (as provided by UDP or IP), but it requires more than just unreliable datagram delivery.

RPC needs to handle lost, out-of-order, and duplicate messages; an identifier space is required to match requests with responses; and message fragmentation/reassembly must be supported, to name a few requirements. Out-of-order delivery, which a reliable byte stream rules out, is also desirable for RPC. There is a reason so many RPC frameworks emerged in the 1980s and 1990s: people building distributed systems needed an RPC mechanism, and nothing was on offer in the standard TCP/IP protocol suite. (RFC 1045 actually defines an experimental RPC-oriented transport, but it never seems to have caught on.) Nor was it obvious then that TCP/IP would become as pervasive as it is today, so some RPC frameworks (DCE, for example) were designed to be independent of the underlying network protocols.
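
To make one of those requirements concrete, here is a minimal sketch of request/response matching over UDP. It is purely illustrative, not any real framework: the JSON wire format, the function names, and the loopback demo are all invented for the example, and retransmission, fragmentation, and duplicate suppression are omitted.

```python
# Minimal request/response matching over UDP, illustrating one requirement
# from the list above: an identifier space that ties responses to requests.
import itertools
import json
import socket
import threading

def serve(sock, handlers):
    """Answer each request datagram, echoing its id so the caller can match."""
    while True:
        data, addr = sock.recvfrom(65535)
        request = json.loads(data)
        result = handlers[request["method"]](*request["args"])
        sock.sendto(json.dumps({"id": request["id"], "result": result}).encode(), addr)

_txid = itertools.count(1)  # the identifier space for outstanding calls

def rpc_call(sock, server_addr, method, args, timeout=1.0):
    """Send one request and wait for the response carrying the same id."""
    txid = next(_txid)
    sock.sendto(json.dumps({"id": txid, "method": method, "args": args}).encode(),
                server_addr)
    sock.settimeout(timeout)
    while True:
        data, _ = sock.recvfrom(65535)
        response = json.loads(data)
        if response.get("id") == txid:  # discard strays and duplicates
            return response["result"]

# Demo: server and client in one process, over loopback.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))
threading.Thread(target=serve, args=(server_sock, {"add": lambda a, b: a + b}),
                 daemon=True).start()

client_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(rpc_call(client_sock, server_sock.getsockname(), "add", [2, 3]))  # 5
```

The transaction id is the identifier space mentioned above: it is the only thing that lets the client tell which datagram answers which call.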

The lack of support for RPC in the TCP/IP stack set the stage for QUIC.

When HTTP came along in the early 1990s, it wasn’t trying to solve an RPC problem so much as an information exchange problem, but it did implement request/response semantics. The designers of HTTP, lacking obviously better options, decided to run HTTP over TCP, with poor performance in early versions due to the use of a new connection for each “GET”.

A variety of tweaks, such as pipelining, persistent connections, and parallel connections, were introduced to improve HTTP’s performance, but TCP’s reliable byte stream model was never a perfect fit for HTTP.

With the introduction of Transport Layer Security (TLS) causing yet another set of back-and-forth exchanges of cryptographic information, the mismatch between what HTTP needed and what TCP provided became increasingly apparent. This was well articulated in the 2012 QUIC design document by Jim Roskind: head-of-line blocking, poor congestion response, and the additional RTTs introduced by TLS were all identified as inherent problems of running HTTP over TCP.
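
To see what head-of-line blocking looks like, consider a toy model of TCP’s in-order delivery (my own illustration, not real TCP code): a single ordered byte stream carries two independent requests, and a lost segment early in the stream stalls everything behind it, even bytes that belong to the other request.

```python
# Toy model of TCP's in-order delivery: the application may only read
# contiguous bytes, so a missing segment stalls everything behind it,
# even data belonging to a different HTTP request.
class OrderedStream:
    def __init__(self):
        self.next_seq = 0   # next byte offset the application may read
        self.buffer = {}    # out-of-order segments, keyed by byte offset

    def on_segment(self, seq, data):
        self.buffer[seq] = data
        delivered = b""
        while self.next_seq in self.buffer:  # deliver only contiguous data
            chunk = self.buffer.pop(self.next_seq)
            delivered += chunk
            self.next_seq += len(chunk)
        return delivered

stream = OrderedStream()
# The segment at offset 0 (part of request A) is lost in transit.
# Request B's data arrives but cannot be delivered: it sits behind the gap.
print(stream.on_segment(4, b"req-B"))  # b'' -- head-of-line blocked
# A retransmission of the missing segment finally unblocks everything.
print(stream.on_segment(0, b"A..."))   # b'A...req-B'
```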

One way to frame what happened here is this: the “narrow waist” of the Internet was originally just the Internet Protocol, intended to support a variety of protocols above it. But somehow the “waist” grew to include TCP and UDP as well, and those were the only transports available. If you just wanted a datagram service, you could use UDP. If you needed any kind of reliable delivery, TCP was the answer. If you needed something other than unreliable datagrams or reliable byte streams, you were out of luck. It was too much to ask TCP to be everything to so many higher-layer protocols.

QUIC is doing a lot of work: its definition spans three RFCs, covering the basic protocol (RFC 9000), its use of TLS (9001), and its congestion control mechanisms (9002). But in essence it is an implementation of the missing third paradigm for the Internet: RPC.

If what you really want is a reliable byte stream, as when you’re downloading that multi-gigabyte OS update, then TCP really is well designed for the job. But HTTP(S) is much more like RPC than a reliable byte stream, and one way to look at QUIC is that it finally delivers the RPC paradigm to the Internet protocol suite.
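
A companion sketch to the toy model above (again my own illustration, not the actual QUIC machinery) shows the difference per-stream ordering makes: since QUIC gives each request/response exchange its own stream, a gap in one stream no longer stalls another.

```python
# Per-stream ordering, roughly as QUIC provides it: each stream reassembles
# independently, so a gap in one stream never stalls another.
class Stream:
    def __init__(self):
        self.next_seq = 0
        self.buffer = {}

    def on_segment(self, seq, data):
        self.buffer[seq] = data
        delivered = b""
        while self.next_seq in self.buffer:
            chunk = self.buffer.pop(self.next_seq)
            delivered += chunk
            self.next_seq += len(chunk)
        return delivered

class Connection:
    """Independent streams multiplexed over one connection."""
    def __init__(self):
        self.streams = {}

    def on_segment(self, stream_id, seq, data):
        return self.streams.setdefault(stream_id, Stream()).on_segment(seq, data)

conn = Connection()
# Stream 1's first segment is lost, but stream 2 is delivered immediately.
print(conn.on_segment(2, 0, b"req-B"))  # b'req-B' -- no cross-stream blocking
print(conn.on_segment(1, 4, b"tail"))   # b'' -- only stream 1 waits for its gap
```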

That will certainly benefit applications running over HTTP(S), including, in particular, gRPC and all those RESTful APIs we depend on.

When we wrote about QUIC previously, we found it to be a good case study in how the layering of a system can be rethought as requirements become clearer. The point here is that TCP meets one set of requirements, those of a reliable byte stream, and its congestion control algorithms continue to evolve in service of those requirements.

QUIC meets a genuinely different set of requirements. And since HTTP is so fundamental to the Internet today (indeed, it has been argued that it is becoming the new “narrow waist”), it may well be that QUIC becomes the dominant transport protocol, not because it exactly replaces TCP, but because it satisfies the needs of the dominant applications above it. ®
