Faster Protocols and the Future of Next-Generation Firewalls

Written by Dave Robinson, Senior Technology Specialist, End to End Networks

In 2009, +Google announced that it was developing a new protocol: SPDY, pronounced “speedy.” It was supposed to improve on the speed of traditional HTTP traffic. When I read the announcement, I thought this was Google’s pursuit of efficiency gone mad.

Google has always tried to push efficiency as far as possible. They made JavaScript scream in +Google Chrome, they run some of the most energy-efficient data centres in the world, and they’ve provided a way to save bytes when downloading web fonts. The list goes on and on. If there is something inefficient out there on the Web, you can bet Google is working on it.

SPDY seemed silly to me, though. Granted, SPDY could gain something in efficiency, but could it really save all that much? And was anyone really going to start using SPDY in place of regular old HTTP? Was Google really going to convince web servers and browsers to adopt this new standard? That seemed so far-fetched to me that, while I could acknowledge SPDY had some merits, I thought it a waste of time.

That was in 2009. A year later in 2010, Google’s fast-growing browser, Chrome, announced support for SPDY. In early 2011, Google’s own services deployed SPDY. Now one could see that at least from Google’s perspective, all its servers could run SPDY and any Chrome browser out there could take advantage. These little efficiencies would make Google sites seem faster for Chrome users and would also remove some burden from Google’s own servers.

In 2012, Twitter enabled SPDY on its servers, and the party hasn’t stopped since: Firefox, Internet Explorer, WordPress, Facebook, Opera, Amazon. The list goes on and on. One by one, everyone fell in line to support SPDY. Then it was announced that HTTP/2, the successor to the ubiquitous HTTP/1.1 that we all use every day, would use SPDY as the base for its technical specification.

In less than 6 years, Google had turned what seemed like a bad joke in 2009 into the new standard for web communication. HTTP/2 was published as a proposed standard by the IESG (Internet Engineering Steering Group) in February of 2015.

This didn’t happen because of Google’s muscle, though. It happened because SPDY really was fast. It included many enhancements over traditional HTTP. Through a technique called multiplexing, SPDY allows multiple HTTP sessions to share a single connection, which greatly reduces the round trips otherwise spent setting up each individual session. TCP itself requires three packets to set up a session, and TLS layered on top of that requires another four. All of this means multiple round trips before any data can be sent. SPDY does the setup once, and all data then flows in multiple sessions over that single connection.
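
A toy sketch of the multiplexing idea: several streams interleaved as length-prefixed frames over one connection. The frame layout and the `mux`/`demux` names here are invented for illustration; SPDY’s real framing carries more fields (flags, priorities, control frames).

```python
import struct

def mux(frames):
    """Interleave (stream_id, payload) frames into one byte stream.
    Each frame: 4-byte stream id, 4-byte length, then the payload."""
    out = bytearray()
    for stream_id, payload in frames:
        out += struct.pack("!II", stream_id, len(payload)) + payload
    return bytes(out)

def demux(data):
    """Reassemble per-stream payloads from the single byte stream."""
    streams = {}
    offset = 0
    while offset < len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        streams[stream_id] = streams.get(stream_id, b"") + data[offset:offset + length]
        offset += length
    return streams
```

Two requests can be in flight at once: frames for stream 1 and stream 3 share the wire, and the receiver reassembles each stream independently.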

SPDY also reduces the size of HTTP headers by compressing them and eliminating useless ones. This makes the same requests faster. On top of that, SPDY allows for prioritization of some streams (or sessions) over others, and even allows the server to push some data out on its own.
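
SPDY’s header compression was based on zlib’s DEFLATE with a preset dictionary of common header names (HTTP/2 later replaced this with HPACK). A minimal illustration of why it helps, using plain zlib without the dictionary, on an invented but typical header block:

```python
import zlib

# A representative (made-up) HTTP/1.1 request header block.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64)\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abc123; theme=dark\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers)
# Repetitive header text compresses well, and these headers are
# re-sent on every request, so the savings add up.
```

Lossless round-tripping is the whole point: the receiver inflates the block and sees exactly the headers that were sent, just in fewer bytes on the wire.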

The only issue I saw with SPDY was that it made encryption mandatory. Google is all about security, so I wasn’t surprised that this was the case. However, if you run a site that simply puts information on the web, with no need for user input or interaction, is there any need to encrypt that data? For example, the Canadian government has a site that allows people to read the various laws online. There is nothing to do on that site other than read law. There is no user input available and the site is open to all. Why should anyone care whether that data is encrypted when transmitted over the Internet? Mandatory encryption creates the potential for SPDY to be slower than HTTP in certain cases, and it also burdens servers unnecessarily.

As it turns out, others weren’t happy about mandatory encryption either. Cisco voiced major concerns because small appliances like routers and switches would be hard-pressed to enroll and renew certificates regularly, especially when proper certificates cost money. The issue is complex, but it basically meant that Cisco would either have to leave HTTP/2 out of its devices, or implement it and cause users to see certificate and security warnings in their browsers when accessing Cisco devices. Neither is a good option. Thankfully, HTTP/2 parted ways with SPDY on this one point and does not mandate encryption, though it’s unclear whether any browsers will allow anything other than an encrypted session.

Though that was my major concern, there were more fundamental issues. It’s not that people didn’t notice those issues, it’s more that we all just accepted them as the fabric of the Internet. They were unalterable realities that restricted our protocols. They were so well known and considered so unsolvable that they just blended into the background and were accepted and forgotten.

The more fundamental problems that we have on the Internet are TCP problems. TCP isn’t going anywhere and while it can be improved here and there (and it has been over the years) it can only be altered so much. Not only that, but TCP is deeply embedded into operating systems. Changing it generally means replacing kernels or upgrading entire operating systems. This is not easily done, and it’s certainly not something that can be globally done. It’s a major problem.

Consider video streaming. When you’re watching a video on a congested Internet connection, you are probably going to lose some packets. TCP requires that those packets be retransmitted. Even if you are into the 10th second of video, TCP would retransmit a frame in the 9th second if it was missed. The problem is that this adds to the congestion that’s already taking place — all for segments of data that you no longer care about. TCP is generally poor at a number of other things too, and though some of these can be altered, upgrading the world’s operating systems is a hard goal to achieve.

This is why Google is finished with SPDY and is now working on QUIC. QUIC will solve some of the problems associated with TCP by operating over UDP instead. Paradoxically, QUIC serves as a second transport layer on top of UDP, but will be faster than simply using one transport layer in TCP. QUIC will function in much the same way as SPDY did, but without the limitations of TCP, QUIC can be much faster.

As opposed to the seven packets needed to set up a TCP+TLS+SPDY connection, QUIC will require only two packets (one round trip) to set up an initial session. On subsequent connections, QUIC can just send data with no connection setup at all, using cached information.

Since UDP is a connectionless, unreliable protocol, it has no retransmissions of its own. QUIC streams can layer reliability on top of UDP or remain unreliable. This makes QUIC flexible: it can require retransmissions like TCP does today, or it can be the protocol we need for streaming video, with no retransmissions. This isn’t 100% clear from the spec, but there’s no reason not to do it, and the spec at least seems to hint at it.
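
A toy model of that per-stream choice, with reliability as a simple flag. Everything here (the function name, the loss simulation) is invented for illustration, not QUIC’s actual machinery:

```python
def deliver(packets, lost, reliable):
    """Simulate delivery where packets whose sequence number is in
    `lost` are dropped on first send. A reliable stream retransmits
    them; an unreliable one just moves on and tolerates the gap."""
    received = []
    for seq, data in enumerate(packets):
        if seq in lost:
            if reliable:
                received.append(data)  # arrives via a later retransmission
            # unreliable: skip it; fine for a video frame we're past anyway
        else:
            received.append(data)
    return received
```

A file transfer would run with `reliable=True`; a live video stream could run with `reliable=False` and simply never resend the frame from the 9th second.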

TCP limits SPDY streams when there is trouble on the line: one stream suffering packet loss stalls all the other streams sharing the connection (so-called head-of-line blocking). This isn’t the case with QUIC. Since it’s carried over simple UDP, QUIC is fully in control and can confine packet loss to the stream that experienced it.

QUIC also does some very clever error correction. It can send FEC (Forward Error Correction) packets at regular intervals, which work a bit like RAID, but for packets. A parity packet is sent so that if a packet is lost, it can be recomputed from the other packets and the FEC packet. No retransmission is needed at all. This is very clever indeed.
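
A sketch of the XOR-parity idea, assuming equal-length packets in a FEC group (a simplification; the real scheme has more machinery, and FEC was later dropped from QUIC experiments, so treat this as the concept only):

```python
def parity(packets):
    """XOR all packets together to form one FEC parity packet.
    Assumes equal-length packets, like RAID parity across disks."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

def recover(survivors, fec):
    """Rebuild a single missing packet: XORing the surviving packets
    with the parity packet cancels everything except the lost one."""
    return parity(survivors + [fec])
```

With one parity packet per group, any single loss in that group is repaired locally by the receiver, with zero extra round trips.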

Lastly, since QUIC doesn’t rely on TCP, it isn’t bound to sessions defined by specific source and destination IPs and port numbers the way TCP is. A QUIC connection is based on a connection ID that has nothing to do with IP addresses or port numbers. This means that even if your IP address changes, or your firewall fails over to a backup unit, your QUIC session can remain active without having to set up a new one. With TCP and today’s firewalls this is very hard to do, and it certainly can’t be done if IP addresses change.
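
A toy server-side table keyed on the connection ID alone shows why an address change doesn’t kill the session. This is just the bookkeeping idea, not real QUIC (which authenticates packets cryptographically); the names are invented:

```python
sessions = {}

def handle_packet(connection_id, source_addr, payload):
    """Look up the session by connection ID alone. The source address
    may change (NAT rebinding, failover, network switch) without
    breaking the session; we just update where replies should go."""
    session = sessions.setdefault(connection_id, {"data": []})
    session["addr"] = source_addr
    session["data"].append(payload)
    return session
```

Contrast with TCP, where the session key is effectively the (source IP, source port, destination IP, destination port) tuple, so a new source IP means a brand-new connection.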

The downside to QUIC is that its header and transport information is encrypted. This means that QUIC sessions are essentially unreadable to firewalls. There will be no NGFW that can block Facebook chat while allowing you to view Facebook pages. There won’t even be any packet inspection available to troubleshoot issues. Any packet investigation will have to be done at the endpoints, and will probably still require new tools. This is ugly, but Google maintains that once firewalls and other middle devices start looking at a protocol’s headers, they start providing blocking and forwarding features that rely on those headers, making it difficult to upgrade QUIC later without breaking certain firewall implementations. This is a good point, but it seems to me that troubleshooting, packet capture, and the potential for NGFWs to block certain aspects of QUIC matter more than the risk of new QUIC versions introducing breaking changes. Even if they do introduce such changes, device vendors will have to upgrade their software. That is certainly a slower process, but it seems like a fair middle ground to me. The issue is also mitigated if Google gets QUIC right out of the gate, rather than making major changes to the protocol down the road.

Keep an eye out for QUIC. It seems like it’ll solve many of our current protocol limitations, but it could also disrupt the burgeoning NGFW market. If Cisco raised major concerns over HTTP/2’s mandatory encryption, just watch the fireworks when the NGFW vendors have to deal with QUIC.

QUIC is very new, so don’t expect to see anything happen all that, well, quick. However, Google did an amazing job with SPDY and you can expect that QUIC will be no different.

//cc: +Roberto Peon