Minimizing ossification risk is everyone’s responsibility

In the 1990s, the world's software engineers scrambled to adapt to the terrifying idea that the year might not start with "19". This so-called Y2K bug is one of the most famous examples of ossification: the assumption that because something doesn't change, or hasn't changed in a long while, it never will.

Y2K is a particularly egregious example because the change was easily foreseeable, but these risks are also inherent in the capabilities we offer as an internet infrastructure provider. Because we offer our customers low-level access to the internet’s protocols, we need to collectively ensure that we don’t unintentionally harm the internet’s future. Let’s explore the role we all play in this together.

Evolution makes the internet successful

The internet is remarkably robust. One of the reasons networking protocols that were designed in the 1970s, 80s, and 90s are still so useful is that they were designed for evolution. Their designers anticipated future change and intentionally included ways to add new features and change aspects of the protocol over time, without breaking the internet.

This is critical, because it’s impossible to upgrade everyone on the internet at the same time; it needs to be possible to introduce changes gradually, without harming communication where only one party understands the change. Usually, this is done through extensibility and versioning mechanisms. Evolvability has allowed us to introduce IPv6 (slowly) as a response to IPv4 address exhaustion. It enables new versions of TLS to be introduced to improve security on the web and in other applications. It also allowed HTTP/2 to improve performance for the web, and for new features to be introduced using headers, methods and status codes. 

Ossification prevents the internet from evolving

When applications, network devices, or other parts of the internet constrain the use of protocol versioning and extensibility, the resulting ossification means that the “joints” of the protocols that provide flexibility are being “rusted” into place, so that they can’t be moved anymore. This often happens when an implementer or user of the protocol assumes that it won’t change. 

For example, if a Web Application Firewall (WAF) were to deny any request that had a header whose value contains a string like target=value, it would be just a little more difficult for browsers to introduce new headers. Seems unlikely? It’s already happened.
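To make the risk concrete, here is a sketch (in Python, not any real WAF's rule language) of what such an overly broad rule looks like. The rule and the header name are invented for illustration; the point is that a pattern match written against today's traffic silently blocks tomorrow's.

```python
import re

# Hypothetical, overly broad WAF rule: reject any request carrying a header
# value that looks like "key=value", on the assumption that no legitimate
# header will ever contain such a string.
SUSPICIOUS = re.compile(r"\w+=\w+")

def waf_allows(headers: dict) -> bool:
    # Allow the request only if no header value matches the pattern.
    return not any(SUSPICIOUS.search(v) for v in headers.values())

print(waf_allows({"Accept": "text/html"}))                  # True: passes today
print(waf_allows({"New-Browser-Feature": "mode=enabled"}))  # False: a future
# header using key=value syntax (as many structured headers do) is blocked
```

The rule "works" against current traffic, which is exactly why the assumption baked into it goes unnoticed until a browser ships a new header.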

TLS 1.3 is another casualty of ossification; it specifies that the version string in the protocol header be set to the value for TLS 1.2, because some servers didn’t anticipate a version higher than what they supported. This “version intolerance” causes complexity and risk in deploying new protocols.
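A simplified sketch of the workaround that TLS 1.3 (RFC 8446) adopted: the ClientHello's legacy_version field is frozen at 0x0303 (the TLS 1.2 value), and the real maximum version is advertised in the supported_versions extension, which version-intolerant servers never look at. The `negotiated` helper below is an illustration, not the actual handshake logic.

```python
# legacy_version stays at the TLS 1.2 value on the wire, forever.
LEGACY_VERSION = bytes([0x03, 0x03])
# The real version (0x0304 = TLS 1.3) rides in the supported_versions
# extension, which old servers simply ignore.
SUPPORTED_VERSIONS = [bytes([0x03, 0x04])]

def negotiated(legacy: bytes, supported: list) -> bytes:
    # A 1.3-aware server prefers the extension; an old server, which never
    # reads the extension, still sees a version number it understands.
    return max(supported) if supported else legacy

print(negotiated(LEGACY_VERSION, SUPPORTED_VERSIONS).hex())  # 0304
print(negotiated(LEGACY_VERSION, []).hex())                  # 0303
```

In effect, the version field that was designed for evolution had to be abandoned in place, and a new extension invented, because intolerant servers had rusted it shut.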

When ossification happens, it makes it more difficult to introduce new features to the protocol. Eventually, a protocol becomes too ossified and needs to be replaced. This is what’s happening now to TCP; so many network devices try to “help” TCP connection performance by making assumptions about how it works (e.g., so-called “WAN accelerators”) that a whole new approach — QUIC — was necessary. These “helpful” boxes were often hurting performance and reliability, because their designers weren’t talking to the people trying to introduce changes or optimize connections from the endpoints.

Managing ossification risk with Fastly

Fastly exposes low-level protocol details to our customers’ code in VCL and Compute@Edge. This is a design choice; exposing this information allows you to build more capable systems on top of Fastly. However, we ask our customers to understand and avoid making decisions and assumptions that will inadvertently ossify these protocols, just like we do internally. 

In particular, assumptions about how clients behave or what “normal” traffic looks like are risks for ossification. While your assumption might work out in the short term, future changes (e.g. by browsers) can invalidate those assumptions, causing your application to fail in unpredictable ways.
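As an illustration of this kind of assumption, consider a hypothetical client classifier built as an allow-list of the browsers that exist today (the tokens and User-Agent strings below are invented for the example):

```python
# Brittle classifier: assumes today's list of browser tokens is complete.
KNOWN_BROWSERS = ("Chrome/", "Firefox/", "Safari/")

def looks_like_browser(user_agent: str) -> bool:
    return any(token in user_agent for token in KNOWN_BROWSERS)

print(looks_like_browser("Mozilla/5.0 ... Chrome/120.0"))  # True
print(looks_like_browser("Mozilla/5.0 ... Newcomer/1.0"))  # False: a genuinely
# new browser is misclassified, and may be blocked or served a degraded page
```

The classifier is correct for every client its author has seen, which is precisely why the failure only appears later, when the ecosystem changes.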

Often, ossification risk is encountered when building things like client classifiers and WAFs that change how your site works based upon protocol specifics. If you use our WAF or our built-in device detection mechanisms (e.g., client.class.*, client.platform.*, and client.display.* in VCL), we manage much of the ossification risk for you. However, if you build one of these capabilities yourself, you may unintentionally contribute to internet ossification.

In particular, rejecting requests based upon low-level protocol metadata like the HTTP version, HTTP request headers, and TLS, TCP and QUIC connection information can increase ossification risk. That risk is compounded if there isn’t an effective feedback channel, because when those who are affected by a limitation can’t get in touch with you, they work around the problem — potentially making that workaround a permanent part of the internet.

Fortunately, it’s possible to build these kinds of features and products in a responsible way. You can do this by:

  • Continuously testing with a wide variety of early-release browsers and other HTTP clients, to catch interoperability issues early.

  • Regularly checking browser bug queues for mentions of your product.

  • Making sure that your product is clearly identified in the protocol, so that you can be easily found.

  • Making sure that browser and other client developers can get in touch with you when necessary — e.g., using a support channel, bug queue or dedicated email address.

  • Tracking development of the relevant protocols for changes that might violate your assumptions. Good places to start include the IETF HTTP Working Group and the IETF TLS Working Group. There is also a dedicated http-grease list for discussion and notification of ossification-related issues in that protocol.
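One concrete way to follow the third point above — identifying your product in the protocol — is for an intermediary to add itself to the Via header (RFC 9110, Section 7.6.3). A minimal sketch; the product token "ExampleWAF/2.1" is invented for illustration:

```python
# Append this intermediary's product token to the response's Via header,
# so client developers who hit an interoperability problem can find us.
def add_via(headers: dict, product: str = "ExampleWAF/2.1") -> dict:
    prior = headers.get("Via")
    headers["Via"] = f"{prior}, 1.1 {product}" if prior else f"1.1 {product}"
    return headers

print(add_via({"Host": "example.com"})["Via"])  # 1.1 ExampleWAF/2.1
```

Paired with a working contact channel, this turns an anonymous middlebox into something a browser developer can diagnose and reach, rather than route around.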

Preventing ossification requires all actors on the internet to work together. While there will always be some sites and programs that misuse protocol extension points, minimizing them helps assure that the internet can continue to smoothly evolve and meet future challenges.

Mark Nottingham
Senior Principal Engineer

Mark Nottingham has helped to define and develop the web and the internet since the late 90s. He's written, edited, or substantially contributed to more than 30 IETF RFCs and W3C recommendations about topics like HTTP, caching, linking, web architecture, privacy, and security.

As chair of the HTTP Working Group since 2007, he has overseen the evolution of the foundational protocol of the web, notably including HTTP/2. As chair of the QUIC Working Group, he oversaw the creation of HTTP/3 and the evolution of internet transport. He has also served in internet governance bodies, including the Internet Architecture Board and the W3C Technical Architecture Group.

Currently, he’s part of the Office of the CTO at Fastly, and studying Communications Law at Melbourne Law School. Mark is married to Anitra with two sons, Charlie and Bennet. They live in Melbourne, Australia.