Openness in Internet Standards: Necessary, but Insufficient
Friday, 5 July 2024
This post is part of a series, The Nature of Internet Standards.
The phrase ‘Open Standards’ is widely used but not well-understood, to the point that the Open Source Initiative calls it ‘a feel-good term with no actual technical meaning.’
As we’ll see, that’s an overstatement, but there is still significant confusion about, and variation in, what ‘open standard’ means in practice. Let’s take a look at what openness in standards is, with a focus on whether and how it helps to legitimise the design and maintenance of the Internet.
Defining Open Standards
First, let’s review what some governments think open means in the context of standards. The phrase comes up in a number of legal documents, mostly focused on government procurement and trade.
In the US, OMB Circular A-119 fills that role, and is also referenced in some aspects of competition law. It says openness in standards means:
The procedures or processes used are open to interested parties. Such parties are provided meaningful opportunities to participate in standards development on a non-discriminatory basis. The procedures or processes for participating in standards development and for developing the standard are transparent.
The corresponding document in Europe is Regulation 1025/2012, which says a standard is open when
the technical specifications were developed on the basis of open decision-making accessible to all interested parties in the market or markets affected by those technical specifications […]
In both cases, openness refers to the process used to produce a standard: it needs to be open to all potentially interested parties. Note that these definitions don’t say anything about the decision-making process itself, just access to it.
In contrast, the UK Government’s Open Standards Principles focuses on availability in its definition of open standards:
Open standards give users permission to copy, distribute and use technology freely or at low cost.
In practice, these views don’t really conflict: the procurement guidance from the US and EU also talks about availability, and the UK guidelines cover ‘fair and transparent processes.’ They do, however, use ‘open standard’ to mean different things, which can cause confusion.
There are plenty more examples from legislation around the world that also touch on these two themes. Perhaps pragmatically, the Internet Society avoids the conflict and defines them as having both attributes:
Open standards are publicly available and developed via processes that are transparent and open to broad participation. In contrast, proprietary standards are privately owned by one or more entities that control their distribution and access.
But what do these things actually mean in practice for Internet standards?
1. Availability as Openness
Let’s start with availability, which raises the question: available for what purpose? The UK definition lists ‘copy, distribute, and use’, and I’d add another – permission to change the standard.
The first sense of a standard’s availability is straightforward: what do you need to do to get a copy of it, and can you give it to other people? This isn’t a small thing; many standards bodies charge for their products, and substantial fees can be a barrier not only to implementation, but also independent review. Thankfully, it’s long been a norm that Internet and Web standards are free to obtain over the Internet itself, and can be freely distributed.
Second, there’s the question of availability in the sense of implementing open standards – namely, do you need to license any patents to do so? The intricacies of patents in the IETF and W3C have some interesting properties, but that’s a topic that’s too big to cover here. Suffice it to say that both have measures in place to discourage ‘submarine’ patents from those who participate in the standards process.
With those meanings covered, let’s consider the ability to modify an Internet standard. If you want to publish a derivative of an IETF RFC or W3C Recommendation, you’ll need to get permission, because both organisations control the rights to their works, albeit through very different mechanisms.
In the IETF, the original authors of the RFC retain copyright, giving a license to the IETF Trust to allow it to be published. In theory, that allows the authors to publish a derivative work on their own or give someone else a license to do so. In practice this doesn’t happen, because most RFCs are collaborations between much larger groups than just the authors, and getting sufficient permission sorted out is impractical. Recently, there was a proposal to allow modification of IETF RFCs, but that seems to have caused enough discomfort in the community to make its adoption unlikely.
W3C, meanwhile, requires authors to transfer copyright to the Consortium. By default, its document license also doesn’t allow modifications, much like the IETF. Interestingly, however, some W3C documents have been published under a more permissive license that does allow ‘forking’ the specification. Most famously, this was used to shift the HTML specification out of the W3C’s hands and into the WHATWG.
How, though, does the ability to modify a standard impact its openness?
Think of copyright as a centralising force: the entity that controls the definition of the standard controls its future, because it holds a monopoly over the standard. As I’ve written before, centralisation can be used for good – especially if it’s to encourage interoperability, and even more so if it’s used to gate-keep in the interest of the common good with appropriate governance. When a standards body is well-run, responsive to the community using its work, and financially healthy, I’d argue it’s probably filling this role well.
If it isn’t one or more of these things, however, that monopoly becomes a liability. For example, if the standards body is captured by malevolent interests or simply loses financial viability, it can put its community of users in an awkward position indeed. Even absent those extreme circumstances, locking a standard into one venue creates a moral hazard: that the standard will remain there not because that venue is the best steward for it, but because it controls the copyright.
2. Openness of Standards Processes
The other sense of ‘openness’ in standards pertains to the process used to produce them. How open a process is perceived to be can result in the SDO’s legitimacy – and its monopoly over the standard – being either bolstered or questioned.
On the face of it, the mechanism is straightforward: a standards process that is ‘accessible to all interested parties’ enhances input legitimacy by incorporating more diverse views into the work. However, it’s important to inspect some assumptions that are commonly made about the impact of this kind of openness.
First, just because someone has the theoretical ability to participate, it doesn’t mean that they actually can. Standards work has a notoriously steep learning curve; being effective requires great technical expertise, significant time, and frequent international travel to build influence, relationships and understanding. Yes, SDOs use online tools like mailing lists, videoconferencing, and GitHub to allow remote participation, but they are a poor substitute for face-to-face interaction, hallway discussions and sharing a meal (and, often, a drink). And, even people who follow Internet standards full time aren’t aware of every development in every specification, because there’s simply too much going on.
Together, this means that the number of people actually paying attention to a particular standards development can be quite small, unless it captures the broader imagination. It also means that only those with sufficient incentive to invest resources will participate in a long-term effort.
Second, even those who show up might not have influence over the outcome. While standards decisions might be based upon consensus, there are many pitfalls in its application, and the combination of a large body of standards work and few experts (per above) often means that a small group of specialised experts ‘owns’ each established area of interest, acting as gatekeepers to change. If you have an ‘outsider’ idea that runs counter to the thinking of those experts, you’re unlikely to get it adopted.
Third, standards bodies are not, and fundamentally cannot be, representative in a democratic sense. They are not accountable to an electorate, there is no demos, and thus they can claim no meaningful mandate from whoever does show up. At most, openness creates a presumption that those with the knowledge, resources, interest, and dedication had the opportunity to show up.
In combination, these factors constrain the input legitimacy gained by openness so much as to make it negligible. In practice, standards organisations derive most of their legitimacy from throughput (e.g., checks and balances in their processes) and output (the success of what they actually produce).
That’s not to minimise the importance of diversity and external engagement in SDOs. Efforts both to onboard new people into these cultures and to reach out to those affected by their work are critical, in my opinion (so much so that I wrote a whole RFC about it). However, standards bodies should never consider themselves to have democratic legitimacy, think that mere openness is enough to justify a decision, or believe that their processes don’t require regular scrutiny and improvement.
An Aside on the ‘M’ Word
A word that often comes up in conjunction with ‘open standards’ is ‘multistakeholder.’
This Rorschach test of a word has bugged me for a while – it means so many things to so many people. So much so, in fact, that I recently went to a conference on multistakeholderism in Internet governance to delve into why this term is so often used but so rarely defined.
My suspicions were confirmed: this is a word that means very little, except when you use it in the context of its opposite: multilateral governance (for example, through the United Nations). Really, multistakeholder just means not multilateral.
In particular, being characterised as multistakeholder does not imply that there are specific classes of ‘stakeholders’, or that they have specific rights or representation in the process. So while Internet standards are multistakeholder in the sense that they’re not government-led, that says nothing about their nature beyond that fact.
If this topic is interesting to you, I highly recommend reading the transcript of Milton Mueller’s keynote from that conference.
Necessary, but Insufficient
All of this leads to the conclusion that openness of process is clearly necessary to legitimise the work of SDOs and to help ensure that anti-competitive behaviour doesn’t arise. However, it isn’t sufficient to ensure good governance.
Furthermore, it’s reasonable to say that standards bodies that choose to keep a monopoly on the specifications themselves (thereby withholding that dimension of openness) necessarily subject themselves to increased scrutiny regarding the quality of their governance. What good governance looks like in Internet standards is a theme that I’ll return to.