Technical Standards Bodies are Regulators
Wednesday, 1 November 2023
This post is part of a series, The Nature of Internet Standards.
There are lots of ways to view what Internet standards bodies like the IETF and W3C do. They are horizontal agreements between competitors as well as mission-driven public-good charities. One might believe they’re the homes of innovation that brought us the Internet and the Web, or that they’re boring, ineffective and slow committee talking shops. Some see them as vibrant, diverse communities, while others believe they’re completely dominated by big tech.
My preferred way of thinking of them these days, however, is as regulators. Just like the FTC in the US, the eSafety Commissioner in Australia, or the ICO in the UK, Standards Developing Organizations (SDOs) have a fundamentally regulatory aspect to them, and considering them in this way clarifies how they relate to Internet governance.
In particular, it helps to understand what kind of regulator they are, what tools they use, and the nature of the regime they operate within.
A specific kind of regulator
When most people think of a regulator, they assume it’s always state-backed; sovereign power (and hopefully a democratic mandate) imbues the regulator with legitimacy. As Julia Black put it back in 2002:
The core understanding that many have of ‘regulation’ is some form of ‘command and control’ (CAC) regulation: regulation by the state through the use of legal rules backed by (often criminal) sanctions. ‘CAC’ has also however become shorthand to denote all that can be bad about regulation, including poorly targeted rules, rigidity, ossification, under- or over-enforcement, and unintended consequences.
Modern conceptions of regulation are much more expansive (or ‘decentered’), encompassing not only public (government) regulation but also regulation by private actors. For example, lex mercatoria – commercial law and customs followed by merchants – goes back to at least medieval times, and is now considered a kind of regulation. States regularly defer to such ‘soft law’, and while it can always be overridden in a single jurisdiction by legal power, policymakers have strong motivations to avoid over-regulating areas that are capable of self-regulation.
Further complicating Internet regulation is its global span, which means that more than one state is involved. Transnational Private Regulators (TPRs) are non-governmental regulators that work across national boundaries.
Internet SDOs are often used as examples of TPRs. Other common examples include organisations like the Forest Stewardship Council, the Fairtrade Foundation, the International Accounting Standards Board, and the ISEAL Alliance.
Cafaggi identified a few factors that have “caused and helped to accelerate the emergence of TPRs”:
- Because “many goods and services today transcend national boundaries [they] can hardly be regulated by national regulations. This is particularly the case with global public goods […] for which international regulatory co-operation is substantially needed to avoid a ‘race to the bottom’ between domestic regulations.” This is very much the case for the Internet.
- “There are markets that exhibit fast-changing dynamics [that are] difficult for public policy makers to try to regulate[.] In particular, this is the case of high-tech and knowledge-intensive markets [which] effectively leads policymakers to rely on private parties, at least for the definition of implementing measures and technical specifications.”
- Finally, “there are policy problems that inevitably require heavy reliance on the expertise of private actors, [who] are the most informed parties, or the players in the best position to deal with a given failure, or simply the only parties holding control over central essential resources.”
Importantly, the legitimacy (and therefore authority) of TPRs isn’t based on democracy – inherently, they have no demos, so they cannot be democratic in the sense that a state is. Instead, they draw on other sources of legitimacy, including their input (who participates), their output (what impact they have), and their throughput (what processes they use to assure fair and good outcomes).
With unique regulatory tools
The regulatory tools available to Internet SDOs are specific and limited – they write down technical specifications that, on a good day, get reflected in code.
This is ‘architectural regulation’, according to Lessig. It sits alongside other modalities of regulation like law, norms, and markets. Where the FTC uses law, the IETF uses architecture – shaping behaviour by limiting what is possible in the world, rather than imposing ex post consequences.
While much of regulatory theory and practice is taken up with issues like monitoring and enforcement, architectural regulation doesn’t need those tasks to be performed; the best approximation is conformance testing (which the IETF and W3C don’t formally do anyway; they certainly don’t attempt certification).
Another interesting aspect of this form of regulation is its quasi-voluntary nature. Internet standards are optional to adopt and implement; no one is forcing you to do so. However, if they’re successful and widely adopted, they do constrain your behaviour while you’re on the Internet, because everyone else is following them. In that sense, they are mandatory.
Architectural regulation of the Internet is also constrained in how it can introduce change. While a law can be repealed or overridden by a newer law, Internet protocol standards have to consider the dependencies that people already have on infrastructure; we can’t have a ‘flag day’ where we change how the Internet works. Instead, we have to carefully extend and evolve it, working within the constraints of what people already do, because once code is deployed, we lose control.
These features provide interesting advantages to SDOs as regulators. While one might see a non-state regulator without an enforcement problem as too powerful, standards’ lack of binding force means that an SDO can’t just impose its will; its product has to be proven by market adoption. A successful, widely adopted standard is (qualified) proof of cooperation, and thus gains legitimacy at the same time as it becomes binding.
In a large regulatory space
If we step back from this, we can now consider the context of this regulation – Internet Governance overall. Plenty has been written about this that I won’t attempt to summarise, but there are a couple of aspects that I’d like to point out.
First of all, there are (obviously) other regulators present too – legal regulators especially (from various governments around the world), but also others using various combinations of the regulatory modalities.
Second, Internet Governance is polycentric (also referred to as ‘regulatory pluralism’) – there is no hierarchy and no regulator can tell another what to do. There are many sources of power (of various natures) that interact in different ways – sometimes reinforcing each other, occasionally conflicting.
Lessig talks about this (with ‘constraints’ being a synonym for ‘regulators’):
The constraints are distinct, yet they are plainly interdependent. Each can support or oppose the others. Technologies can undermine norms and laws; they can also support them. Some constraints make others possible; others make some impossible. Constraints work together, though they function differently and the effect of each is distinct. Norms constrain through the stigma that a community imposes; markets constrain through the price that they exact; architectures constrain through the physical burdens they impose; and law constrains through the punishment it threatens.
Third, the regulatory space is also fragmented. Information, authority, responsibility, and capacity to regulate are dispersed unevenly across multiple regulators. As Scott points out, ‘[r]elations can be characterized as complex, dynamic horizontal relations of negotiated interdependence.’
This means that no regulator in the space is truly independent. Standards have to operate in the legal contexts where they’re deployed; laws need to take the reality of the deployed Internet into account. Each party can act unilaterally, and might even meet their immediate goals, but the reaction to imprudent actions might be worse than the original issue they were trying to address.
Overall, this is healthy. Power is not concentrated in any one institution. States are able to claim sovereignty over what happens inside their borders, but if they differ too much from the global norm, they put at risk the economic and cultural benefits of being part of the global Internet.
What does this mean for the IETF and W3C?
Accepting the regulatory nature of SDOs leads to a few conclusions.
First, the IETF and W3C need to coordinate more closely with other regulators – especially national regulators who have their sights set on taming particular aspects of the Internet.
That doesn’t mean that SDOs should defer to national regulators – far from it. I’ve heard more than a few conversations where technical people think they need to implement the law in protocols. This is not the case, because laws are generally limited to a specific territory; countries can’t regulate the entire Internet by themselves. Furthermore, laws typically don’t apply to the standards themselves; instead, they apply to their use.
It doesn’t even mean that standards work should block on getting input from policymakers (just as policymakers don’t block lawmaking on feedback from SDOs!); doing so would introduce problematic incentives, muddy the technical decision-making process, and remove many of the advantages of private regulation.
It does mean that technical discussions should be informed by ‘policy considerations’, even if they’re ultimately dismissed. Understanding how legal regulators see the Internet, what their goals are, and how they attempt to use the regulatory tools in their hands helps technical regulators evaluate what additional constraints are likely to be layered onto the Internet. That might result in alignment between technical regulation and legal regulation, but this is emphatically not a requirement – in some cases, they might conflict.
Those conflicts should be avoided when they’re unnecessary, so SDOs need to do their part to inform legal regulators as well, particularly when those regulators’ proposals have an impact on the architecture.
This is not a new perspective – there has been considerable discussion in both the IETF and the W3C recently about ‘policy engagement.’ What’s different here is the emphasis on being a peer of other regulators, rather than automatically subject to them. That is fundamentally different from the relationship that most corporate policy units have with regulators, for example.
Second, this view reinforces the notion that regulation by technical standards bodies has very specific sources of legitimacy – the technical expertise that it embodies, and the demonstrated success of its output. That legitimacy might be enhanced by the unique global scope of these bodies – unlike national regulators, they are responsible for the entire Web and Internet.
That suggests the positions taken by these bodies need to be focused on their areas of expertise, rather than trying to draw on other sources of legitimacy (for example, pseudo-democratic ones, or notions of openness, although the latter does enhance their legitimacy). This is well-recognised in the IETF, where arguments like ‘Pervasive Monitoring Is an Attack’ are couched in technical terms, not value-driven ones.
Third, the polycentric and fragmented nature of the regulatory space suggests that it’s entirely appropriate for architectural regulators like SDOs to focus on areas where their tools are most effective.
For example, work on the HTTP Cookie specification has been moving towards eradicating third-party cookies for some time, because they’re horrible for privacy. Some point out that this doesn’t address the privacy issues with first-party cookies – a site you’re interacting with can still track your activity, profile you, and so on.
That doesn’t mean that we should back away from regulating third-party cookies with architecture; they’re extremely amenable to this form of regulation (because of the user agency of the browser), and legal regulation of third-party cookies has proven difficult. On the other hand, regulating first-party privacy abuses on the Web with architecture is hard – if you interact with someone, you’re giving them your data – but legal regulation of how entities handle first-party data is on much firmer ground (provided there is a political will to do so).
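To make that contrast a little more concrete, here is a minimal, purely illustrative sketch (not from the specification itself) of the kind of mechanism involved: the SameSite cookie attribute tells the browser whether a cookie may travel on cross-site requests, so the constraint is enforced by the user agent rather than by monitoring or legal sanction. The helper function, names, and values below are hypothetical and simplified.

```python
# Illustrative sketch of architectural regulation in practice: the SameSite
# attribute on a Set-Cookie header determines whether the browser will send
# the cookie on cross-site ("third-party") requests. Enforcement lives in
# the user agent, not in any monitoring or ex post sanction.
# (This helper is hypothetical; attribute handling is simplified.)

def set_cookie_header(name: str, value: str, cross_site: bool) -> str:
    """Build a Set-Cookie header line with illustrative attribute choices."""
    attributes = ["Path=/", "Secure", "HttpOnly"]
    if cross_site:
        # A cookie must opt in explicitly to be usable in third-party
        # contexts at all, and browsers are steadily narrowing even that.
        attributes.append("SameSite=None")
    else:
        # Keeps the cookie effectively first-party: it is not sent on
        # cross-site subresource requests under Lax.
        attributes.append("SameSite=Lax")
    return f"Set-Cookie: {name}={value}; " + "; ".join(attributes)


print(set_cookie_header("session", "abc123", cross_site=False))
print(set_cookie_header("tracker", "xyz789", cross_site=True))
```

The point of the sketch is the locus of control: a server can ask for cross-site behaviour, but whether the cookie is actually sent is decided by the browser, acting on the user’s behalf – which is exactly why third-party cookies are so amenable to architectural regulation.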