“There are no standards police.”
In other words, even if you do consider Internet standards to be a regulatory force, there is no enforcement mechanism. One of their key characteristics is that they’re voluntary. No one forces you to adopt them. No one can penalise you for violating a MUST; you have to want to conform.
Of course, you can still feel compelled to do so. If an interoperability standard gets broad adoption and everyone you want to communicate with expects you to honour it, you don’t have many options. For example, if you want to have a Web site, you need to interoperate with browsers; most of the time, they write down what they do in standards documents, and so you’ll need to conform to them.
But that’s the successful path. For every HTTP or HTML or TCP, there are hundreds of IETF RFCs, W3C Recommendations, and other standards documents that haven’t caught on – presumably much to their authors’ dismay. Adopting and using those documents was optional, and the market spoke: there wasn’t interest.
This aspect of the Internet’s standards has been critical to its success. If people were forced to adopt a specification just because some body had blessed it, it would place immense pressure on whatever process was used to create it. The stakes would be high because the future of the Internet would be on the line: businesses would play dirty; trolls would try to subvert the outcomes; governments would try to steer the results.
Of course, all of those things already happen in Internet standards; it’s just that the stakes are much lower.
So, voluntary adoption is a proving function – it means that not all of the weight of getting things right is on the standardisation process, and that process can be lighter than, for example, that used by governments or the United Nations (I’ll get back to that in a minute). That’s important, because it turns out that it’s already incredibly difficult to create useful, successful, secure, private, performant, scalable, architecturally aligned technical specifications that change how the Internet works within all of the other natural constraints encountered; it’s threading-the-needle kind of stuff. And we need to be able to fail.
Historically, voluntary standards have been encouraged by governments in their purchasing and competition policies – for example, OMB Circular A-119, EU Regulation 1025/2012, and the EC guidelines on horizontal agreements. Standards bodies are a ‘safe space’ where competitors can cooperate without risking competition enforcement, so long as they follow a set of rules – and one of the biggest rules is that adoption should be voluntary, not mandatory or coerced (at least by those setting the standard).
But it’s no secret that the policy landscape for the Internet has changed drastically. Now, there is increasing interest in using interoperability standards as a mechanism to steer the Internet. Academics are diving deep into the cultures and mechanisms of technical standards. Civil society folks are coming to technical standards bodies and trying to figure out how to incorporate human rights goals. Regulation is coming, and policy experts are trying to figure out how to get involved too.
This influx has caused concern that these relative newcomers are mistakenly focusing on standards as a locus of power when, in fact, the power is expressed in the adoption of a standardised technology. For example, Geoff Huston recently wrote an opinion piece along these lines.
I have no doubt that some still come to the IETF and similar bodies with such misapprehensions; we still have to remind people that ‘there are no standards police’ on a regular basis. However, I suspect that at least the policy people (including regulators) largely understand that it’s not that simple.
That’s because modern regulators are very aware that there are many influences on a regulatory space. They want to learn about the other forces acting on their target, as well as persuade and inform. Similarly, those who are involved in policymaking are intensely aware of the diffuse nature of power. In short, their world view is more sophisticated than people give them credit for.
(All that said, I’m still interested and a bit nervous to see what the Global Digital Compact contains when it becomes public.)
Another concern is that governments might try to influence Internet standards to suit their purposes, and then exert pressure to make the results mandatory – short circuiting the proving function of voluntary standards.
Avoiding that requires separating the legal requirement from the standards effort, to give the latter a chance to fail. For example, MIMI may or may not succeed in satisfying the DMA requirement for messaging interop. It is an attempt to establish voluntary standards that, if successful in the market, could satisfy legal regulatory requirements without preselecting a standards venue.
Of course, that pattern is not new – for example, accessibility work in the W3C is the basis of many regulatory requirements now, but wasn’t considered (AFAIK) by regulators until many years after its establishment.
Because of the newly intense focus on regulating technology, there’s likely to be increasing pressure on such efforts: both the pace and volume of standardisation will need to increase to meet the requirements that standards bodies attempt to address. I suspect aligning the timelines and risk appetites of standards bodies and regulators will be one of the biggest challenges we’ll face if we want more successes.
So right now I believe the best way forward is to create ‘rails’ for interactions with legal regulators – e.g., improved communication, aligned expectations, and ways for an effort to be declined or to fail without disastrous consequences. Doing that will require some capacity building on the parts of standards bodies, but no fundamental changes to their models or decision-making processes.
This approach will not address everything. There are some areas where at least some regulators and the Internet standards community are unlikely to agree. Standards-based interoperability may not be realistically achievable in some instances, because of how entrenched a proprietary solution is. Decentralising a proprietary solution faces many pitfalls, and may be completely at odds with a centralised solution that already has broad adoption. And, most fundamentally, parties that are not inclined to cooperate can easily subvert a voluntary consensus process.
However, if things are arranged so that when conforming to a voluntary consensus standard that has seen wide review and market adoption is considered to be prima facie evidence of conformance to a regulatory requirement, perhaps we do sometimes have standards police, in the sense that legal requirements can be used to help kickstart standards-based interoperability where it otherwise wouldn’t get a chance to form.
It’s no secret that most people have been increasingly concerned about Internet centralization over the last decade or so. Having one party (or a small number of them) with a choke hold over any important part of the Internet is counter to its nature: as a ‘network of networks’, the Internet is about fostering relationships between peers, not allowing power to accrue to a few.
As I’ve discussed previously, Internet standards bodies (like the IETF and W3C) can be seen as a kind of regulator, in that they constrain the behaviour of others. So it’s natural to wonder whether they can help avoid or mitigate Internet centralization.
I started drafting a document that explored these issues when I was a member of the Internet Architecture Board. That eventually became draft-nottingham-avoiding-internet-centralization, which became an Independent Stream RFC today.
But it was a long journey. I started this work optimistic that standards could make a difference, in part because Internet standards bodies are (among many things) communities of people who are deeply invested in the success of the Internet, with a set of shared end user-focused values.
That optimism was quickly tempered. After digging into the mechanisms that we have available, the way that the markets work, and the incentives on the various actors, it became apparent that it was unrealistic to expect that standards documents – which of course don’t have any intrinsic power or authority if no one implements them – are up to the task of controlling centralization.
Furthermore, centralization is inherently difficult to eradicate: while you can reduce or remove some forms of it, it has a habit of popping up elsewhere.
That doesn’t mean that standards bodies should ignore centralization, or that there isn’t anything they can do to improve the state of the world regarding it (the RFC explores several); rather, that we should not expect standards to be sufficient to effectively address it on their own.
You can read the RFC for the full details. It covers what centralization is, how it can be both beneficial and harmful, the decentralization strategies we typically use to control it, and finally what Internet standards bodies can do in relation to it.
One final note: I’d be much less satisfied with the result if I hadn’t had the excellent reviews that Eliot Lear (the Independent Submissions Editor) sourced from Geoff Huston and Milton Mueller. Many thanks to them and everyone else who contributed.
If you run an online service that’s accessible to Australians, these Standards will apply to you. Of course, if you don’t live here, don’t do business here, and don’t want to come here, you can probably ignore them.
Assuming you do fall into one of those buckets, this post tries to walk through the implications, as a list of questions you’ll need to ask yourself.
I’m going to try to focus on the practical implications, rather than “showing my work” by deep-diving into the text of the standards and supporting legislation. This is based only upon my reading of the documents and a miniscule dollop of legal education; if there are things that I get wrong, corrections and suggestions are gladly taken. Note that this is not legal advice, and the Standards might change before they’re registered.
The first question to answer is whether your service is covered by the Online Safety (Designated Internet Services – Class 1A and Class 1B Material) Industry Standards 2024.
The short answer is “yes, even that one.”
A Designated Internet Service (DIS) is one that allows “end-users to access material using an Internet carriage service.” This is a very broad definition that explicitly applies to Web sites. For simplicity, the remainder of this article will assume your service is a Web site, even though other information services can be a DIS.
In a nutshell, if “none of the material on the service is accessible to, or delivered to, one or more end-users in Australia”, your site is exempt. Otherwise, it’s covered (unless one of the other Codes or Standards takes precedence; see below).
So whether you’re Elon Musk or you have a personal Web site with no traffic, this standard applies to you, so long as it’s available to one Australian person – even if none actually visit. Don’t be fooled by “Industry” in the title. That default page that your Web server comes up with when your new Linux box boots for the first time? Covered. Note that it doesn’t even need to be on the public Internet; things like corporate Intranet sites are covered, as are content-free static sites like those used to park domains.
Given how broadly the legislation and standard are written, combined with how prevalent HTTP and similar protocols are on today’s Internet, it’s also reasonable to say that APIs are covered; there are no inherent restrictions on formats or protocols in the eSafety standards – in fact, the definition of material in the Act includes “data”.
So, to be safe, any server available on the Internet is covered by the eSafety scheme, so long as it can be accessed by Australians.
Assuming that your site is covered by the Standard, your next step is to figure out whether you need to perform a risk assessment.
Assuming that you’re not running a large commercial web site, a (ahem) “high impact” service (i.e., one that specialises in porn, violent content, and similar), or an AI-flavoured service, there are two interesting categories that might get you out of performing a risk assessment.
The first is a “pre-assessed general purpose DIS.” You can qualify for this if you don’t allow users in Australia to post any material (including comments), or if posting is “to review or provide information on products, services, or physical points of interest or locations made available on the service.” It’s also OK if they are “sharing […] with other end-users for a business, informational, or government service or support purpose.”1
Does it seem like your site qualifies? Not so fast; that only covers “pre-assessment.” A general purpose DIS is a
website or application that […] primarily provides information for business, commerce, charitable, professional, health, reporting news, scientific, educational, academic research, government, public service, emergency, or counselling and support service purposes.
Unless your site falls cleanly into one of those categories, you don’t have a general purpose DIS.2
The second is an “enterprise DIS.” This is a site where “the account holder […] is an organisation (and not an individual).” Basically, if your users are companies or other organisations and not individual people, you don’t have to do an assessment.
Assuming you need a risk assessment (spoiler: you probably do, to be safe), you
must formulate in writing a plan, and a methodology, for carrying out the assessment that ensure that the risks mentioned in subsection 8(1) in relation to the service are accurately assessed.
The risk referred to is that class 1A or class 1B material will be “generated or accessed by, or distributed by or to, end-users in Australia using the service.” Storage of such material is also included (even if it isn’t accessed).
To answer your next question, class 1A material is “child sexual exploitation material”, “pro-terror material”, or “extreme crime and violence material.” class 1B material is “crime and violence material” and “drug-related material.” There are long definitions of each of these kinds of material in the standard; I won’t repeat them here.
Your risk assessment must “undertake a forward-looking analysis” of what’s likely to change both inside and outside of your service, along with the impact of those changes. It’s also required to “specify the principal matters to be taken into account”, including eleven factors such as “the ages of end-users and likely end-users”, “safety by design guidance”, AI risks, terms of use, and so forth.
Your risk assessment has to be written down in detail. You must also “ensure that [it] is carried out by persons with the relevant skills, experience, and expertise” – although it’s not yet clear what that means in practice or how it will be enforced.3
Once you’ve done a risk assessment, you’ll have a risk profile – one of Tier 1, Tier 2, or Tier 3.
Let’s assume your site has no user-generated content, and you only upload very… normal… content – like this site.4 You’re likely to be Tier 3.
If so, congratulations! Your work is just about done. Sections 34, 40, and 41 of the Standard apply to you – basically, the eSafety Commissioner can demand that you provide them with your risk assessment and how you arrived at it. You also have to investigate complaints, and keep records.
If you’re not Tier 3 – for example, you blog about drugs or crime, or you allow user uploads or comments – there’s a whole slew of requirements you’ll need to conform to, which are well out of scope for this blog entry (since I’m mostly interested in the impact of regulation on small, non-commercial sites). Tip: get some professional help, quickly.
Keep in mind that we’ve gone through just one of the proposed Standards above. The other one is about e-mail and chat services, so if you run a mail server (of any flavour – maybe even on your infrastructure?), a chat server (e.g., Prosody, jabberd), or Mastodon server, buckle up.
There are also another set of Industry Codes that cover things like hosting services, app stores, social media, search engines, and operating systems, if you happen to provide one of those.
Keep in mind that if you change anything on your site that impacts risk (e.g., adding a comment form), you’ll need to re-assess your risk (and likely conform to new requirements for reporting, etc.).
There are a lot of small Internet services out there – there are a lot of IP addresses and ports, after all. I suspect many people running them will ignore these requirements – either because they don’t know about them, they think they’re too small, that the eSafety Commissioner won’t care about their site, or they’re willing to run the risk.
What is the risk, though?
Section 146 of the Online Safety Act 2021 sets the penalty for not complying with an Industry Standard at 500 penalty units – currently, AU$156,500 (a bit more than US$100,000).
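For those checking the arithmetic: a penalty unit is a Commonwealth-indexed dollar amount, so the headline figure is just units times the current rate (AU$313 per unit at the time of writing – the rate is indexed periodically, so check before relying on it):

```python
# Sketch of the penalty calculation under s 146 of the Online Safety Act 2021.
# The AUD-per-unit rate below is the one current when this was written; it is
# indexed over time, so treat it as an assumption, not a constant.
PENALTY_UNITS = 500    # maximum for non-compliance with an Industry Standard
AUD_PER_UNIT = 313     # Commonwealth penalty unit value (indexed)

penalty_aud = PENALTY_UNITS * AUD_PER_UNIT
print(f"Maximum penalty: AU${penalty_aud:,}")  # Maximum penalty: AU$156,500
```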
In practice, the eSafety Commissioner is unlikely to come after any site if its content isn’t problematic in their eyes. Whether you want to rely upon that is up to you. Because the legislation and standard don’t have any exemptions for small services – even with limited audiences – you are relying upon their discretion if you don’t have a risk assessment ready for them.
Improving online safety is an important task that needs more focus from society, and I’m proud that Australia is trying to improve things in this area. I’m critical of the eSafety Industry Codes and now Standards not because of their objective, but because of their unintended side effects.
Both the enabling instrument and this delegated legislation are written without consideration for the chilling effects and regulatory burden they create for parties that are arguably not their target. Requiring professional risk assessment raises costs for everyone, and creates incentives to use big commercial services rather than self-host – pushing us further toward an Internet run by a few big companies.
Moreover, if a small personal site is distributing child porn or inciting terrorism, it’s not going to be caught because it lacks a properly considered risk assessment ready to produce on demand – the eSafety Commissioner already has a range of other powers they can use in that case. They don’t have the resources to go after the countless small services out there for compliance issues, so all that will remain is the lingering chilling effect of these pointless requirements.
I get that most people will ignore these requirements, and the eSafety Commissioner is presumably relying upon that to give them the leeway to go after the people they need to target. I just think that creating laws that can be applied with so much discretion – where technically everyone is in violation, and the regulator can pick who they prosecute – is a shitty way to run a democracy.
Is it just me, or is “informational” a hole big enough to drive a truck through here? ↩
Notably, the site you’re reading this on doesn’t clearly qualify for any of them, and so when these codes are registered, I’ll likely be doing a risk assessment (and posting it), even though it doesn’t allow comments any more (because, spam). ↩
This seems to foretell the establishment of a new industry. ↩
Although it’s always tempting to write a blog entry that depicts, expresses or otherwise deals with matters of drug misuse or addiction in such a way that the material offends against the standards of morality, decency and propriety generally accepted by reasonable adults to the extent that the material should be classified RC. ↩
My preferred way of thinking of them these days, however, is as regulators. Just like the FTC in the US, the eSafety Commissioner in Australia, or the ICO in the UK, Standards Developing Organizations (SDOs) have a fundamentally regulatory aspect to them, and considering them in this way clarifies how they relate to Internet governance.
In particular, it helps to understand what kind of regulator they are, what tools they use, and the nature of the regime they operate within.
When most people think of a regulator, they assume it’s always state-backed; sovereign power (and hopefully a democratic mandate) imbues the regulator with legitimacy. As Julia Black put it back in 2002:
The core understanding that many have of ‘regulation’ is some form of ‘command and control’ (CAC) regulation: regulation by the state through the use of legal rules backed by (often criminal) sanctions. ‘CAC’ has also however become shorthand to denote all that can be bad about regulation, including poorly targeted rules, rigidity, ossification, under- or over- enforcement, and unintended consequences.
Modern conceptions of regulation are much more expansive (or ‘decentered’), encompassing not only public (government) regulation but also regulation by private actors. For example, lex mercatoria – commercial law and customs followed by merchants – goes back to at least medieval times, and is now considered a kind of regulation. States regularly defer to such ‘soft law’, and while it can always be overridden in a single jurisdiction by legal power, policymakers have strong motivations to avoid over-regulating areas that are capable of self-regulation.
Further complicating Internet regulation is its global span, which means that more than one state is involved. Transnational Private Regulators (TPRs) are non-government regulators who work across national boundaries.
Internet SDOs are often used as examples of TPRs. Other common examples include organisations like the Forestry Stewardship Council, the Fairtrade Foundation, the International Accounting Standards Board, and the ISEAL Alliance.
Cafaggi identified a few factors that have “caused and helped to accelerate the emergence of TPRs”:
Importantly, the legitimacy (and therefore authority) of a TPR isn’t based on democracy – inherently they have no demos so they cannot be democratic in the sense that a state is. Instead, they draw on other sources of legitimacy, including their input (who participates), their output (what impact they have), and their throughput (what processes they use to assure fair and good outcomes).
The regulatory tools available to Internet SDOs are specific and limited – they write down technical specifications that, on a good day, get reflected in code.
This is ‘architectural regulation’, according to Lessig. It sits alongside other modalities of regulation like law, norms, and markets. Where the FTC uses law, the IETF uses architecture – shaping behaviour by limiting what is possible in the world, rather than imposing ex post consequences.
While much of regulatory theory and practice is taken up with issues like monitoring and enforcement, architectural regulation doesn’t need those tasks to be performed; the best approximation is conformance testing (which the IETF and W3C don’t formally do anyway; they certainly don’t attempt certification).
Another interesting aspect of this form of regulation is its quasi-voluntary nature. Internet standards are optional to adopt and implement; no one is forcing you to do so. However, if they’re successful and widely adopted, they do constrain your behaviour while you’re on the Internet, because everyone else is following them. In that sense, they are mandatory.
Architectural regulation of the Internet is also constrained in how it can introduce change. While a law can be repealed or overridden by a newer law, Internet protocol standards have to consider the dependencies that people already have on infrastructure; we can’t have a ‘flag day’ where we change how the Internet works. Instead, we have to carefully extend and evolve it, working within the constraints of what people already do, because once code is deployed, we lose control.
These features provide interesting advantages to SDOs as regulators. While one might see a non-state regulator without an enforcement problem as too powerful, standards’ lack of binding force means that an SDO can’t just impose its will; its product has to be proven by market adoption. A successful, widely adopted standard is (qualified) proof of cooperation, and thus has gained legitimacy at the same time it becomes binding.
If we step back from this, we can now consider the context of this regulation – Internet Governance overall. Plenty has been written about this that I won’t attempt to summarise, but there are a couple of aspects that I’d like to point out.
First of all, there are (obviously) other regulators present too – legal regulators especially (from various governments around the world), but also others using various combinations of the regulatory modalities.
Second, Internet Governance is polycentric (also referred to as ‘regulatory pluralism’) – there is no hierarchy, and no regulator can tell another what to do. There are many sources of power (of various natures) that interact in different ways – sometimes reinforcing each other, occasionally conflicting.
Lessig talks about this (with ‘constraints’ being a synonym for ‘regulators’):
The constraints are distinct, yet they are plainly interdependent. Each can support or oppose the others. Technologies can undermine norms and laws; they can also support them. Some constraints make others possible; others make some impossible. Constraints work together, though they function differently and the effect of each is distinct. Norms constrain through the stigma that a community imposes; markets constrain through the price that they exact; architectures constrain through the physical burdens they impose; and law constrains through the punishment it threatens.
Third, the regulatory space is also fragmented. Information, authority, responsibility, and capacity to regulate are dispersed unevenly across multiple regulators. As Scott points out, ‘[r]elations can be characterized as complex, dynamic horizontal relations of negotiated interdependence.’
This means that no regulator in the space is truly independent. Standards have to operate in the legal contexts where they’re deployed; laws need to take the reality of the deployed Internet into account. Each party can act unilaterally, and might even meet their immediate goals, but the reaction to imprudent actions might be worse than the original issue they were trying to address.
Overall, this is healthy. Power is not concentrated in any one institution. States are able to claim sovereignty over what happens inside their borders, but if they differ too much from the global norm, they put at risk the economic and cultural benefits of being part of the global Internet.
Accepting the regulatory nature of SDOs leads to a few conclusions.
First, the IETF and W3C need to coordinate more closely with other regulators – especially national regulators who have their sights set on taming particular aspects of the Internet.
That doesn’t mean that SDOs should defer to national regulators – far from it. I’ve heard more than a few conversations where technical people think they need to implement the law in protocols. This is not the case, because laws are generally limited to a specific territory; countries can’t regulate the entire Internet by themselves. Furthermore, laws typically don’t apply to the standards themselves; instead, they apply to their use.
It doesn’t even mean that standards work should block on getting input from policymakers (just as policymakers don’t block lawmaking on feedback from SDOs!); doing so would introduce problematic incentives, muddy the technical decision-making process, and remove many of the advantages of private regulation.
It does mean that technical discussions should be informed by ‘policy considerations’, even if they’re ultimately dismissed. Understanding how legal regulators see the Internet, what their goals are, and how they attempt to use the regulatory tools in their hands helps technical regulators evaluate what additional constraints are likely to be layered onto the Internet. That might result in alignment between technical regulation and legal regulation, but this is emphatically not a requirement – in some cases, they might conflict.
Those conflicts should be avoided when they’re unnecessary, so SDOs need to do their part to inform legal regulators as well, particularly when their proposals have impact on the architecture.
This is not a new perspective – there has been considerable discussion in both the IETF and the W3C recently about ‘policy engagement.’ What’s different here is the emphasis on being a peer of other regulators, rather than automatically subject to them. That is fundamentally different than the relationship that most corporate policy units have with regulators, for example.
Second, this view reinforces the notion that regulation by technical standards bodies has very specific sources of legitimacy – the technical expertise that it embodies, and the demonstrated success of its output. That legitimacy might be enhanced by the unique global scope of these bodies – unlike national regulators, they are responsible for the entire Web and Internet.
That suggests the positions taken by these bodies need to be focused on their areas of expertise, rather than trying to draw on other sources of legitimacy (for example, pseudo-democratic ones, or notions of openness, although the latter does enhance their legitimacy). This is well-recognised in the IETF, where arguments like Pervasive Monitoring is an Attack are couched in technical terms, not value-driven ones.
Third, the polycentric and fragmented nature of the regulatory space suggests that it’s entirely appropriate for architectural regulators like SDOs to focus on areas where their tools are most effective.
For example, the HTTP Cookie specification has been working towards eradicating third-party cookies for some time, because they’re horrible for privacy. Some point out that this doesn’t address the privacy issues with first-party cookies - a site you’re interacting with can still track your activity, profile you, and so on.
That doesn’t mean that we should back away from regulating third-party cookies with architecture; they’re extremely amenable to this form of regulation (because of the user agency of the browser), and legal regulation of third-party cookies has proven difficult. On the other hand, regulating first-party privacy abuses on the Web with architecture is hard – if you interact with someone, you’re giving them your data – but legal regulation of how entities handle first-party data is on much firmer ground (provided there is a political will to do so).
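The ‘user agency of the browser’ point above is concrete: the cookie specification’s `SameSite` attribute lets the browser itself decide whether a cookie accompanies a cross-site request, which is exactly the architectural lever being used against third-party cookies. The sketch below is an illustrative simplification of the RFC 6265bis rules, not a real browser implementation:

```python
# Illustrative sketch (assumption-laden, not a browser implementation) of how
# a user agent might decide whether to attach a cookie to a request, based on
# the cookie's SameSite attribute, per the rules sketched in RFC 6265bis.

def cookie_sent(same_site: str, cross_site: bool,
                top_level_navigation: bool = False) -> bool:
    """Return True if a cookie with the given SameSite value would be
    attached to the request."""
    same_site = same_site.lower()
    if not cross_site:
        return True                  # same-site requests always carry the cookie
    if same_site == "none":
        return True                  # explicitly opted in to third-party use
    if same_site == "lax":
        return top_level_navigation  # cross-site: only on top-level navigations
    return False                     # "strict": never sent cross-site

# A tracker's SameSite=None cookie rides along on an embedded cross-site
# request; a SameSite=Strict session cookie does not.
print(cookie_sent("none", cross_site=True))    # True
print(cookie_sent("strict", cross_site=True))  # False
```

The architectural point is that this check runs in the user’s agent, so tightening it (as browsers have been doing) regulates third parties without any law being passed.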
The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing. We can achieve this only by building both a major platform as well as key apps.
The interesting implication here is that he’s not worried about being vulnerable to platforms like the Internet and the Web, presumably because they’re ‘open’, and commodities – it isn’t easy to get into a dominant position on them (huge asterisks). That cuts both ways, of course; you’re not as vulnerable when you depend on an open platform, but it’s not as attractive for the platform owner either, as Zuckerberg hints later:
The platform vision is around key services that many apps use: identity, content and avatar marketplace, app distribution store, ads, payments and other social functionality. These services share the common properties of network effects, scarcity and therefore monetization potential.
The competition law student in me is super interested in these statements, but I’ll put that aside for now to point out how Zuckerberg is being a fairly sophisticated consumer of platforms at the exact same time he’s counting on his potential customers to lack such sophistication. He doesn’t want to get locked into other folks’ platforms, while he’s counting on normal people to get locked into his (‘network effects, scarcity, and therefore monetization’).
This attitude isn’t unique to Zuckerberg and Meta, of course. In all of the discussions of the ‘metaverse’, the common theme seems to be various companies trying to corner some aspect of the market so they can see the same benefits that he talks about, without getting locked into someone else’s platform. As I put it to Pew Research over a year ago:
The ‘metaverse’ is a marketing confection with no basis in reality as of yet. Its proponents are focused on capturing a future market, not building new shared space without any single owner. There are no current efforts at interoperability, common standards, open governance or any other sign of creating what is being marketed – a peer of the web as a public, open space. As a result, what little is emerging lacks novelty; we’ve seen it before (e.g., Second Life). If it plays any role in future online life, based on what we see today the metaverse is likely to be 3D Facebook, more or less – a platform that a big tech company uses to monetise attention, in a winner-take-all marketplace.
However, I think it’s a key factor in why these platforms should be regulated – their consumers aren’t sophisticated and need to be protected.
The importance of consumer sophistication can be illustrated by considering a platform where the users don’t need to be protected to the same degree – for example, cloud infrastructure-as-a-service.
There are definitely dominant players in the IaaS market, but as a consumer, I still have choices. If I don’t like AWS for compute,1 I can go to a fairly large number of competing services, with a reasonable selection of quality and pricing. IaaS offerings, in other words, are reasonably substitutable, and therefore form a competitive market; it’s harder for Amazon (for example) to abuse their power there.2
A large contributor to this state of affairs is the nature of the market’s consumers. Businesses are notoriously wary of being locked into a particular service provider; while they might dabble with a proprietary platform for a while, when things get serious, they want an exit plan. That is usually based upon Open Source or Open Standards.3
Let’s call IaaS a ‘business platform’ – a sizeable portion of its consumers are highly sophisticated (or at least highly wary), and that creates a force against abuses of that platform, and one, dare I say it, against centralization.
At the other end of the spectrum, then, would be a ‘consumer platform’ whose customers are not sophisticated – like the metaverse. Its users don’t think about the systemic effects of their actions, they just want the fun, new, shiny thing. The fact that a company packages it up and makes it easy to use is a bonus.
At the scale that consumer platforms operate, it’s not reasonable to think that your actions could have any impact. If I don’t use Facebook, they don’t notice; I just lose out on some connectivity with my friends. Collective action is in theory possible, but the barriers to it are significant – especially when the most obvious coordinating function is the platform itself.
These problems don’t surface as often in business platforms. A large customer can demand that a platform be opened (in some fashion), and they’ll be listened to; I’ve seen it. Customers usually have some form of ‘relationship manager’ that listens to their concerns; try getting hold of support for a modern consumer platform.
Of course, this is not a binary; some consumer platforms have business users, and vice versa. Enough business users on a consumer platform might allow unsophisticated users to free ride on the influence that the sophisticated users exert; that probably happens on ‘pure’ business platforms too.
The environment for launching new platforms has changed dramatically in the last few years, and should continue to be… dynamic… for the foreseeable future. It’s gratifying to see regulatory energy being focused on consumer platforms for all of the reasons above. Competition regulators often have an additional mission of protecting consumers, so this is entirely appropriate.
However, bad regulation can cause considerable harm. As I’ve mentioned before, I have concerns about ill-considered regulation fragmenting the Internet, thereby reducing its intrinsic value, and I have concerns about ossification, where big tech players are ‘locked into’ their current roles, even though evolution is one of the most important properties of the Internet.
That’s one reason I’m so interested in interoperability standards. I continue to believe that platforms that are developed, maintained and governed in the open, with a focus on the public good and broad stakeholder participation – e.g., through open standards – are most likely to benefit their users, and an important part of the regulatory mix in Internet governance. There is a lot more ground to cover in this area, and I’m hoping to write more about it soon.
My draft submission to the Australian Senate’s Inquiry on the Influence of International Digital Platforms touches on another aspect that’s important, but often ignored by regulators:
[T]he current metaverse proposals do not have the concept of a ‘user agent’ — a component charged with representing the interests of the end user when they interact with others. In the Web architecture, the browser fills this role, and while there are still significant problems on the Web, this separation of concerns has prevented many abuses.
Funnelling a platform through this kind of barrier function, where the users’ interests are protected, can prevent many – but not all – harms. However, that can’t be provided by a single company; the power differential between a well-resourced big tech company and individual, unsophisticated and poorly coordinated users is just too great.
As a result, my current thinking is that regulators should explicitly state a presumption that a consumer platform which is controlled by a single undertaking is prone to abuse of market power and consumer harm, and is therefore a target for monitoring (and potential enforcement), especially when that market becomes sizeable. Creating a platform which is open and which has a separable user agent component is evidence that the platform is not controlled by a single undertaking, avoiding this presumption.
Doing so would send a clear signal: if your platform takes off, don’t assume you can keep all of the benefit to yourself without oversight, or that you’re in sole control of how it works.
Some will argue that this style of regulation will ‘hurt innovation’, of course. I disagree; openness has fostered an astounding amount of innovation on the Internet, and in many cases, the opposite has been seen: big tech has taken what was open and created proprietary replacements that have sucked the energy out of the community efforts (e.g., web feeds). Simply put, while competition has a place in improving the Internet, cooperation has an even larger role, and it’s too often ignored.
And, frankly, I don’t want to live in a society where pouring countless billions into a platform gives you automatic rule over people’s interactions, solely because no one else spent as much and you had the first mover / network effect advantage.
Of course, this isn’t a complete solution. Open platforms aren’t automatically equitable and user-focused; creating and maintaining them takes considerable resources. Regulators would need to follow up and put appropriate pressure on selected private consumer platforms, which implies they need to be well-resourced too.
Those are problems that can be solved. In the end, what I really want is for statements like Zuckerberg’s above to be considered anachronistic and unworkable by future innovators – I want breathing room for the next big open platform to emerge.
It’s true that if I adopt more value-added and specific services, I’m more likely to get locked in. That’s a different discussion. ↩
Of course there are all sorts of specific billing and other tricks they can play to make it harder to switch; I’m discussing the market features, not their specific behaviour. ↩
Tim Bray talks about this more in ‘Lock-in and Multi-Cloud’. ↩