Mark Nottingham

The Internet Isn’t Facebook: How Openness Changes Everything

Friday, 20 February 2026

Tech Regulation Web and Internet

“Open” tends to get thrown around a lot when talking about the Internet: Open Source, Open Standards, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness as a system. It turns out this has profound effects on both the Internet’s design and how it might be regulated.

This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.

Open and Shut

Most often, people think and work within closed systems – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.

Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.

The Internet is not like that.

That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “permissionless innovation,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an SDO like the IETF, but often without any formal coordination.

This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.

Designing for Openness

Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.

In contrast, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact, and at the same time ensure that none is inappropriately dominated by a single actor or even a small set of them, unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.

This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve written previously about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.

For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.

At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyers and sellers) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.

Introducing Change in Open Systems

An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.

In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the ways it’s being used. This makes introducing changes tricky; as is often said in the IETF, you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time. Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.
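One common form of that built-in extensibility is the “must ignore unknown extensions” rule used by protocols like TLS and HTTP/2: old endpoints skip fields they don’t recognise rather than rejecting the message. The sketch below is a minimal, invented type-length-value (TLV) layout – not any real protocol’s wire format – showing how that rule lets new features be deployed gradually, with no flag day.

```python
# A hedged sketch of the "ignore what you don't understand" extension
# pattern. The TLV layout here is illustrative, not a real protocol:
# 1 byte of type, 2 bytes of big-endian length, then the value.

def parse_extensions(data: bytes) -> dict[int, bytes]:
    """Parse TLV extensions, silently skipping unknown types.

    An old implementation steps over types it doesn't recognise instead
    of rejecting the whole message -- which is what allows new
    extensions to be deployed before everyone has upgraded.
    """
    KNOWN_TYPES = {1, 2}  # types this (old) implementation understands
    recognised: dict[int, bytes] = {}
    i = 0
    while i + 3 <= len(data):
        ext_type = data[i]
        length = int.from_bytes(data[i + 1:i + 3], "big")
        value = data[i + 3:i + 3 + length]
        if ext_type in KNOWN_TYPES:
            recognised[ext_type] = value
        # Unknown type: skip over it rather than erroring out.
        i += 3 + length
    return recognised
```

A message carrying a brand-new extension type (say, type 9) still parses cleanly on this old endpoint; only the recognised fields are acted on. That forward-compatibility is the gradual-evolution mechanism the paragraph above describes.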

The Web is another example of an open system.1 No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), it’s a small fraction of the real thing that misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.

Governing Open Systems

Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.

At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.

On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate behaviour of Internet users is often like trying to pin spaghetti to the wall.

Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like DNS resolvers. Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative DNS servers, encrypted DNS, VPNs, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.
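To see why resolver-level blocking is so easy to route around, consider DNS over HTTPS (RFC 8484): the same DNS query is simply carried in an HTTPS request body, so a filter watching conventional DNS traffic never sees it. The sketch below builds a standard DNS wire-format query (per RFC 1035) without any network I/O; the resolver URL in the comment is illustrative.

```python
# Why layering defeats resolver-level filters: a DNS-over-HTTPS client
# builds an ordinary DNS message, then sends it as an HTTPS request body
# (e.g. POST to https://some-resolver.example/dns-query with content
# type application/dns-message, per RFC 8484), indistinguishable from
# other web traffic. Below: just the standard wire-format encoding.

def encode_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a DNS query message (qtype 1 = A record, RFC 1035)."""
    header = (
        b"\x00\x00"  # ID 0, as RFC 8484 recommends for cacheability
        b"\x01\x00"  # flags: standard query, recursion desired
        b"\x00\x01"  # QDCOUNT: one question
        b"\x00\x00\x00\x00\x00\x00"  # no answer/authority/additional
    )
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + qtype.to_bytes(2, "big") + b"\x00\x01"  # class IN
    return header + question
```

Nothing about this message changes when it travels inside HTTPS instead of over the traditional DNS port – which is precisely why a filter applied at one layer can be stepped around at another.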

That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).

On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by applying its regulatory mechanisms to all actors on the Internet, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.

Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.

None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those who are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.

Why the Internet Must Stay Open

If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage? Certainly, it’s possible for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).

I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.

The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) without permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the end-to-end principle. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to centralization.

Again, all of this is not to say that closed systems on top of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.

Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, DDoS, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – still outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.

Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.

Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.

Openness is what makes the Internet the Internet. It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.

Thanks to Konstantinos Komaitis for his suggestions.

  1. Albeit one that is the foundation for a number of very large closed systems.