Mark Nottingham

What's Missing in the 'Agentic' Story

Friday, 24 April 2026

Internet and Web

For much of the history of computing, it was reasonably safe to assume that a machine was doing what you told it to do (and what its creators promised it would do), because its operations were local.

You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files. You bought a spreadsheet and a word processor, and those programs performed those tasks and didn’t do anything else. Software that didn’t do this was in a separate bucket called ‘malware’ and we had ways of dealing with it.

That assumption has a more general precedent in tools – whether they be staplers, screwdrivers, or telescopes. When you buy a screwdriver, it turns screws; it has no agency of its own. It might do other things, but that’s because you’re misusing the tool, not because it decided to do something else. Most things that people use unambiguously follow this pattern: for example, my mechanical wristwatch can’t do anything but tell me the time.[1]

That pattern is perpetuated in most[2] depictions of computers in fiction (especially sci-fi), where they work diligently for people and always on their behalf, usually with minimal intrusion. They unambiguously act in their users’ interests, reflecting the technological optimism that informs much of the genre and inspired a generation of nerds to try to build what it imagined.

All of these experiences combine to lead people to trust computers fairly unquestioningly; they don’t give much thought to the other purposes that might be served. When I use my phone, it’s my phone, and so it’s working for me, right? The press reinforces this: recently, I saw an article in a major newspaper about how to talk to “your” AI agent.

If you scratch the surface just a bit, however, none of this is true when applied to modern technologies, and these assumptions are not safe.

The State of Trust on the Internet

Every time you use an Internet-connected computer, you’re trusting someone (and most likely, a multitude) to act on your behalf. From an application’s code all the way down to the silicon, the software, hardware, and network services you rely on embed the interests of those who create them – and those interests may or may not be aligned with yours.

Critically, those layers are usually – but not always – arranged in such a way that the interests of their producers and users are aligned. People creating computer chips are competing with other people creating chips, and so they focus on that; if they try to abuse their position by (say) exfiltrating your passwords in a side channel, the market (and possibly a legal regulator) will punish them.

However, modern businesses have become adept at exploiting the gaps in this arrangement. Now, if you use a ‘smart’ watch or your phone to check the time, it’s likely more accurate, but you have to contend with the possibility that it’s reporting your location, activities, and who knows what else back to its creator – and that they might be sharing that information with others. And that’s also the case for every other application running on the device.

Those abuses aren’t obvious, and it’s very easy for people to look at an Internet-connected device and fail to recognise that even though it’s “theirs” and the data it processes is also “theirs”, they’re placing an inordinate amount of trust in a galaxy of faceless parties – trust that may not be deserved or protected. For example:

This is just a small selection; there are many more. All of these are stunning violations of trust. And this is becoming normal.

How did we get here? If I were to speculate, I’d say it’s a combination of the normalisation of cloud computing (because everything now runs on, or is connected to, computers you don’t control); investors’ expectations of ever-higher growth and returns, which pressure companies to find new and recurring revenue; and – more than anything – the weakness of any regulating forces on these actors.

User Agents are a Form of Collective Bargaining

Although it’s difficult to trust anyone on the Internet given the examples above, it could be much, much worse. Imagine if you had to install a program on your computer from every company, government body, and other entity that you interact with, and those programs had full access to do whatever they liked on your system. In other words, every online interaction would become an opportunity to install malware that could extract your personal information, delete files or hold them to ransom, profile and monitor your behaviour, and generally ignore your interests in favour of its creator’s.

What prevents that on the modern Internet? In many cases, it’s the humble Web browser, which selectively exposes capabilities to Web sites without offering full access to your computer. This is called a User Agent – software that acts on your behalf, representing your interests in your interactions with other parties.
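To make that mediation concrete, here’s a minimal TypeScript sketch of the capability-gating idea. Every name in it is hypothetical – no real browser API looks like this – but it shows the shape of the role: sites never touch the machine directly; they ask the user agent, and the user agent answers on the user’s behalf.

```typescript
// Hypothetical sketch of a user agent mediating capabilities.
// None of these names come from a real browser API.

type Capability = "geolocation" | "camera" | "cookies" | "filesystem";

class UserAgent {
  // What each origin has been granted, decided per user, not per site.
  private grants = new Map<string, Set<Capability>>();

  // Sites can only act through this gate; there is no other path
  // to the underlying machine.
  request(origin: string, cap: Capability): boolean {
    const granted = this.grants.get(origin) ?? new Set<Capability>();
    if (granted.has(cap)) return true;
    if (!this.askUser(origin, cap)) return false;
    granted.add(cap);
    this.grants.set(origin, granted);
    return true;
  }

  // Stand-in for a permission prompt. In a real browser, which capabilities
  // are promptable at all is fixed by public standards, not by the site.
  private askUser(origin: string, cap: Capability): boolean {
    console.log(`Allow ${origin} to use ${cap}?`);
    return cap !== "filesystem"; // e.g. some capabilities are never granted
  }
}

// A site asking for more than the deal allows simply doesn't get it.
const ua = new UserAgent();
console.log(ua.request("https://example.com", "geolocation")); // true (after prompt)
console.log(ua.request("https://example.com", "filesystem"));  // false
```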

And while the Web browser is representing your interests, it’s also balancing them with the interests of the sites that you visit – it’s an agent for them too. They want the page to render in a predictable way, but some users want to use accessibility tools. People don’t want to be tracked, but sites need some indication of how their pages are consumed. For the Web, all of these delicate tradeoffs are made within a framework of shared principles and values and decided in transparent fora using consensus processes – namely, the relevant standards bodies (usually, the W3C or IETF). There’s also more than one Web browser, so you can choose the agent that best represents your interests – thereby creating market pressure to do so.

Importantly, this is done in a way that results in the same deal for everyone. If you had to negotiate what Web sites are allowed to do on your computer on a case-by-case basis, you’d quickly give up out of exhaustion (and indeed, we see this in cookie banners, a notable failure). In the bargain between big sites and individual users, the sites have more bargaining power and therefore users’ interests need to be considered holistically – not on a case-by-case basis where sites can chip away at them. A browser embeds what is effectively a global treaty between sites and users.

That’s not to say that Web browsers are perfectly aligned with users’ interests; the fights over DRM and advertising/tracking show that there’s disagreement on what the right balance is, or even on what those interests are. User agents can also just get it wrong; for example, Google retained users’ data from Chrome’s private browsing mode.

As I’ve argued before, Web browsers also show a distinct lack of ambition. While they protect the data and capabilities on your computer, and (mostly) isolate Web sites from each other, they don’t work hard enough to protect the data you give to sites by creating higher-level capabilities.

Despite those shortcomings, Web browsers are a good example of how user agency should be done. There are other platforms that aspire to represent users’ interests – for example, iOS and Android. These, however, are single implementations where all of the decisions are made opaquely by a lone corporation. The checks and balances on their power are very limited and very different to those on Web browsers.

Why AI Needs User Agency

It’s notoriously difficult to predict how Large Language Models are going to change the world in the long term. That said, everyone is excited about the possibility of ‘agentic’ AI, with many breathlessly predicting that it will transform, well, everything. Briefly, the idea is that an LLM with tool-calling capabilities can act on your behalf – i.e., be your agent.
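For readers who haven’t seen one, the core loop is simple. Here’s a minimal TypeScript sketch – everything in it is hypothetical (callModel stands in for any LLM API, and the tools are illustrative) – of what ‘an LLM with tool-calling capabilities’ amounts to.

```typescript
// Hypothetical sketch of the agentic loop: the model either answers,
// or asks for a tool to be run and sees the result.

interface ToolCall { name: string; args: Record<string, string>; }
interface ModelReply { toolCall?: ToolCall; answer?: string; }

// Stand-in for a real LLM API call; here it just answers immediately.
async function callModel(transcript: string[]): Promise<ModelReply> {
  return { answer: `done: ${transcript[0]}` };
}

// Illustrative tools; a real agent might expose e-mail, payments, files...
const tools: Record<string, (args: Record<string, string>) => Promise<string>> = {
  read_calendar: async () => "Friday 10:00 – dentist",
  send_email: async (args) => `sent to ${args.to ?? "?"}`,
};

async function runAgent(task: string): Promise<string> {
  const transcript = [task];
  for (let step = 0; step < 10; step++) {
    const reply = await callModel(transcript);
    if (reply.answer !== undefined) return reply.answer;
    if (reply.toolCall !== undefined) {
      // Note what's absent here: nothing constrains which tools exist or
      // what they're allowed to do. That gap is what this post is about.
      const tool = tools[reply.toolCall.name];
      transcript.push(tool ? await tool(reply.toolCall.args) : "unknown tool");
    }
  }
  return "gave up";
}

runAgent("book my dentist appointment").then(console.log);
```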

Putting aside the question of where we’re at in the hype cycle, the models of agency being discussed here are relatively simplistic compared to Web browsers. That’s largely because there’s no single definition of what an AI agent or chatbot does and does not do – it’s just a concept at this point. As a result, unless you write your own agent (or have AI do it for you), you’re using a piece of software that embeds others’ interests, with little in the way of accountability, checks, or balances. While it claims to work for you, you have little assurance that it’s actually doing so.[3]

That lack of trustworthiness cuts both ways. The services and data sources that an agent consumes have little visibility into how what they provide will be used, because the agent could be doing anything – unlike a Web browser, which puts some rough guard rails around how a Web site’s data is used and creates expectations about capabilities and behaviour.

In other words, AI lacks a well-defined user agent role backed by transparent, public standards that embed checks and balances on both parties to an interaction – and that gap makes it harder for a marketplace to form.

That’s not to say that there isn’t a place for agentic AI without a well-defined concept of a user agent role. Agents in limited domains that have assumed trust – like inside enterprises and with their third-party vendors – will likely thrive without one, because the contractual relationships between those parties will regulate their behaviour. And of course, we’re already seeing accelerating adoption of AI chatbots for accessing information online, even though they are currently opaque and unconstrained.

However, that will limit the usefulness and application of agentic AI. Using agents written by other people will require a leap of trust similar to that required when using Android or iOS – and it’s not clear whether the companies that write them will be worthy of that trust, especially if they proliferate. Likewise, online data sources will be reluctant to trust random agents because they don’t know what will happen to their data – an agent could use it for its stated purpose and then dispose of it responsibly, or it could store or republish it.

Some proposals for AI agents assume that putting agentic code in a TEE or similar ‘jail’ will solve these problems, but that ignores the need to bargain collectively – if agents can ask for intrusive permissions, we’re pretty much guaranteed a world where they constantly bug us for them. Everyone loses in that environment, because trust will be regularly abused and thereby eroded.

Another alternative is to have AI experiences locked up in proprietary platforms. Consider, however, what kinds of experiences that will lead to:

It is no accident that Meta is interested in smart glasses. With built-in cameras, lenses that can display WhatsApp messages and speakers that direct sound straight to the ear, the devices only make it easier for users to share what they are up to on social media and follow what others are doing. For Meta, more time spent on its platforms means more ad revenue. Amazon would likewise be delighted to have its Echo speakers in every home and its glasses on every face to gather more data for its growing ad business and make it even easier to buy from its marketplace. And OpenAI would be well served if people ditched their screens and relied instead on a chatbot to handle their interactions with the digital world.

The Economist

Defining a user agent role for AI agents would also make them more legible to legal regulation. With such a strong focus on “AI safety” by regulators today, an architecture that assures certain properties could be an important component of a solution in this space – not only creating more competition, but also forestalling more onerous legal regulation.

Finally, although allowing AI agents to be anything promises lots of opportunities, placing constraints upon them not only helps users and services build trust in them; it also helps people conceptualise what they do. Simply put, users are confused when technology offers too many choices. It’s understandable that industry doesn’t want to constrain the options for agents at this early point in their development, but at some point that wide-open nature is going to hurt more than help. The vast majority of people don’t understand what’s happening when they use computers, nor should they be expected to.

What an AI User Agent Might Look Like

The problem with developing an AI UA now is that, by nature, it has to put constraints on how AI is used – at a time when everyone is still exploring what AI is. Being an agent means carefully considering consequences and balancing competing interests, and this is easy to get wrong.

Consider, for example, the Ring camera. Amazon thought it was unambiguously good to allow the police to use a network of cameras to find ‘bad guys’, and that turned out to be not just naive, but disastrously wrong. Allowing people to opt out was not sufficient to balance the interests here – what was lacking was a principled approach to rights in their architecture.

I suspect this is one of the reasons Apple is taking so long to enhance Siri. It’s easy to install OpenClaw and let it wreak havoc on your personal data (promoting what used to be malware into something people install wilfully!); it’s a lot harder to build an ecosystem that respects user rights, creates market opportunities, and doesn’t burden the user with an avalanche of choices. If everyone is operating their own isolated, bespoke environment, we lose the collective power of agency – both for users and for the market.

It might be that a whole new platform (whether from Apple, OpenClaw, or elsewhere) gets developed, or it might be that AI capabilities are organically added to the Web. Projects like A2UI also show some small steps in this direction.

In general, though, creating an agent role for AI – with all of the benefits to the user and market that brings – will require constraining the tools that it can call in a fashion that becomes ‘normal’, so that people can depend on how it behaves. That might involve standard tool APIs with appropriate constraints, permission models, sandboxing (TEE or otherwise), and much more.
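As a sketch of what ‘standard tool APIs with appropriate constraints’ might mean, here’s a hypothetical TypeScript fragment – none of these names come from any real specification. Tools declare the scopes they need up front, and a policy shared by all conforming agents (the ‘global treaty’ again) decides what’s allowed, rather than each agent bargaining with each user.

```typescript
// Hypothetical: a shared policy over declared tool scopes, so every
// conforming agent behaves predictably and prompts are rare and meaningful.

type Scope = "read:calendar" | "send:email" | "read:files" | "spend:money";

interface ToolManifest {
  name: string;
  requiredScopes: Scope[];
}

// The 'treaty': the same deal for everyone, set by standard rather
// than negotiated per agent, per site, or per user.
const standardPolicy = {
  allowed: new Set<Scope>(["read:calendar", "send:email"]),
  requiresConfirmation: new Set<Scope>(["spend:money"]),
};

function checkTool(m: ToolManifest): "allow" | "confirm" | "deny" {
  const covered = (s: Scope) =>
    standardPolicy.allowed.has(s) || standardPolicy.requiresConfirmation.has(s);
  if (!m.requiredScopes.every(covered)) return "deny"; // not up for bargaining
  return m.requiredScopes.some((s) => standardPolicy.requiresConfirmation.has(s))
    ? "confirm" // the rare prompt that actually deserves the user's attention
    : "allow";  // within the treaty: no prompt, no permission fatigue
}

console.log(checkTool({ name: "schedule", requiredScopes: ["read:calendar"] })); // allow
console.log(checkTool({ name: "buy", requiredScopes: ["spend:money"] }));        // confirm
console.log(checkTool({ name: "scrape", requiredScopes: ["read:files"] }));      // deny
```

The particulars don’t matter; what matters is that the constraints are uniform and public, so both users and services know what to expect.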

All of these issues are currently swept under the carpet of ‘security’ in many AI discussions. We need to start talking about them with more nuance. Security is a defensive posture; agency is a functional right.

But perhaps the most consequential – and hidden – aspect we should be considering is how we get to a common idea of an AI platform, including user agency. Will it be like the major mobile platforms – controlled by private, well-intentioned, but self-interested and conflicted actors, with competition and consumer regulation almost inevitably following? Or will it be a publicly accountable (and inevitably messy and laggy) process, like the Web?

  1. And date, and perhaps other things, depending on how complicated it is.

  2. Notable exceptions include 2001: A Space Odyssey.

  3. Beyond that provided by legal protections such as contract and product liability. Comparing that to the regulation provided by architecture is something I’ll address in another post.