Mark Nottingham

What's Missing in the ‘Agentic’ Story

Friday, 24 April 2026

Internet and Web

For much of the history of computing, it was reasonably safe to assume that a machine was doing what you told it to do (and what its creators promised it would do), because its operations were local.

You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files. You bought a spreadsheet and a word processor, and those programs performed those tasks and didn’t do anything else. Software that didn’t do this was in a separate bucket called ‘malware’ and we had ways of dealing with it.

That assumption has a more general precedent in tools – whether they be staplers, screwdrivers, or telescopes. When you buy a screwdriver, it turns screws; it has no agency of its own. Most everyday tools follow this pattern — my mechanical wristwatch can’t do anything but tell me the time.1

That pattern is perpetuated in most2 depictions of computers in fiction (especially sci-fi): they work for people diligently and always on their behalf, usually with minimal intrusion. They unambiguously act in the interest of their users — an expression of the technological optimism that influenced a generation of nerds who tried to build it.

All of these experiences combine to lead people to trust computers fairly unquestioningly; they don’t give much thought to the other purposes that might be served. When I use my phone, it’s my phone, and so it’s working for me, right? This is perpetuated in the press: recently, I saw an article in a major newspaper about how to talk to “your” AI agent.

If you scratch the surface just a bit, however, none of this is true when applied to modern technologies, and these assumptions are not safe.

The State of Trust on the Internet

Every time you use an Internet-connected computer, you’re trusting someone (and most likely, a multitude) to act on your behalf. From an application’s code all the way down to the silicon, software and hardware and the network services they use reliably embed the interests of those who create them – interests that may or may not be aligned with yours.

Critically, those layers are usually – but not always – arranged in such a way that the interests of their producers and users are aligned. People creating computer chips are competing with other people creating chips, and so they focus on that; if they try to abuse their position by (say) exfiltrating your passwords in a side channel, the market (and possibly a legal regulator) will punish them.

However, modern businesses have become adept at exploiting the gaps in this arrangement. Now, a ‘smart’ watch tells the time more accurately — and may also be reporting your location, activities, and who knows what else back to its creator, who may pass it along. The same is true of almost every other app you run.

Those abuses aren’t obvious, and it’s very easy for people to look at an Internet-connected device and fail to recognise that even though it’s “theirs” and the data it processes is also “theirs”, they’re placing an inordinate amount of trust in a galaxy of faceless parties – trust that may not be deserved or protected. For example:

This is just a small selection; there are many more. All of these are stunning violations of trust. And it’s becoming normal.

How did we get here? If I were to speculate, I’d say it’s a combination of the normalisation of cloud computing (because everything now runs on, or is connected to, computers you don’t control); investors’ expectations of ever higher growth and returns, which pressure companies to find new and recurring revenue; and – more than anything – the weakness of any regulating forces on these actors.

User Agents are a Form of Collective Bargaining

Although it’s difficult to trust anyone on the Internet given the examples above, it could be much, much worse. Imagine if you had to install a program on your computer from every company, government body, and other entity that you interact with, and those programs had full access to do whatever they liked on your system. In other words, every online interaction would become an opportunity to install malware that could extract your personal information, delete your files or hold them to ransom, profile and monitor your behaviour, and generally ignore your interests in favour of its own.

What prevents that on the modern Internet? In many cases, it’s the humble Web browser, which selectively exposes capabilities to Web sites without offering full access to your computer. This is called a User Agent – software that acts on your behalf, representing your interests in your interactions with other parties.
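To make that mediation concrete, here’s a minimal sketch using the standard Permissions and Geolocation APIs (the example itself is illustrative): a site can’t simply read your location; it has to ask the user agent, which prompts you and enforces whatever you decide.

```typescript
// A site cannot read the user's location directly; it must go through
// the user agent, which prompts the user and enforces their decision.

async function whereAmI(): Promise<void> {
  // The user agent will report the permission state, but no data.
  const status = await navigator.permissions.query({ name: "geolocation" });
  console.log(`geolocation permission: ${status.state}`); // "granted" | "denied" | "prompt"

  navigator.geolocation.getCurrentPosition(
    (pos) => console.log(pos.coords.latitude, pos.coords.longitude),
    // If the user says no, the site gets an error, not the data.
    (err) => console.log(`refused: ${err.message}`)
  );
}

whereAmI();
```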

While the browser represents your interests, it’s also balancing them against the sites you visit — it’s an agent for them too. Sites want predictable rendering; users want accessibility tools. Users don’t want to be tracked; sites need some signal of how their pages are consumed. These tradeoffs are negotiated in the open, through standards bodies like the W3C and IETF — and because there’s more than one browser, you can pick the agent that best represents you, which creates market pressure to do so.

Crucially, this gives every user the same deal. Forced to negotiate site-by-site, individuals would lose: sites have far more bargaining power than any one of us, and we’d give up out of exhaustion (cookie banners are the proof). A browser instead negotiates on behalf of users collectively, and embeds what is effectively a global treaty between sites and users.

Browsers aren’t perfect — fights over DRM and tracking show real disagreement about where the balance lies, individual implementations get it wrong (Google collected users’ data even from Chrome’s private browsing mode), and as I’ve argued before, browsers show a distinct lack of ambition in creating higher-level abstractions.

Despite those shortcomings, Web browsers are a good example of how user agency should be done. There are other platforms that aspire to represent users’ interests – for example, iOS and Android. These, however, are single implementations where all of the decisions are made opaquely by a lone corporation. The checks and balances on their power are very limited and very different to those on Web browsers.

Why AI Needs User Agency

It’s notoriously difficult to predict how Large Language Models are going to change the world in the long term. That said, everyone is excited about the possibility of ‘agentic’ AI, with many breathlessly predicting that it will transform, well, everything. Briefly, the idea is that an LLM with tool capabilities can act on your behalf – i.e., be your agent.

The models of agency being discussed here are relatively simplistic, when you compare them to Web browsers. That’s largely because there’s no single definition of what an AI agent or chatbot does and does not do – it’s just a concept at this point. As a result, unless you write your own agent (or have AI do it for you), you’re using software that bargains on your behalf without any of the checks, balances, or collective leverage a browser provides. It claims to work for you; you have little assurance it does.3

That lack of trustworthiness cuts both ways. The data and services that the agent consumes have little visibility into how they will be used, because the agent could be doing anything – unlike a Web browser, which puts some rough guard rails around how a Web site’s data is used and creates expectations about capabilities and behaviour.
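The same-origin policy is one of those guard rails: the browser withholds one origin’s data from another unless the first origin explicitly opts in via CORS headers. A minimal sketch (the origin names here are made up):

```typescript
// Running on a page served from https://example.com, this fetch only
// succeeds if the other origin's response carries a header like
//   Access-Control-Allow-Origin: https://example.com

async function readOtherOrigin(): Promise<void> {
  try {
    const res = await fetch("https://api.other-origin.test/data");
    console.log(await res.json());
  } catch (err) {
    // Otherwise the user agent withholds the response entirely;
    // the page never sees the data.
    console.log(`blocked by the browser: ${err}`);
  }
}

readOtherOrigin();
```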

In other words, AI today has no well-defined user agent role — no transparent standards, no checks and balances on either side of the interaction. That gap makes it harder for a marketplace to form.

Agentic AI can still find a place without a user agent role. Agents in limited domains with assumed trust – like inside enterprises and with their third-party vendors – will likely thrive, because the contractual relationships between those parties will regulate their behaviour. And of course, we’re already seeing accelerating adoption of AI chatbots for accessing information online, even though they are currently opaque and unconstrained.

Beyond those bounded contexts, though, the absence of a user agent role starts to bite. Using agents written by other people will require a leap of trust similar to that required when using Android or iOS – and it’s not clear whether the companies that write them will be worthy of that trust, especially if they proliferate. Likewise, online data sources will be reluctant to trust random agents because they don’t know what will happen to the data – an agent could use it for its stated purpose and then dispose of it responsibly, or it could store or republish it.

Some proposals assume that putting agentic code in a TEE (Trusted Execution Environment) or similar ‘jail’ will solve these problems. But sandboxing isn’t bargaining. If every agent can ask for intrusive permissions, we’ll be pestered into granting them; trust will be abused and eroded, and everyone loses.

Another alternative is to have AI experiences locked up in proprietary platforms. Consider, however, what kinds of experiences that will lead to:

It is no accident that Meta is interested in smart glasses. With built-in cameras, lenses that can display WhatsApp messages and speakers that direct sound straight to the ear, the devices only make it easier for users to share what they are up to on social media and follow what others are doing. For Meta, more time spent on its platforms means more ad revenue. Amazon would likewise be delighted to have its Echo speakers in every home and its glasses on every face to gather more data for its growing ad business and make it even easier to buy from its marketplace. And OpenAI would be well served if people ditched their screens and relied instead on a chatbot to handle their interactions with the digital world.

The Economist

Defining a user agent role for AI agents would also make them more legible to legal regulation. With regulators so strongly focused on “AI safety” today, an architecture that assured certain properties could be an important component of a solution in this space, not only creating more competition but also forestalling more onerous regulation.

Finally, although allowing AI agents to be anything promises lots of opportunities, placing constraints upon them not only helps users and services build trust in them; it also helps people more easily conceptualise what they do. Simply put, users are confused when technology offers too many choices. It’s understandable that industry doesn’t want to constrain agents this early, but open-endedness has a cost: most people don’t understand what’s happening when they use computers — nor should they have to — and an unconstrained agent gives them nothing to lean on.

What an AI User Agent Might Look Like

The problem with developing an AI UA now is that, by nature, it has to put constraints on how AI is used, at a time when everyone is still exploring what AI is. Being an agent means carefully considering consequences and balancing the interests involved, and this is easy to get wrong.

Consider, for example, the Ring camera. Amazon thought it was unambiguously good to allow the police to use a network of cameras to find ‘bad guys’, and that turned out to be not just naive, but disastrously wrong. Allowing people to opt out was not sufficient to balance the interests here – what was lacking was a principled approach to rights in their architecture.

I suspect this is one of the reasons Apple is taking so long to enhance Siri. It’s easy to install OpenClaw and let it wreak havoc on your personal data (promoting what used to be malware into something people install willingly!); it’s a lot harder to build an ecosystem that respects user rights, creates market opportunities, and doesn’t burden the user with an avalanche of choices. If everyone is operating their own isolated and bespoke environment, we lose the collective power of agency – both for users and the market.

It might be that a whole new platform (whether from Apple, OpenClaw, or elsewhere) gets developed, or it might be that AI capabilities are organically added to the Web. Projects like A2UI also show some small steps in this direction.

In general, though, creating an agent role for AI – with all of the benefits to the user and market that brings – will require constraining the tools that it can call in a fashion that becomes ‘normal’, so that people can depend on how it behaves. That might involve standard tool APIs with appropriate constraints, permission models, sandboxing (TEE or otherwise), and much more.
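As a sketch of what a constrained tool API might look like, an agent runtime could refuse to invoke any tool whose declared capabilities exceed what the user’s policy grants. Every name here (Scope, ToolManifest, AgentRuntime) is invented for illustration, not drawn from any real standard:

```typescript
// Every tool declares, up front, what it may do and nothing more.
type Scope = "read:calendar" | "write:calendar" | "read:email" | "network";

interface ToolManifest {
  name: string;
  scopes: Scope[];                // capabilities the tool needs
  retention: "none" | "session";  // what it may keep, and for how long
  invoke(args: unknown, granted: ReadonlySet<Scope>): Promise<unknown>;
}

class AgentRuntime {
  // Grants come from the user's standing policy, not per-call nagging.
  constructor(private granted: Set<Scope>) {}

  async call(tool: ToolManifest, args: unknown): Promise<unknown> {
    const denied = tool.scopes.filter((s) => !this.granted.has(s));
    if (denied.length > 0) {
      // The runtime, acting as the user's agent, refuses on their behalf.
      throw new Error(`${tool.name} needs ungranted scopes: ${denied.join(", ")}`);
    }
    return tool.invoke(args, this.granted);
  }
}

// A tool that stays within its declaration...
const calendarReader: ToolManifest = {
  name: "calendar-reader",
  scopes: ["read:calendar"],
  retention: "none",
  invoke: async () => ({ today: ["09:00 standup"] }),
};

// ...is callable; one that also asked for "network" would be refused.
new AgentRuntime(new Set<Scope>(["read:calendar"])).call(calendarReader, {})
  .then(console.log);
```

A real version would need far more (auditing, revocation, a standardised scope vocabulary), but the point stands: the constraints are declared up front and enforced by the agent, rather than promised by the tool.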

All of these issues are currently swept under the carpet of ‘security’ in many AI discussions. We need to start talking about them with more nuance. Security is a defensive posture; agency is a collective bargain.

But perhaps the most consequential – and hidden – aspect we should be considering is how we get to a common idea of an AI platform – including user agency. Will it be like the major mobile platforms, controlled by private and well-intentioned but self-interested and conflicted actors – with almost inevitable competition and consumer regulation following? Or will it be a publicly accountable (and inevitably messy and laggy) process, like the Web?

  1. And date, and perhaps other things, depending on how complicated it is.

  2. Notable exceptions include 2001: A Space Odyssey.

  3. Beyond that provided by legal protections such as contract and product liability. Comparing that to the regulation provided by architecture is something I’ll address in another post.