mark nottingham

A Safer, More Centralised Australian Internet

Sunday, 11 September 2022

There are many potential criticisms of the Online Safety Act 2021 (Cth). [1] While my own concerns are mostly about whether there are appropriate checks and balances on the eSafety Commissioner’s powers, I will give credit where due: the current Commissioner’s implementation of it has – so far – demonstrated nuance and thoughtful balancing of the legislation’s goals with the preservation and enhancement of the unique properties that make the Internet so valuable to society.

So I was interested to see the announcement of the Industry Codes, which are guidelines for handling certain kinds of content (mostly extreme violence and porn). They weren’t created by Parliament or the Commissioner; instead, representatives of different online industries across the country wrote them. If the Commissioner is satisfied, she will register them, and they will then be enforceable against those industries.

This is a pretty typical pattern in regulation – rather than a top-down, command-and-control approach, the legislation co-opts industry participants into their own regulation (sometimes called ‘meta-regulation’) to ensure that the outcome is aligned with the reality of their businesses and can be implemented.

The problem is that the legislation and the proposed codes assume the Internet is only industry – or at least that the interesting bits are. They either marginalise or ignore non-commercial providers, so that the resulting regulation will heavily favour a commercialised, ‘big tech’ future for the Australian Internet, further entrenching those interests and increasing tendencies towards centralisation.

On the Internet, Industry is Everyone

Section 136 of the Act states that

if a person is a member of a group that constitutes a section of the online industry, the person is a participant in that section of the online industry.

Note ‘person’ here – even though the legislation calls it an industry code, application of the code isn’t restricted to commercial bodies or those of a certain size – the codes, once registered, apply to all people who provide the relevant services. [2]

The eSafety Commissioner anticipated this, encouraging risk profiles to be adapted to the situation of different participants. Unfortunately, while the proposed codes do create tiers of risk profiles, they heavily favour big Internet platform providers, to the detriment of small businesses, community groups, and individuals.

For example, the proposed Designated Internet Services Online Safety Code applies to essentially all Web sites as part of that ‘section’. There is a provision for excluding some sites from the requirement to perform a risk assessment:

A provider of a designated internet service is deemed to be a Tier 3 service and not required to conduct a risk assessment where: (i) the designated internet service is a general purpose website or app or a classified DIS[.]

However, a ‘general purpose website’ is narrowly defined using a closed list:

[G]eneral purpose website or app means a designated internet service that primarily provides information for business, commerce, charitable, professional, health, reporting news, scientific, educational, academic research, government, public service or emergency service purposes and/or enables related transactions.

… while a ‘classified DIS’ adds ‘general entertainment, news, or educational content’ with certain conditions.

The Web does much more than provide those few categories of service. My own personal Web site does not clearly fit into that list, so a strict reading is that I’d be providing a Tier 2 service and so would need to undertake risk assessments, create various systems and processes, identify a trust and safety function, and so on.

Of course, that’s ridiculous, and I don’t think that’s what’s being suggested. What I do think is that the associations making these proposals have tunnel vision about what the Internet is and can be – they’re seemingly focused on technology as a corporate creation, despite the history of the Internet and even though the legislation makes the codes more widely applicable.

Merely adding new categories (like ‘personal web site’) to that list won’t help, because a key architectural feature of the Internet is permissionless innovation. New features and new kinds of services shouldn’t face barriers to entry – especially not ones that favour incumbent commercial concerns. For example, how would an online tool like REDbot be classified here?

Looking at the rest of the proposed codes, similar problems emerge. The Social Media Services Online Safety Code excludes from Tier 3 any online service with messaging, chat services, image sharing, or user profiles and connections (‘friends’). That means that local community discussion forums like Whirlpool will need to undertake expensive compliance efforts, both up front and on an ongoing basis. So will any local community effort that wants to use a message board, online forum, MUD or MOO.

This arrangement suits incumbent social media companies; more risk and cost (or the alternative, extremely limited functionality) for community-led and hosted solutions (often using Open Source) means more traffic will be driven to them, so they can monetise Australian community participation. Why would you go to the trouble of setting up a server when it’s safer (at least legally) to use Facebook?

That isn’t to say that small, non-commercial sites should escape all regulation for online safety – just that they shouldn’t be saddled with a compliance burden designed for commercial targets when the Commissioner has many other effective regulatory tools at her disposal. Also, we should recognise that non-commercial, community efforts are often much better at policing themselves than the free-for-all that is much of commercial social media.

I’ll give one more example of how myopic these proposals are (although there are more): the Equipment Online Safety Code requires a provider of an Operating System to

take part in an annual forum organised and facilitated by one of the industry associations responsible for the development of this Code […] to discuss and share relevant issues, advances and best practice in online safety with other industry participants.

and:

An OS provider must take reasonable steps to develop and implement tools within operating systems that allow Australian end-users to help reduce the risk of harm to children when using interactive (Tier 1) devices.

So, what does that mean for Linux and Linux distros? Do Australian kernel hackers need to be nervous now?

What Should Happen?

As written, these proposals create a significant compliance burden on non-commercial Web sites and Internet services. Beyond the obvious effects on freedom of expression and freedom of assembly, this is also a competition issue; by making it riskier and more costly to use self-hosted community fora, big tech companies can be seen to be consolidating the market in their favour. These effects may not have been intentional, but when an industry forum that’s effectively writing regulation for the Internet is convened by groups of large Internet companies, rather than a broader community, it’s not a surprising result.

It’s also far from proportional. One analogy that comes to mind is food safety regulation that puts family get-togethers and dinners at scout camp under the same rules as an industrial catering kitchen.

While many individuals and non-commercial groups might simply ignore the regulatory busy-work that these proposals would create, trusting that the eSafety Commissioner would use appropriate discretion, the implied risks of relying on that are significant; 500 penalty units is currently AUD$111,000 (at AUD$222 per penalty unit). The mere possibility of the large penalties that the legislation allows, combined with the high costs that compliance entails, will discourage many from trying, especially when big tech platforms offer services for free (at least financially).

So what should happen? I would hope that the eSafety Commissioner would refuse to register these codes as written, and direct the authors to engage with the community more transparently, since ‘industry’ is really everyone. As the position paper explained:

Representation may be a matter of both breadth (representing different types of participants) and depth (representing a reasonable number of participants). This does not mean that every participant, or every type of participant, must necessarily be accounted for, but that there is sufficient representation of participants, such that the industry association could be said, broadly, to be speaking on behalf of the section as a whole.

Furthermore, the risk profiles should be substantially revamped, with an eye towards excluding non-commercial concerns from the Industry Codes, and ensuring that they aren’t being used to create unreasonable regulatory burdens on startups and small competitors. In particular, automatic exclusion from the least risky tier based solely on what features are used (e.g., user profiles) fails to recognise other important factors like the nature of the service, its community, and its size.

I’d also like to see the excellent folks at the ACCC engage with eSafety regarding the issues above. They have valuable experience balancing competition issues against other concerns, and competition is relevant here.

Finally, the Commissioner’s position paper also states that

larger and more mature services should be encouraged to assist in capacity-building of smaller and newer services.

These proposals utterly fail to do so. Surprised?

  [1] See, eg, ‘Explainer: The Online Safety Bill’, Digital Rights Watch.

  [2] Arguably, the ‘member of a group’ clause might exclude a service run by one person on their own, but that’s not been tested, and still leaves churches, clubs, sporting groups, interest groups, social groups, and civil society groups squarely covered by this ‘industry’ code. Notably, the proposed codes’ smallest tiers start at zero members.