In this article, Director of Research Morgan Begg contextualises and disseminates the IPA’s research on misinformation laws and how they affect the political freedom of mainstream Australians.
The resounding rejection by the Australian people of the Voice to Parliament is being met not with reflection or humility by the Federal Government, but with a promise to censor online political communication to ensure Australians don’t make the same mistake again.
In a telling moment, days after the referendum was held, Prime Minister Anthony Albanese told parliament that “people can be subject to misinformation which in some cases is just about politics but in some cases can be dangerous … you [want] to make sure that elections can be held and democratic processes can be held in an appropriate way”.
This followed days of recriminations from pro-voice advocates and politicians who attributed the referendum result to people being hoodwinked by misinformation from the No campaign.
This narrative is now the context for draft misinformation and disinformation legislation being refined by federal public servants, which would confer extraordinary powers on the Australian Communications and Media Authority and require big tech platforms to censor users.
Former chairman of the Australian Competition and Consumer Commission Rod Sims recently argued the Federal Government’s proposed misinformation laws “will help Australians better understand what is disinformation and what is simply divergent views”.
This perspective, however, appears to be inconsistent with the views of ACMA.
In a 2021 report, ACMA argued that online “misinformation” and “disinformation” are “relatively novel and dynamic phenomena” with “no established consensus on the definition of either term”.
The government’s legislative response, released in June, is of little help either.
The words used in the Bill do not refer to objective legal standards and are so broad that it would be impossible for a reasonable person to know how the rules would be enforced over time.
This violates a basic principle of the rule of law, which is that the law must be knowable in order for it to be followed and applied impartially.
Take clause 7 of the Bill, which defines misinformation as false, misleading, or deceptive content that is likely to cause or contribute to “serious harm”.
Where the line lies between serious and non-serious harm – if such a line exists – cannot be determined by reading the Bill.
And while the clause does provide a definition of harm, it is so broad as to provide little practical guidance.
For instance, the Bill seeks to outlaw harm to matters as varied as “the integrity of democratic processes”, “the Australian environment”, and the “Australian economy”.
These provisions are so broad that practically any form of speech on any contentious public policy matter would likely be captured by the Bill.
In the absence of objective standards, Australians are asked to take on trust that these concepts would not be misused by unelected, and largely unaccountable, regulators – particularly as the legislation does not provide a right of appeal or review for individuals who are censored.
Beyond the Bill’s vagueness, a central claim made by proponents of misinformation laws is that it will not be the Federal Government removing online speech; rather, that task will be outsourced to foreign big tech platforms.
Per the legislation, the role of ACMA will be to ensure that big tech platforms are maintaining and enforcing codes on misinformation.
As Mr Sims argues, it is about the “adequacy of systems, not content”.
While it is accurate to say that Part 3 of Schedule 9 of the Bill imposes obligations on big tech platforms to adopt codes on misinformation, these codes must be registered and enforced by ACMA.
In order to assess how the big tech platforms are fulfilling their obligations, ACMA will need to assess their “systems”.
ACMA will not be able to determine the adequacy of systems without also considering the content the systems are dealing with.
In other words, at the end of the day someone will need to determine if something written online is “misinformation” or not.
In the absence of clear definitions set out in legislation, that someone will be ACMA.
Whether the day-to-day censorship is carried out by ACMA or by big tech platforms is a distinction without a difference.
In the latter case, it is the private sector being used to carry out government censorship – something which has never been pursued in Australia’s peacetime history.
The indirect government model used in the Bill may actually be more censorious.
The substantial financial penalties to which big tech platforms would be liable when failing to comply with already-vague statutory obligations under the Bill—currently either $3.1 million or two per cent of the platform’s annual turnover per breach—would likely result in platforms playing it safe.
They would likely over-comply with legislation and censor more content than the law may actually require, in an effort to mitigate financial risk.
Significantly, there are penalties for failing to police misinformation, but no countervailing penalties for wrongly censoring content that is not misinformation.
The reaction by voice advocates following the referendum puts it beyond any doubt that the proposed misinformation law is about controlling the information people are allowed to see.
The practical reality is the government would be given the power to pick and choose what content counts as “misinformation”.
This would dramatically undermine the freedom of Australians to exercise their basic democratic rights to participate in public debate about almost any issue.
This article draws from forthcoming and unpublished IPA Research on Misinformation.