The 2023 Voice to Parliament Referendum was characterised by the censorship and regulation of public debate in the name of combatting ‘misinformation’. In partnership with social media companies, ‘fact checking’ organisations assessed the ‘truth’ of claims made about the referendum proposal and sought to stop the circulation of claims found to be ‘false’ or ‘misleading’.
The three fact-checking organisations in Australia—Australian Associated Press (AAP) FactCheck, RMIT FactLab, and RMIT ABC Fact Check—are all signatories to the International Fact-Checking Network’s (IFCN) Code of Principles. These principles mandate that signatories uphold commitments to fairness and impartiality, including that they ‘not concentrate their fact-checking unduly on any one side’.
An analysis of articles published by fact checkers relating to the Voice to Parliament referendum finds that the fact-checking organisations were manifestly biased and disproportionately scrutinised opponents and critics of the Voice. The results are as follows:
- 187 fact checking articles related to the Voice were published between 22 May 2022 and 14 October 2023. 170 fact checks (91%) assessed claims made by opponents of the Voice. Only 17 (9%) assessed claims made by proponents of the Voice.
- Fact checkers were far more likely to find claims by the No case to be false than claims by the Yes case. Of the articles targeting the No case, 99% were deemed false, whereas only 59% of the comparatively few articles assessing the Yes case were deemed false.
- Significantly, 8 of the 17 (47%) articles assessing the veracity of claims made by the Yes case were published after the Sky News exposé, the Fact Check Files. This suggests that even the limited scrutiny of Voice advocates was motivated by a desire to appear balanced, rather than any genuine commitment to objectivity.
- In addition to targeting one side of the referendum debate, fact checkers applied subjective standards, meaning that any opinion or statement contrary to their own biases was likely to be deemed ‘false’.
The findings of this research further highlight the dangers associated with the federal government’s proposed Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023. Under the proposed laws, major technology companies, such as Meta, would have legal obligations to remove material from their platforms that is assessed as falling into the categories of misinformation or disinformation. In practice, one of the mechanisms the technology companies use to make this assessment is ‘fact checking’ undertaken by one of the three fact-checking organisations analysed in this report. The apparent pro-government bias of these fact-checking organisations reinforces concerns that the government’s misinformation laws could result in opinions being censored on the basis of their political content.