Fixes for a post-truth world

With disinformation now ranked as one of Australia's key security threats, the race is on to limit the impact of coordinated climate disinformation campaigns.


The platforms once used to connect our communities are now blighted by division and disinformation, bringing darkness to the digital town square.

As false information on climate and renewable energy, amplified by artificial intelligence, explodes across social media platforms, its impact on society has been ranked as one of Australia's primary security threats.

Last month, the final report of the Australian Senate's Select Committee on Information Integrity in Climate Change and Energy was tabled, delivering key findings as well as suggested solutions (see box below).

While the committee heard that misleading messaging around climate change was, in some cases, driven by authentic community concerns, the online conversation was increasingly being dominated by ideological, political and commercial narratives that aimed to delay climate action and protect established business models for sectors such as oil and gas.

The social media platforms hosting the content also featured in the report for their use of algorithms able to amplify falsehoods, a cycle worsened by the rise of "AI slopaganda" and astroturfing – the fake grassroots campaigns that are secretly funded by corporate interests.

"Overall, the deteriorating information ecosystem has significant impacts on the Australian policy landscape, with climate mis/disinformation confusing public understanding of climate science, reducing support for action on climate change, and delaying renewable energy projects – with the associated economic opportunity costs, particularly for regional areas," the Select Committee report said.

Democracy on the line

Christian Downie, Professor in the School of Regulation and Global Governance at the Australian National University, explained in a recent article that misinformation was defined as the spread of false information, regardless of whether there was an intent to harm or mislead.

"When an individual or organisation spreads misinformation with the intent to influence public opinion, this is known as disinformation," he wrote.

"Both are important. In democratic societies, public opinion is the link between what people want, their electoral behaviour, and what politicians do on their behalf."

Downie warned that if false narratives begin to warp or sever this link, our democratic societies could unravel. "We only have to look across the Pacific to the United States to see what this looks like in real time," he wrote.

The Select Committee heard evidence that both misinformation and disinformation had played a part in influencing public opinion on renewable energy infrastructure such as wind farms and batteries as well as electric vehicle adoption.

Committee testimony from a NSW farmer, as reported by RenewEconomy, described how a 500-kilowatt-hour community battery in Narrabri with strong local support was later blocked by the council following a misinformation campaign run on Facebook that called into question its safety.

Survivors of the 2019 Black Summer bushfires also revealed in the inquiry how misinformation about the causes of the devastating fires was spread online, dividing local communities and giving rise to victim blaming and abuse.

The Select Committee report confirmed it had heard "concerning reports of bushfire survivors, landholders, and community group members alike being subject to harassment, intimidation, physical abuse, and even death threats".

The committee report noted the growing use of astroturfing as a mainstream tactic, often dressed up as legitimate community concern, "where so-called 'grassroots' campaigns were in fact highly coordinated and well financed, often with links to think tanks, commercial/corporate interests, lobby groups, donors or political parties".

Divide and profit

The digital platforms themselves were also found to play a critical role in the spread of false narratives across communities.

With automated bots and generative AI becoming increasingly adept at mimicking real human activity on social media, experts say tools for mass manipulation can be deployed at a relatively low cost for those seeking to sway public opinion on pivotal community issues such as renewable energy infrastructure.

This can be achieved by flooding the zone with false information prior to critical events such as elections and referendums to shift consensus, experts say.

Audrey Tang, a Taiwanese digital democracy expert who has served as the nation's first Minister of Digital Affairs, told ABC's The Matter Of Facts documentary that three quarters of the time, people see generative AI as more human than actual humans.

"It quickly becomes very easy to orchestrate entire villages of fake people that all look very real, shaping the political opinion in a very insidious way," Tang said.

Tang advocates for countering AI-based disinformation attacks on democracy by tapping into the collective wisdom of the people to recommend solutions. To achieve this, the Taiwanese government sent SMS messages to a random sample of the population to recruit a diverse civic jury.

"Every time something attacks society, society needs to very quickly make sense of it and determine what to do with it. I think this is a classic case of the people's collective wisdom is actually much better than the politician's instinct," Tang told the ABC.

To this end, a Taiwanese citizens' assembly had proposed several solutions for combating deepfake and disinformation content, with the broader aim of establishing more effective AI regulation, Tang said.

Another regulatory approach gaining traction is watermark legislation to help identify AI content. Two landmark measures leading the charge are the European Union's AI Act and California's SB 942 (often called the California AI Transparency Act), reflecting growing global momentum behind AI governance and industry standards.

AI-specific legislation aims to ensure that AI-generated images, video, and audio that meet specific criteria are marked to identify their synthetic origin. Different approaches by legislators in each region may include a combination of latent disclosures (hidden metadata) and manifest disclosures (visible watermarks), introducing a layered approach to AI transparency for generative AI platforms with large numbers of users.

Fake facts

The introduction of stricter laws around "deepfakes" that depict real people or events is also being prioritised globally, with legislation often targeting AI content creators as well as the generative AI platforms.

While the focus so far has been on sexualised deepfake content, legislation is also expected to target deepfake content created to disrupt democratic processes prior to events such as elections and referendums.

In 2024, ACT senator David Pocock created AI-generated videos of the prime minister and opposition leader, which he said demonstrated the need for stronger AI legislation. The fake content, reported by the ABC, depicted the prime minister promising a total ban on gambling ads.

Pocock said the videos were created to highlight the ways AI could be misused in election campaigns. In November last year he went on to introduce a private Senator’s bill to tackle deepfake AI, titled "My Face, My Rights".

The proposed bill seeks to strengthen the eSafety Commissioner’s powers to respond to AI-generated harm, including the power to issue removal notices and formal warnings, and provide civil redress through the courts for those wrongfully depicted or exploited via deepfake material.

“Artificial intelligence has progressed much faster than our laws have been able to keep pace with, meaning it’s now child’s play to create highly realistic fabricated images, videos, and audio recordings,” Senator Pocock said. 

“We need to make sure people in our community can benefit from new technologies but are also protected from the very real harms they can inflict."

Although well received by safety and privacy advocates, the Senator's website indicates the major parties are yet to support the bill.

Select Committee recommendations:
The Australian Senate Select Committee on Information Integrity in Climate Change and Energy made 21 recommendations in its report. However, the committee also reinforced calls for a nuanced approach that did not dismiss legitimate community concern or stifle public debate. Its recommendations include:
►That the Australian government adopt the UN's Global Principles on Information Integrity and officially endorse the Declaration on Information Integrity on Climate Change, launched at COP30 in Belém, Brazil.
►That the government seek higher-quality data from digital platforms, reported to the Australian Communications and Media Authority with thematic breakdowns covering climate and energy content, denominator data, removal actions and paid advertising.
►That the government explore funding models for independent monitoring support to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms.
►Strengthening digital and media literacy through the Australian curriculum and providing more funding and support for regional and independent media outlets.
