We have mentioned this before, but here is fuller information on the role that Mark Zuckerberg’s company, Meta, is going to play in the voice referendum. We know that Facebook was involved in influencing the 2020 US election, if only by banning discussion of the Hunter Biden laptop revelations. “Meta has been preparing for this year’s voice to parliament referendum for a long time, leaning into expertise from previous elections.” No doubt.
Consequently, Meta’s present move to engage in fact checking is extremely problematic, since we simply have no reason to trust its predominantly Left-wing fact checkers to be fair to positions they do not like. In a nutshell, this is, in my opinion, an attempt by Big Tech to influence the referendum by reaching the young through social media. The No campaign and the Liberal Party need to challenge Meta’s stated role in court, beginning now. Who checks the fact checkers, and who watches the watchmen?
“Facebook and Instagram want to be “contributing to democracy” and not exacerbating harms surrounding the Indigenous voice referendum, the company’s Australian policy head has said, as the social media giant beefs up protections on misinformation, abuse and mental health before the national vote.
Meta, the parent company of the two apps, on Monday announced it would boost funding to factcheckers monitoring misinformation, activate global teams to locate and respond to potential “threats” to the referendum – including coordinated inauthentic behaviour – and form a partnership with ReachOut for mental health support to Aboriginal and Torres Strait Islander people. The company will also maintain transparency tools such as its ad library that tracks political spending.
“We are also coordinating with the government’s election integrity assurance taskforce and security agencies in the lead-up to the referendum,” said Mia Garlick, Meta’s director of public policy for Australia. “We’ve also improved our AI so that we can more effectively detect and block fake accounts, which are often behind this activity.
“Meta has been preparing for this year’s voice to parliament referendum for a long time, leaning into expertise from previous elections.”
In an exclusive interview with Guardian Australia, she said Meta would tap Australian knowledge to respond to abuse and hate speech.
“We have hate speech advisory groups and First Nations advisory groups giving insight and advice on issues they see on the ground,” she said.
“Building off our experience with the marriage equality postal survey and elections, unfortunately when a particular group is the focus of debate, vulnerable groups can feel more vulnerable.”
The health minister, Mark Butler, has voiced concern about the potential impacts on Indigenous people of the referendum debate; the eSafety commissioner and mental health agencies including Beyond Blue and Black Dog say they are already seeing increasing reports about abuse and racism online.
Social media experts have warned of the possibility of fake accounts or other forms of inauthentic activity being used during the referendum campaign.
Liberal senators last month claimed there was “a clear risk” the voice referendum “could be used as another vehicle to subvert Australia’s democracy”.
In a report of a Senate committee inquiring into the referendum, Liberal members wrote that while a foreign actor may not necessarily have an interest in the referendum’s result, foreign actors may be interested in “exacerbating existing tensions within Australian society as a means of undermining social cohesion and national unity, and of harming Australia’s democratic institutions and processes”.
Garlick said Meta’s teams investigating inauthentic behaviour and influence operations had not detected any such attempts in Australia but that the company would continue seeking to delete fake accounts.
Meta’s new micro-blogging app Threads, its answer to Twitter, is an “extension of Instagram” and follows all that platform’s community standards and safety rules, Garlick said.
The tech giant is also rolling out media literacy programs with Australian Associated Press, encouraging users to check their facts and learn how to judge the veracity of online information. It will also run a “safety school” training session for politicians, advocacy groups and charities in July, with information about Meta’s tools and policies.
“We want to make sure we’re ready for any potential issue or concern,” Garlick said. “We know people will use the apps we provide to debate and advocate.
“We want to make sure it’s happening in a way that’s contributing to democracy, and not causing people to feel they’re exposed to harmful content and hate speech.”
Leaders and groups behind the no campaign against the referendum have already had several online posts and ads flagged as “false information” by factcheckers. Garlick said posts ruled to be false would have their distribution significantly reduced, and be slapped with a warning label linking to a factcheck article.
“We want people to use tools to express themselves, connect, engage – but if you have concerns, it’s easy to report content for review or action,” she said.”