AI Set to Make Social Media Even More Toxic! By Brian Simpson
Jonathan Haidt and Eric Schmidt have set out another case for extreme caution about the rapid advances in generative AI. They advance four reasons for concern, which can be summarised under one: that AI will make social media even more toxic, especially for children:
1) AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation.
2) Personalized super-influencers will make it much easier for companies, criminals, and foreign agents to influence us to do their bidding via social media platforms.
3) AI will make social media much more addictive for children, thereby accelerating the ongoing teen mental illness epidemic.
4) AI will change social media in ways that strengthen authoritarian regimes (particularly China) and weaken liberal democracies, particularly polarized ones, such as the USA.
There are two responses to this from those who see a problem here. One is government regulation, which does not wash well with the liberty lobby, that is, us. The other is free market capitalism: let the market decide wisely. But that, too, is a problem, since there is no “free” market, and Big Tech has so much power that what it says virtually goes. Haidt and Schmidt offer some duct-tape-style solutions which do not go to the heart of the problem:
- Authenticate all users, including bots
- Mark AI-generated audio and visual content
- Require data transparency with users, government officials, and researchers
- Clarify that platforms can sometimes be liable for the choices they make and the content they promote
- Raise the age of “internet adulthood” to 16 and enforce it
It is far better to fight for general public education and awareness of the threats; that may create a groundswell of opposition which in turn could affect markets, as was seen in the recent consumer revolt against Bud Light.
“Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.”
https://jonathanhaidt.substack.com/p/ai-will-make-social-media-worse
“We decided to write an essay together, joining his understanding of the technology with my research on social and moral psychology. We converged upon a short list of four imminent threats, all described in our essay:
1) AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation.
2) Personalized super-influencers will make it much easier for companies, criminals, and foreign agents to influence us to do their bidding via social media platforms.
3) AI will make social media much more addictive for children, thereby accelerating the ongoing teen mental illness epidemic.
4) AI will change social media in ways that strengthen authoritarian regimes (particularly China) and weaken liberal democracies, particularly polarized ones, such as the USA.
We then began talking about potential reforms that would reduce the damage. We both share a general wariness of heavy-handed government regulations when market-based solutions are available. Still, we saw that social media and AI both create collective action problems and market failures that require some action from governments, at least for setting rules of the road and legal frameworks within which companies can innovate. We workshopped a list of ideas with an MIT engineering group organized by Eric’s co-author Dan Huttenlocher (we thank Aleksander Madry, Asu Ozdaglar, Eric Fletcher, Gregory Dreifus, Simon Johnson, and Luis Videgaray), and with members of Eric’s team (thanks especially to Robert Esposito, Amy Kim, Eli Sugarman, Liz McNally, and Andrew Moore). We also got helpful advice from experts including Ravi Iyer, Renee di Resta, and Tobias Rose-Stockwell.
We ended up selecting five reforms aimed mostly at increasing everyone’s ability to trust the people, algorithms, and content they encounter online:
- Authenticate all users, including bots
- Mark AI-generated audio and visual content
- Require data transparency with users, government officials, and researchers
- Clarify that platforms can sometimes be liable for the choices they make and the content they promote
- Raise the age of “internet adulthood” to 16 and enforce it
I hope you’ll read the essay for our explanations of why these reforms are needed and how they could be implemented.
The arrival of social media in the early 2000s was our most recent encounter with a socially transformative technology that spread like wildfire with almost no regulation, oversight, or liability. It has proven to be a multidimensional disaster. Generative AI promises to be far more transformative and is spreading far more quickly. It has the potential to bring global prosperity, but that potential comes with the certainty of massive global change. Let’s not make the same mistake again. Liberal democracy and child development are easy to disrupt, and disruption is coming. Let’s get moving, this year, to protect both.”