[This is a brief overview and some curated highlights of an article published today in “The Atlantic” by Jonathan Haidt and Eric Schmidt.]
Haidt and Schmidt remind us that AI has already revolutionized the way we interact with technology, and social media platforms are no exception. With AI, they contend, social media became an even more powerful tool for targeted advertising, personalized content, and quick responses to user requests. But they argue that AI is about to make social media, in their words, “(much) more toxic.”
They point out that AI amplifies confirmation bias, the tendency to seek out and believe information that confirms our preexisting beliefs. Social media algorithms are designed to show us content we are likely to engage with, based on our past behavior. If we have a particular political leaning or interest, we are more likely to see content that aligns with those beliefs. This is not news, but they note that as AI gets better at predicting what content we will engage with, it will become even harder to break free of our biases and consider alternative viewpoints.
They also argue that AI incentivizes extreme content. Social media platforms want to keep users engaged for as long as possible, and one way to do that is to show them emotionally charged content. AI can analyze users’ engagement patterns to identify the types of content most effective at holding their attention, and unfortunately, promoting extreme content is fair game: if the goal is to “keep you captive,” your numbness must be breached. Once such content gets through a user’s filters, the user is more inclined to share it, and the community around them may come to normalize what is actually polarizing content that further pollutes and divides.
They also contend that AI fuels misinformation. Containing the spread of false information is a longstanding platform-management challenge, but AI has made the problem even more complex. Engaged for the task, AI can quickly identify and amplify false information, especially if it is emotionally charged or aligns with a user’s preexisting beliefs. It can create deepfake videos and other content that is almost impossible to distinguish from reality. As AI gets better at creating highly convincing false information, it will become even harder to distinguish truth (whatever that is these days) from highly engineered, manufactured fiction.
They offer five practical ideas to help reverse these trends: authenticate all users, including bots; mark AI-generated audio and visual content; require data transparency with users, government officials, and researchers; clarify that platforms can sometimes be liable for the choices they make and the content they promote; and raise the age of “internet adulthood” to 16 and enforce it.
I recommend reading the full 3,500 words. You can find the article here: https://www.theatlantic.com/technology/archive/2023/05/generative-ai-social-media-integration-dangers-disinformation-addiction/673940/