By Scott Hamilton

I came across an interesting article about the end of the Internet as we know it. If you don’t already know the history of the Internet, it started as a means of communication between researchers on academic campuses worldwide. Its main purpose was to let researchers share ideas, papers and research topics. From there it grew to include online gaming, social media networks and online shopping. All of these changes have made our world more accessible; we can know instantly what is happening anywhere in the world. However, new research suggests the Internet is no longer a useful place for sharing research and ideas because it has become overwhelmed with false information.

Jake Renzella, Director of Studies (Computer Science) at UNSW Sydney, and Vlada Rozova, Research Fellow in Applied Machine Learning at the University of Melbourne, released a paper in defense of the “dead internet theory” late last week. They based it on a simple internet search for a series of images that have gone viral over the last few months on Facebook. The search phrase is “shrimp Jesus,” which turns up dozens of images generated by artificial intelligence (AI) that merge the body of a shrimp with the stereotypical image of Jesus Christ. Some of these images have generated more than 20,000 likes and comments.

You might wonder why this matters. The sad truth behind this particular viral meme is that not only was the image generated by AI, but a majority of the likes and comments were generated by AI as well. The “dead internet theory” claims that AI- and bot-generated content has surpassed human-generated content on the internet. So where does such a theory come from, and does it have any real merit?

The dead internet theory goes beyond claiming that the activity and content on the internet, including social media accounts, are created by automated AI agents. It also claims that these AI-generated accounts are capable of engaging with one another, creating a cycle of artificial engagement. A human no longer needs to be involved at all in creating, promoting, or interacting with a social media post for it to go viral.

As we begin to dig into the reasons why one would want to create such an environment of artificial engagement, we first turn to advertising revenue. You see, if you are a social media influencer, you get a share of the advertising revenue generated by your posts, and that share is directly related to the number of views, likes and comments your posts receive. This “proof” of interaction results in some fairly large payouts for the influencer. This would lead us to believe that the main purpose of creating an AI that generates realistic traffic to a social media account is tied directly to this revenue stream. If only that were the whole truth, we would not have to worry as much about it.

The dead internet theory, however, does not hold that this is the primary purpose of these AI bots. It claims the purpose is much more sinister: beneath the surface of these AI bots lies a well-funded attempt to support regimes, attack opponents and spread propaganda. While the “shrimp Jesus” example is fairly harmless, imagine instead that the AI interaction were promoting a political candidate. As these AI-driven accounts gain followers, the high follower count makes them seem legitimate, and people begin to trust the content these influencer accounts post.

It still might not seem that important until you realize that in Australia, 46 percent of 18- to 24-year-olds look to social media as their primary source of news, ahead of traditional outlets such as radio and television. There is strong evidence that social media and these AI bots have been used for years to influence the population. A 2018 study analyzed 14 million tweets over a ten-month period from late 2016 to early 2017 and found that bots were significantly involved in sharing articles from unreliable sources. More recently, several large-scale pro-Russian disinformation campaigns have been launched and driven by AI to undermine support for Ukraine. On X alone, one such campaign used more than 10,000 AI accounts to rapidly post tens of thousands of pro-Kremlin messages. The scale of this influence is significant.
