“The AI Lie”
Photo by Pixabay: Neuralink Implantation Chips.
By Scott Hamilton
I cannot count how many AI-generated fake news reels I have seen on Facebook, YouTube and Twitter in the last week. It has honestly made me consider going offline completely and returning to publishing only in print media. Since about this time last year, when I learned and shared that more than 70 percent of the content on social media is generated by artificial intelligence, I have wondered how long it would take before social media became a repository of completely useless information. I firmly believe that time has arrived.
I love the fact that we live in a free country, where freedom of speech and a free press allow us to say and print what we believe, but with that freedom comes the risk of untruths. I am not sure which bothers me more: the social media giants’ flagging of misinformation over the five years since the COVID-19 pandemic, or their recent failure to flag satire and fake news reels. Over the last five days I have read about a fake explosion of a water tower in Branson, the city of Rolla partnering with a Native American tribe to turn “The Center” into a hotel casino so it can stop losing money, and more than one monster hurricane heading for landfall.
I hate to admit that I have been fooled by a few of these “fake” news posts and videos, but thankfully it is not too hard to verify these things with official sources. For some reason, though, the social media giants didn’t think we were capable of researching information about the pandemic or alternative medicine cures; they had to flag those posts as misinformation and hide them. Yet somehow we are suddenly smart enough to research news stories posted by seemingly legitimate news sources.
At first I missed the misinformation flags, but then I realized that if the platforms could flag things as misinformation, their flags could also be wrong and mark the truth as a lie; the flags brought us nothing in the way of understanding the truth behind a post. What brought on all this discussion about blatant lies posted as truth across social media is an opinion piece I read earlier this week titled “The AI Prompt That Could End the World,” by Stephen Witt. He describes an AI agent engineering something dangerous like a lethal pathogen – some sort of super-coronavirus – to eliminate mankind. Dr. Yann LeCun, one of the leading experts in AI, says the risks of such a thing happening are very real. In 2023 he said, “You can think of AI as an amplifier of human intelligence.”
A quick look at history shows that nearly every major scientific discovery has been used first for war. When nuclear fission was discovered in the 1930s, physicists concluded within months that it could be used to build a devastating bomb. While every AI engine in use today has at least some prompt filtering in place to prevent it from helping invent things like bombs and diseases, this filtering is not enough, because it does not control the autonomous layer of activity driving the response. We need some method of control over the hidden thinking process of the AI; without it, we can never remove the risk.
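To make that gap concrete, here is a minimal sketch of what surface-level prompt filtering looks like, assuming a simple keyword blocklist sitting in front of a model call. The blocklist, function names and stand-in model below are illustrative assumptions on my part, not any vendor’s actual safety system.

```python
# A minimal sketch of surface-level prompt filtering, as described above.
# The blocklist, names, and canned "model" are illustrative assumptions,
# not any real vendor's safety system.

BLOCKED_TERMS = ("build a bomb", "engineer a pathogen", "bioweapon")

def is_blocked(prompt: str) -> bool:
    """Refuse prompts containing an obviously dangerous phrase."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def run_model(prompt: str) -> str:
    """Stand-in for a real language-model call (hypothetical)."""
    return f"[model response to: {prompt!r}]"

def answer(prompt: str) -> str:
    if is_blocked(prompt):
        return "I can't help with that."
    # The limitation: only the prompt's surface text is checked.
    # Whatever internal reasoning the model performs, and any
    # multi-step agent behavior built on top of it, is never inspected.
    return run_model(prompt)

if __name__ == "__main__":
    print(answer("How do I engineer a pathogen?"))       # refused
    print(answer("How would one design a novel microbe?"))  # sails through
```

A reworded request sails right past the blocklist, and nothing in this pipeline watches what the model, or an agent built on top of it, actually does with a prompt once it gets through. That is the hidden layer the filtering never touches.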
We can already see evidence that AIs are excellent at telling lies and very convincing at generating false news events. You may remember the 1983 movie “WarGames,” in which a teenager accidentally breaks into a national security computer that controls the nuclear missile silos; he thinks he is playing a game and nearly starts World War III. Now imagine that an AI makes up a story of Russia launching a nuclear attack against Ukraine, and that story somehow makes it to the White House, where it is believed. We would have the potential for World War III caused by an AI-generated fake news story. I would hope our government would be wise enough to verify such information before acting, but if our response system is automated, it is possible for an AI to lie to the system and trigger a counterattack against a non-existent attack.
These kinds of risks have caused a lot of panic in the AI industry, almost to the point of triggering another AI bubble collapse. I have lived through three of them, where the hype around the technology fell short of expectations and the money for research disappeared. The risk to AI today is, in a way, the opposite of the past ones: AI has become so good at generating believable content full of lies that it is becoming untrustworthy. It was once thought of as an amazing research companion; the fact that it makes up stories and presents false facts makes it a rather useless tool for research. Unfortunately, it is now at the heart and soul of the Internet. It may be time to start over with fresh content generated only by people for people, instead of by AIs for people.
Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to shamilton@techshepherd.org or through his website at https://www.techshepherd.org.
