“The Great Awakening”
By Scott Hamilton
A few years ago I read a history paper about “The Great Awakening,” a series of religious revivals in the American colonies and England during the 1700s. It was basically the church’s response to the Enlightenment. What happened during that period of history is similar to what we are facing today. During the Enlightenment, science became the focus, and people began to believe that if they understood enough about science, they could do anything they set their minds to.
I recently read another report on Artificial Intelligence (AI), and the similarities between some of the promises made by supporters of AI and the ideas that came out of the Enlightenment startled me just a little. You see, if you study history closely, you find that it has a tendency to repeat itself, and our modern Enlightenment centers on this seemingly great new technology. Honestly, it makes me ready for the next Great Awakening. I am not necessarily saying that this next Great Awakening will be of a spiritual nature, but rather an eye-opening experience, much like the Great Awakening of the 1700s was for many.
Exactly what do I think this new awakening will look like? It’s a little hard to tell, but I imagine it will have a lot to do with recognizing the dangers of AI. UC Berkeley Professor Stuart Russell and his postdoctoral scholar Michael Cohen believe that, if left unchecked, powerful AI systems may pose an existential threat to the future of humanity. We are already dealing with a myriad of problems related to AI. Among the most impactful are the spread of disinformation and algorithmic bias.
Before I go much further I probably need to explain those two issues. I will start with the easier one, algorithmic bias. It is really just a fancy term for saying that an AI takes on the personality, traits, and conscience of its creator. Regardless of how careful we think we are about keeping our personalities out of our programming, every programmer is an artist of sorts, and the code we write reflects ourselves. Algorithmic bias simply means that when a programmer writes an AI algorithm, it learns to behave like its author. There is a little bit of good news here, in that most AI systems are too complicated to be written and trained by a single individual. The bad news, however, is that most technical team managers look for programming staff who will mesh well together. This means the staff tend to have similar backgrounds, personalities, and beliefs. The end result is an AI with traits similar to those of the team that developed it.
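For readers who like to see how that happens in practice, here is a toy sketch in Python. Everything in it is hypothetical, the headlines, the labels, and the tiny word-counting “model,” but it shows the mechanism: if everyone labeling the training data shares the same viewpoint, the model learns that viewpoint as if it were ground truth.

from collections import Counter

# Hypothetical training data labeled by a team whose members all share
# one viewpoint: familiar topics get marked trustworthy, unfamiliar
# ("foreign") topics get marked untrustworthy.
training_data = [
    ("local sports coverage", "trustworthy"),
    ("local school report", "trustworthy"),
    ("foreign market report", "untrustworthy"),
    ("foreign policy brief", "untrustworthy"),
]

def train(data):
    # Count how often each word appears with each label -- a toy "model".
    counts = {}
    for text, label in data:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(model, text):
    # Each word votes with the labels it was seen alongside in training.
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(training_data)
# The model has never seen this headline, but it inherits the team's
# bias and distrusts it purely because of the word "foreign".
print(predict(model, "foreign cooking feature"))  # -> untrustworthy

Notice that no single line of that code is malicious; the bias rides in quietly on the data the team agreed upon.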
The disinformation issue is a little harder to get a handle on, but it comes down to a simple fact: you cannot blindly believe everything you read on the Internet. We all know this and have our own examples of “fake” news. For most of us it is not too hard to spot, but an AI does not easily recognize false information. AIs do not have any real-life experiences or exposure to the real world. Without those experiences, AIs take everything they read as truth, and when two facts contradict each other, an AI will choose the more popular option. While that may seem reasonable, the trouble is that more and more of our news is AI generated, and many of the comments on news articles also come from AI. By some estimates, nearly 70% of the new content on the Internet today is at least partially generated by AI. As these multiple AI personalities interact with each other on the Internet, they create an environment where it is ever easier for misinformation to be perceived as truth.
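That popularity trap is just as easy to sketch. The numbers below are entirely made up, but they show how an AI that settles contradictory claims by counting repetitions can be flipped by a wave of machine-generated articles repeating the false one.

from collections import Counter

# Hypothetical tallies of two contradictory claims found on the web.
# Suppose "claim A" is the accurate one.
observations = ["claim A"] * 60 + ["claim B"] * 40

def resolve(observed):
    # Pick whichever contradictory claim appears more often.
    return Counter(observed).most_common(1)[0][0]

print(resolve(observations))  # -> claim A (popularity happens to match truth)

# Now a wave of AI-generated articles and comments repeats the false claim.
observations += ["claim B"] * 30

print(resolve(observations))  # -> claim B (popularity now crowns the false claim)

Nothing about the counting changed between the two answers; only the volume of repetition did, and that is exactly the lever AI-generated content pulls.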
I think the next Great Awakening may just be a rude awakening, the moment we realize we have allowed AI to grow too powerful. We need to wake ourselves up and begin to figure out where to draw the line in the sand for AI. How close to the dangerous line of self-thinking and self-preservation should we allow AI to come before we shut it all down and halt the research?
Stuart Russell’s statement on AI hits home pretty hard: “Intelligence gives you power over the world, and if you are more intelligent — all other things being equal — you’re going to have more power. And so if we build AI systems that are pursuing objectives, and those objectives are not perfectly aligned with what humans want, then humans won’t get what they want, and the machines will.”
We are already giving machines bank accounts, credit cards, email accounts, and social media accounts so they can fully interact with us. We want them to seem more human and be more autonomous. The less we have to change the way we interact with each other in order to interact with an AI, the better. I know it might seem strange coming from someone who works in the industry and writes about technology, but I stopped trusting it a couple of years ago. I keep my budget, personal notes, and all my important documents on paper. I learned a long time ago that anything on a computer might as well be on a billboard, and I prefer to keep some things as private as possible. I encourage you to do the same. I think the next Great Awakening will involve a shift back to paper-and-pencil communications and record keeping as we lose confidence and trust in technology. Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.