“Artificial Intelligence Race Halted”

Background image generated by Fotor_AI (http://www.fotor.com) using the first two sentences of this article as prompts.
By Scott Hamilton
In a surprising move this week, a group of more than one thousand people signed an open letter calling for a pause on training artificial intelligence (AI) systems more powerful than GPT-4. The non-profit Future of Life Institute drafted the letter and saw an overwhelming response from the research community. The institute argues that powerful AI systems should be developed only once “we are confident that their effects will be positive, and their risks will be manageable.”
Among the risks outlined in the letter were threats to humanity and society, mainly the spread of misinformation and the widespread automation of jobs. I am not sure how to feel about the misinformation risk, as we seem to do a fine job of spreading misinformation without the help of AI. The bigger problem is the misconception that a computer cannot lie. People tend to believe that computers are infallible and will therefore accept anything generated by an AI. This misconception comes from the assumption that an AI has no preconceived ideas or personal agenda. While computers have no emotions or opinions of their own, an AI can learn an opinion from the data used to train it.
Let’s take a simple example: during the COVID-19 pandemic, nearly every opinion against vaccination was blocked on mainstream social media. The banning of posts on the major platforms spawned a series of new, alternative platforms. The result is a fracture in the data, splitting it along political boundaries: right-wing users largely gather on different social media platforms than left-wing users. AI platforms are trained by siphoning data from online sources. ChatGPT, the platform that sparked the letter, was trained on data gathered mainly from Facebook, Apple and Google, all of which were censoring right-wing political ideas during the training period. As a result, ChatGPT leans left politically and cannot be trusted to give non-opinionated results. Other AI platforms could be trained on a broader set of data, but the issue remains that a computer can be trained to hold a political stance. Herein lies the risk of an AI spreading misinformation.
As for the automation of jobs, we have already seen cashiers, front-line fast-food workers and assembly-line workers replaced by computer algorithms and robotics. The real risk is that GPT-4 has proven effective at far more advanced tasks. You can use GPT-4 to generate legal documents like wills and trusts, replacing legal aides; to write news stories about current events; and even to generate source code for new computer programs. We are entering a time when even jobs once considered safe from automation are beginning to be replaced by computer algorithms and systems.
With GPT-4, OpenAI developed one of the “smartest” AIs to date; it passed the bar exam and earned top scores on several AP exams. The Future of Life Institute’s open letter stated that it was not calling for a pause on AI development in general, but rather a step back from the “dangerous race.” I have personally been wary of AI technologies for a long time, because we will be placing a great deal of confidence in systems programmed and trained by a very small portion of the population. That has the potential to bring about major changes to society driven by the few at the top. There is a risk to personal freedom and liberty as these AI systems become integrated into every aspect of life. I am glad I am not the only one working in the technology industry who has begun to see the risks of AI.
What shocks me the most is that the list of names on the letter includes several of the top researchers in AI and the heads of some of the largest AI companies. It is as if the heads of all the major oil and gas companies signed a letter calling for more investment in green energy and a decline in the use of oil products. This makes me believe that GPT-4 has frightened the leaders of the AI industry with how rapidly it is learning and the sheer number of tasks it can accomplish with the resources at its disposal. We are drawing ever closer to a “SkyNet” scenario unless we pause and learn how to properly constrain AI to prevent a dangerous outcome.
Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.