“Does AI impact our thoughts?”

Sophia at AI for Good

Image from Wikipedia (https://en.wikipedia.org/wiki/File:Sophia_at_the_AI_for_Good_Global_Summit_2018_(27254369807)_(cropped).jpg) of Sophia during the AI for Good Global Summit.

By Scott Hamilton

A few months ago I wrote about a study done by a group of psychologists studying Artificial Intelligence (AI). In that article I explained how an AI is directly influenced by its creators, making it highly possible that any AI will carry a pre-programmed bias inherited from its programmers. Last week I came across an article about a group of AI experts who have shown that the influence may also run in the other direction.

Psychologists Lucia Vicente and Helena Matute from Deusto University in Bilbao, Spain, published a paper in Scientific Reports describing a troubling finding: as more professionals rely on AI tools to assist in the decision-making process, they may inadvertently adopt the opinions of the AI.

The pair conducted an experiment in which two groups of medical doctors chose a diagnosis and treatment plan for a patient. One group used an AI system to assist with the diagnosis; the second relied only on the information provided. Both groups were made up of well-trained doctors, and both received the same information about the patient and symptoms. Yet the group using the AI trusted the incorrect diagnosis it provided, even though they had the same information as the unassisted group.

The researchers went on to show that the AI’s influence was lasting.

“In our research, we hypothesized that people who perform a (simulated) medical diagnostic task assisted by a biased AI system will reproduce the model’s bias in their own decisions, even when they move to a context without AI support,” the researchers wrote.

In other words, their study showed that a biased AI can lead people to make wrong medical decisions, and that the effect persists even after the AI’s assistance is removed.

In the press release about the study, the researchers warned that humans are at “risk of getting trapped in a dangerous loop.” The main concern is that nearly eight decades of electronic computing have conditioned us to trust the results a computer produces: given the correct information, we assume it will provide a correct result. The old programmers’ adage “garbage in, garbage out” means that bad input produces bad output. We have been conditioned to believe the converse as well, that good input guarantees good advice, which is not entirely true: an AI trained on biased data can give bad advice even when our input is flawless.

The negative biases of an AI can affect our decision-making process and cause us to make poor decisions, and the more we trust the AI, the more likely we are to continue down a path of wrong decisions. The study’s conclusion was that we may need more regulation of AI to guarantee its free and ethical use. I am also absolutely certain that this is just the beginning of a new field of psychology – the study of AI–human interaction and its influence in both directions.

Every time I read about these influences and the seeming personalities of AI, it makes me wonder whether we are coming ever closer to achieving an AI that is self-aware, sapient, and sentient, like Data from Star Trek: The Next Generation. In the series, the android Data learns more and more about human life and comes closer and closer to achieving his desire for a human emotional experience, but never quite reaches it. When I chat with modern AI-based search tools like ChatGPT and Bing’s chatbot, at times they can feel almost human, but the human aspects soon fall away.

Until next week, stay safe and learn something new.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.
