American and Russian flags with U.S. dollars. Photo by Karolina Grabowska, used by permission from pexels.com

By Scott Hamilton

The war against AI is growing as new legislation is drafted to protect the American people from the dangers of Artificial Intelligence (AI). It has recently come to light that AI has been used to limit access to information, services and opportunities. Algorithms in use across industry and around the world have been found to be unsafe, ineffective and biased, with the heaviest impact falling on hiring and loan application processes. The problem is believed to arise because the algorithms are built on prior human decisions, so they inherit the biases baked into those decisions.

The U.S. government believes this bias has been amplified, and discrimination has grown worse, because the algorithms ingest unchecked and unverifiable information from social media networks. This information is usually gathered without the user’s consent and threatens both opportunity and privacy. These problems are deeply harmful, but they are not unavoidable.

AI has brought about new technological advancements, increasing productivity across industry. It is used to predict storms, diagnose disease and revolutionize entire sectors. AI holds a unique potential to improve every area of life, but it is important to keep it in check if we want to protect civil rights, democratic values and founding American principles.

What I find disappointing about the federal government’s fear of AI is that it seems focused on only one area: discrimination. We have much larger problems to address with advancing AI than the fact that it carries the biases of its creators. The bigger problems revolve around the safety of these systems.

Let’s take an example from recent news. On April 16, 2023, an accident investigator reported concerns to the National Highway Traffic Safety Administration (NHTSA) about unintended acceleration in Tesla vehicles, including every model manufactured since 2013. The AI behind Tesla’s parking assist technology has been found at fault in several accidents, most of them caused by the parking assist being activated inadvertently, shifting the vehicle into reverse and beginning to “park” while driving down the road. The parking assist lacks essential safeguards to prevent accidental engagement when drivers mistakenly press the pedals that activate it.

The assisted-driving system will also allow a driver to shift the car from a forward gear into reverse without applying the brake, relying on the automatic braking system to stop and then reverse the car. The NHTSA feels that this feature raises the risk of accidents by discouraging use of the pedals and creating the perfect recipe for poor driving habits. It is not likely that the recall effort will be successful, but it points to just one issue with allowing AI to have more control over our systems.

It makes me wonder if the recent increase in railroad accidents is related to similar technology. In the past, trains and railway switchyards were primarily under human control; only recently have switchyard and speed-control systems been converted to use AI in their decision-making processes, along with automating a large portion of the control systems. We are just at the beginning of AI, and the more we trust these systems, the less control we retain.

My fear is not that AI will take over our society on its own, but that corrupt government officials or foreign governments will use it to do so. Imagine a world, if you will, where all societal decisions are made by computers, from the mundane, “Do I put ketchup on your burger?” to the most life-impacting, “Is this patient eligible for cancer treatment?” A single corrupt entity could gain control over the algorithms and create unspeakable chaos. I believe we are far from a point where an AI would create chaos without human intervention, as in the “Terminator” movies, but we are very close to a point where an AI under human control could create massive problems for society. We need to be looking into the security aspects of AI far more than into the threat of discrimination, yet discrimination seems to be the focus of current AI legislation, the AI Bill of Rights, which can be seen at https://www.whitehouse.gov/ostp/ai-bill-of-rights/. Until next week, stay safe and learn something new.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.
