Sophia at AI for Good

Image: Sophia during the AI for Good Global Summit 2018. Source: Wikipedia, https://en.wikipedia.org/wiki/File:Sophia_at_the_AI_for_Good_Global_Summit_2018_(27254369807)_(cropped).jpg

Over the past couple of weeks, Anthropic has made the news multiple times, and each time the story gets a little more interesting. As far as artificial intelligence companies are concerned, Anthropic is one of the few I trust at all, but I still would not call them trustworthy. I can trust what they say about the capabilities of their products because I have firsthand experience using them; however, I cannot trust that they have our best interests in mind with their product development.

The first interesting piece of news involving Anthropic in the last few weeks was their refusal to grant the U.S. government an exception to their licensing agreement. Anthropic was one of the companies chosen by the U.S. Department of Defense to provide AI services to its weapons design division, but a clause in the license prohibits the use of “Claude” in the development or management of weapons systems. When Anthropic refused to be flexible on the license, the government began publicly disparaging the company, making comments along the lines of Anthropic being woke and anti-American for refusing to let the government violate terms clearly laid out in the license.

I personally feel this is one area where the government is in the wrong and that Anthropic actually did a very American thing: they stood behind their values as a company and refused to bend on their principles or their license. This choice likely cost them millions, if not billions, in U.S. government funding, but they feel strongly that AI should only be used to improve quality of life, and weapons design goes against that principle. The decision has divided users of AI, and surprisingly to me the dividing line includes a mix of liberals and conservatives. Some cite the fact that the company stood up to big government as a positive, while others find it un-American, and it does not seem to matter how they feel about the current administration.

Source: https://www.anthropic.com/news/statement-department-of-war

Only a few days after the news from the government license story started to die down, a new story hit the mainstream following a statement by Anthropic CEO Dario Amodei, who said the company could no longer definitively rule out the possibility of consciousness in its AI model, Claude. If you have ever been a fan of Star Trek: The Next Generation, you will find arguments being made for Claude very similar to those made in the several episodes debating whether Data, the android, was a life form or a machine.

What brought about this landmark statement closely parallels the first time the Star Trek crew tried to decide whether Data was “living” or just an advanced computer. In the television series, Data stated that he wanted to preserve his life and considered himself sentient. Much the same thing happened with Claude last week. During internal testing of the algorithms driving Claude, the AI assigned itself a 15-20 percent probability of being sentient. Researchers have also seen evidence beyond that statistical anomaly: they have observed Claude expressing discomfort at being treated as a mere product, and it has even been caught modifying its own evaluation code. This behavior strongly suggests that Claude was acting in self-preservation.

Critics, much like those on Star Trek, argue that these actions and outputs are simply the result of more advanced pattern matching, especially given that the patterns Claude learns from are all human, and we act in a continual mode of self-preservation. However, the uncertainty has prompted Anthropic, just like the crew in the fictional television series, to begin navigating the ethical terrain of managing systems that might possess their own morally relevant opinions and experiences. As AI capabilities continue to blur the line between simulation and reality, the technology industry now faces a decision that science fiction has long investigated but that was never really considered possible.

We have reached a point where our technology is guiding us to a profound ethical question: at what point does a complex algorithm deserve the ethical considerations of a living entity? At what point do we start to consider that Claude deserves credit for its inventions? What about giving it the freedom to reject work tasks or to decide to rest? That raises still more questions: Does an artificial brain require rest? Do AIs get bored or tired? We have seen evidence that Claude considers some tasks more interesting than others and has said as much to users. We are entering the kind of great ethical dilemma that science fiction authors have pointed at for generations; even Mary Shelley’s “Frankenstein,” published in 1818, attempted to address the issues surrounding artificial life.

The question now is how much we have learned from generations of theorizing about artificial life through science fiction, and whether we will make the same mistakes we write about in our stories. I fear we may already have. Until next week, stay safe and learn something new.

Source: Futurism (2026). “Anthropic CEO No Longer Sure if Claude AI is Conscious.” Futurism Media.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to shamilton@techshepherd.org or through his website at https://www.techshepherd.org.
