Synthetic Creation
Last week I wrote about “Synthetic Salvation,” the transhumanist view of life after death. A transhumanist believes there is no afterlife and therefore focuses their energy and research on extending life, both naturally and artificially. A lot of very interesting research comes out of transhumanism; among the leading technologies is the development of stronger and more human-like artificial intelligence. OpenAI is one of the leading research firms in artificial intelligence, and its latest creation does something very interesting.
OpenAI’s latest announcement is its ChatGPT application. ChatGPT is a sibling of its earlier InstructGPT application. Both are very interesting inventions, to say the least. InstructGPT was built on GPT-3, a large language model designed to read, write and interpret human language. The biggest problem with self-trained models like GPT-3, meaning models that learned only by reading human documents with no direct human feedback, is that they tend to generate outputs that are untruthful, toxic or reflect harmful sentiments. This happens because they are trained simply to predict the next word in a given phrase based on a large set of Internet text; the predictions are not aligned with what the end user actually wants. The same kind of language-model technology underlies the predictive text in services from companies like Facebook and Google, as well as the auto-correct on text messaging. We have all had a major failure with auto-correct.
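To make “predict the next word” concrete, here is a toy sketch in Python (my own illustration, far simpler than anything OpenAI uses): a tiny model counts which word follows which in its training text, then always proposes the most common follower. Its core weakness is the same as in the big models: it repeats whatever patterns its training text contained, true or not.

    from collections import Counter, defaultdict

    # Toy training text; a real model reads a huge slice of the Internet.
    text = "the cat sat on the mat the cat ate the fish".split()

    # Count, for every word, which words follow it and how often.
    following = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        # Propose the most frequent follower seen during training.
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # prints "cat", the most common continuation
    print(predict_next("cat"))  # prints "sat" (ties go to the word seen first)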
InstructGPT was trained with reinforcement learning from human feedback (RLHF) rather than the blind training of GPT-3. During the training phase, where the AI learns the best responses, staff at OpenAI ranked several of the model’s candidate answers from best to worst, teaching it which response a human would expect. The payoff was dramatic: human evaluators preferred the responses of the 1.3-billion-parameter InstructGPT model over those of the original 175-billion-parameter GPT-3, a model more than one hundred times its size. As you can guess, though, this approach gives the AI a bias toward the political, religious and cultural background of the trainers who label the data. InstructGPT’s main job was to take English instructions and perform a given task, for example: “Explain quantum computing to a six-year-old in a few sentences,” and the AI will respond with a truthful and easy-to-understand description of quantum computing.
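Here is a toy illustration of that ranking idea (again my own sketch, not OpenAI’s method): a stand-in “reward model” scores candidate answers the way human labelers might, and the system favors the highest-scoring one. In real RLHF the reward model is itself a neural network trained on thousands of human rankings.

    # A stand-in for a learned reward model: prefer polite, concise answers.
    # In real RLHF this function is a neural network trained on human rankings.
    def reward(response):
        score = 0.0
        if "please" in response.lower():
            score += 2.0                # politeness bonus
        score -= len(response) / 100.0  # small penalty for rambling
        return score

    candidates = [
        "Quantum computers use qubits, which can be 0 and 1 at once.",
        "That is a dumb question.",
        "Please imagine a coin spinning in the air: it is not yet heads "
        "or tails. A qubit works a little like that spinning coin.",
    ]

    # During training the model is nudged toward answers the reward
    # model scores highest; here we simply pick the winner.
    print(max(candidates, key=reward))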
ChatGPT took the original idea of InstructGPT and expanded on it to handle much more complex tasks. ChatGPT can take complex instructions and produce things like a complete research paper on a given topic, a lease agreement that favors the landlord or tenant in a given state, or a recipe built from the list of ingredients in your pantry. I have to agree that the advancements being made in AI are astonishing, but the end goal of AI, to me, is a little scary.
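For readers who like to tinker, OpenAI also publishes a Python package for calling its models from code. Here is a minimal sketch; the model name below is only an example, and package details change over time, so check OpenAI’s current documentation before relying on it:

    # Minimal sketch using OpenAI's Python package (pip install openai).
    # Assumes an API key is set in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name; check the current docs
        messages=[{
            "role": "user",
            "content": "Explain quantum computing to a six-year-old "
                       "in a few sentences.",
        }],
    )
    print(response.choices[0].message.content)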
Elon Musk began his development of the brain-computer interface because he fears self-trained AIs will begin to see humans as the enemy, and this tight integration will allow humans to remain in control of the advanced AI. I have researched AI for a number of years and remember my first experience with it. Around 1979 there was a twenty-questions program written for the Apple II computer, designed to learn by asking you questions. It was far from a real AI, as it simply stored your answers in a database for future reference. I never forgot the time I was thinking of an elephant and it guessed a mouse. It asked me what the difference was between an elephant and a mouse, and I accidentally typed, “An elephant has a truck.” From then on, every time it was about to guess elephant or mouse it asked, “Does it have a truck?” There was no way to remove the false information from the AI. The same thing happens with AIs today: no matter how well trained the AI, if it is fed false information it will keep giving the wrong answer, because a computer cannot tell fact from fiction.
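That Apple II program almost certainly worked like the classic “Animals” guessing game, whose entire “mind” is a tree of yes/no questions. Here is a compressed Python reconstruction (mine, not the original code): whatever the player types at the learning prompt is grafted into the tree verbatim, which is exactly how “Does it have a truck?” became permanent. The program has no notion of truth, only storage.

    # Reconstruction of the classic learning guessing game (not the
    # original Apple II code). The whole "mind" is a tree of questions;
    # player answers are stored verbatim, true or false.

    tree = {"guess": "mouse"}  # start out knowing a single animal

    def play(node):
        if "question" in node:
            answer = input(node["question"] + " (y/n) ")
            play(node["yes"] if answer == "y" else node["no"])
        elif input("Is it a " + node["guess"] + "? (y/n) ") == "y":
            print("I win!")
        else:
            animal = input("I give up. What was it? ")
            question = input("What yes/no question separates a " + animal +
                             " from a " + node["guess"] + "? ")
            # Graft the new "fact" in permanently; "yes" leads to the new
            # animal. Nothing checks whether the question is actually true.
            node["question"] = question
            node["yes"] = {"guess": animal}
            node["no"] = {"guess": node.pop("guess")}

    while True:
        play(tree)
        if input("Play again? (y/n) ") != "y":
            break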
It scares me to think of an AI that finds the “Terminator” movies and believes we are out to kill it. We are getting very close to a world where computers learn on their own, and we may not always be able to control what they learn from.
To learn more about ChatGPT, or even give it a try, you can visit http://chat.openai.com and sign up for a free account. I might let it write my next article on transhumanism just for fun. Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.