“The History of CGI”

By Scott Hamilton
I recently read a post on Facebook about computer-generated imagery (CGI) and its use in film-making. You cannot go to the movies today without seeing CGI content, and I must say the technology can make for some very exciting film scenes. However, the post left me questioning the true history of CGI.
I remember from my college years in the late 1990s that computer graphics were just beginning to become mainstream; prior to that, most computers were used for text-based operations, and only a few home computers could produce much more than block-like imagery. Granted, the film industry had access to much more powerful computers than the general public. The article linked on Facebook claimed that the first film to utilize CGI was the movie “Tron,” released in 1982. I thought it was worth doing some research to find the real history of the technology.
I was shocked to find that the first use of CGI in film-making actually took place in 1958, well before computers were in mainstream use for creating imagery of any kind. John Whitney used a nearly half-ton World War II anti-aircraft targeting computer and some simple mechanical tricks to create the famous endlessly swirling spiral in the opening sequence of Alfred Hitchcock’s thriller “Vertigo.” That accomplishment immediately demonstrated the potential of CGI in film-making. As we moved into the 1960s, computers became more capable, allowing for more advanced graphics. The decade also brought the first 3D computer models, used to create a human face and, eventually, an entire short film: a 49-second animation of a car traveling down a planned highway, created at the Swedish Royal Institute of Technology.
CGI broke into the mainstream in the 1970s, when the seminal film “Westworld” used the technology to show the world through the eyes of Yul Brynner’s android badman. The graphics were created using a combination of existing film technology and computer processing, resulting in the eye-catching red-and-yellow imagery that was intentionally pixelated to enhance the effect. In case you don’t know, “pixelated” means the image was converted into a grid of large, block-like squares, like the square pixels you see if you zoom in too far on a digital photo today.
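If you like to tinker, here is a minimal sketch of that idea in Python using the Pillow imaging library; the file name “photo.jpg” and the block size are placeholders I chose for illustration, not anything from the film’s actual production process. Shrinking an image and then enlarging it again without smoothing turns every block of original pixels into one big square.

    # A rough sketch of pixelation, assuming the Pillow library is installed
    # (pip install Pillow). The file name and block size are placeholders.
    from PIL import Image

    BLOCK = 16  # every 16x16 patch of the original becomes one big square "pixel"

    img = Image.open("photo.jpg")

    # Shrink the image so fine detail is averaged away...
    small = img.resize((max(1, img.width // BLOCK), max(1, img.height // BLOCK)))

    # ...then blow it back up with no smoothing, leaving visible square blocks.
    pixelated = small.resize(img.size, resample=Image.NEAREST)
    pixelated.save("photo_pixelated.jpg")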
CGI was first seen as a mainstream film-making technology in the 1982 movie “Tron,” which demonstrated to the world the potential CGI held by creating an entirely new virtual world. The film marked an important step forward, showing that computer graphics could be put to uses few had imagined. It opened the door for “Star Trek II: The Wrath of Khan,” released the same year, to place its actors in alien landscapes. The techniques pioneered in these two films helped make computer graphics one of the most compelling reasons to keep improving computing power.
The 1990s brought a wave of mostly CGI-based animated movies, along with some impressive live-action films in which CGI characters interacted with real actors; among the most impressive were “Terminator 2” and “Jurassic Park.” By 2009, the film “Avatar” brought CGI characters fully to life: the majority of the film consisted of CGI characters, with only limited live-action footage. The “Lord of the Rings” trilogy generated entire armies for its battle scenes, as well as fantasy characters. CGI led to new interest in 3D movies in the early 2010s, and in recent years the technology has been used to make older actors look like their younger selves.
As CGI continues to advance, it creates new problems. The term “uncanny valley” originated in the 1970s, referring to the unsettling feeling produced by robots that look almost, but not quite, human. CGI has brought that problem to films: it is now possible to generate CGI characters realistic enough to be mistaken for real people and CGI scenes realistic enough to be mistaken for real events. These graphics capabilities have also enabled some impressive technologies outside of film, especially in the medical field. As a result of CGI improvements made primarily for the film and video game industries, we can now see unborn babies in 3D on ultrasound machines, view fully rendered organs from PET scans and MRIs, and detect disease much earlier than before.
We also use CGI in art and architecture; you can now take a virtual walk through a new home, building, or even an entire city, generated from architectural drawings, before the land is cleared or a foundation is laid. You can see how a new piece of furniture will look in your room before bringing it home, see how new siding will look on your house before it is installed, and even try on virtual glasses before buying your next set of frames. CGI is also used to create models for 3D printing that can be turned into real products. It amazes me that in well under a century we have gone from special effects drawn by hand on film negatives for “The Wizard of Oz” to entire films generated by computer and video games that render real-time player interactions with 3D avatars. Where will the technology go in the future?
Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.