“AI on Antiques”
By Scott Hamilton
I am always fascinated by the people who work so hard to keep the old 8-bit computers not only functional, but useful. This past week I read about one such 8-bit enthusiast who developed and documented a generative AI tool for the Commodore 64 (C64). I have written about new uses for the C64 in the past, including quantum processor simulation. This week we prove once again that, although modern computers have gotten a bit faster and have much larger memory capacity, they still cannot do anything we could not do in the 1980s.
Nick Bild developed a generative AI that produces the 8×8-pixel sprites used in classic game design. He is among a fairly large community of classic gaming and PC enthusiasts who still develop new games for these classic systems. The idea behind his new AI was to inspire new game design concepts. I think that to grasp the importance of what Bild accomplished, you need a little background on how generative AI works and some of its modern uses.
Generative AI is the process of using a computer model, trained on a large collection of existing images (gathered, for example, from internet image searches), to create new images that resemble the training data. Some of the newer uses of generative AI include generating images of people doing dangerous or exciting stunts. A great example is to take several images, or a video, of a person walking across a field and then generate a photo of that person surfing on lava. The generative AI uses the video of the person walking to learn how that particular person moves; this allows it to generate new images of the person in different poses. From that point the AI can create images of the person doing any number of activities. Modern generative AI systems create images containing millions of pixels, while the C64 is working hard to generate images that are merely eight pixels square; but the ratio of model size to output resolution is roughly the same in both cases.
There are two very interesting things we learn from Bild's experiment. The first is that, while modern AI relies heavily on massive cloud resources from companies like OpenAI and very large amounts of data, you can use much smaller data sets and far less code to get the same type of results. The second concerns where the training happens: the C64 runs a probabilistic PCA algorithm, which does require some training before it can generate usable images, and Bild took a shortcut by training the model on a modern computer. This is no different from how we use these algorithms on modern hardware, where models usually run on a laptop or PC but were trained on a supercomputer. I happen to believe, however, that given enough time, Bild could have trained the algorithm on C64 hardware by utilizing a distributed compute model and spreading the load across multiple C64s.
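For readers who want to experiment, here is a rough sketch, in Python with NumPy on a modern machine, of how a probabilistic PCA sprite generator might work. To be clear, this is my own illustration of the general technique, not Bild's actual code; the function names, the latent-dimension choice, and the placeholder training data are all invented for the example. The key point it demonstrates is that once the model is trained, generating a new sprite is just a small matrix multiply and a threshold, exactly the kind of work an 8-bit machine can handle.

import numpy as np

def fit_ppca(X, k):
    """Fit probabilistic PCA by maximum likelihood (Tipping & Bishop).
    X: (n_samples, 64) array of flattened 8x8 sprites, values 0/1.
    k: number of latent dimensions (kept small so the decoder tables
       fit comfortably in the C64's 64 KB of RAM)."""
    mu = X.mean(axis=0)
    # Eigendecomposition of the sample covariance matrix
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    sigma2 = vals[k:].mean()                 # noise variance from discarded components
    W = vecs[:, :k] * np.sqrt(np.maximum(vals[:k] - sigma2, 0))
    return W, mu, sigma2

def generate_sprite(W, mu, rng):
    """Sample a latent vector and decode it into an 8x8 binary sprite
    (the observation noise is skipped here for cleaner output)."""
    z = rng.standard_normal(W.shape[1])
    x = W @ z + mu
    return (x > 0.5).astype(np.uint8).reshape(8, 8)

# Placeholder usage: train on (here, random) sprite data on the PC,
# then run only generate_sprite on the target machine.
rng = np.random.default_rng(64)
X = rng.integers(0, 2, size=(200, 64)).astype(float)
W, mu, _ = fit_ppca(X, k=8)
print(generate_sprite(W, mu, rng))

In a real port, I would imagine the trained W and mu tables being converted to fixed-point values and stored in the C64's memory, so the machine itself only ever performs the cheap generation step.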
There has been a lot of debate over the last several years about the minimum computing requirements for “real” AI, but it seems to me that AI is possible on quite old hardware. Bild makes the claim that with skill, determination and patience, almost anything is possible, even on older computing platforms. I will say that there are much more practical applications for these older platforms than game sprite generation, but it is good to be reminded that when we utilize computers to their full capacity, anything is possible.
It always makes me wonder: if AI is possible on a mere C64 when it is utilized to its maximum potential, what could we accomplish by applying Bild's minimalist approach on a modern system that is many thousands of times more capable? The sky is the limit with our modern computing platforms, and yet instead of improving our programming techniques, we push to improve the platform. I feel we have reached the limits of what hardware alone can give us and must once again shift toward improving software if we want to advance computer science.
There is a lot to be said for the early programmers; we were trapped in a tiny box, with limited memory and limited processing power, yet we accomplished some pretty amazing things. Compared to the 1980s we have practically unlimited computing power today, and instead of pushing it to its limits with streamlined software, we are locked into using over-sized, shared coding resources.