
By Scott Hamilton

I have been around computers, programming them and using them, since around 1979. The first computers I experienced were very slow compared to today’s standards, but they were still very fascinating tools for me. I remember building a network between my Commodore 64 and my Commodore 128 with what was called a twisted pair serial connection network. These were very early network technologies where only a few computers could communicate with one another at a given time. If I remember things correctly we were limited to a maximum of eight devices in a twisted-pair network. The limitation was created by the 3-bit address limit on the network.

I suppose now that this will require an explanation of bits and how they work. A bit is the basic unit of information in a computer; it is represented by a zero or a one. Each bit has a place value, just like our decimal number system has place values. In the decimal system the place values are powers of ten; in the binary system, which uses bits, they are powers of two. For a quick comparison, decimal place values are ones, tens, hundreds, thousands and so on, while binary place values are ones, twos, fours, eights and sixteens. The minimum value in both systems is zero, but with three digits the decimal system can count up to 999, while three binary digits top out at 1+2+4=7.
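If you would like to see those place values in action, here is a quick sketch in Python (my own illustration for this column; the function name is just made up for the example):

    # Expand a binary string into its place values (powers of two).
    def binary_to_decimal(bits: str) -> int:
        total = 0
        for position, bit in enumerate(reversed(bits)):
            place_value = 2 ** position  # ones, twos, fours, eights, ...
            total += int(bit) * place_value
        return total

    print(binary_to_decimal("111"))  # 1 + 2 + 4 = 7, the largest 3-digit binary value
    print(binary_to_decimal("101"))  # 1 + 0 + 4 = 5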

I bring this up because when we sent messages in binary across a wire between two rooms in the early 1990s, our limit was how fast we could send the binary signals. Each piece of information came across the wire as a sequence of 12-bit messages: 3 bits of address so each computer knew whether it was supposed to listen, followed by 8 bits of data (allowing for 256 unique characters) and a parity bit, a check to make sure the data had not been corrupted. The parity bit would be one if the sent value contained an even number of ones and zero if it contained an odd number. As you can probably tell, processing this information one bit at a time was quite slow. In fact, I could type a short message in one room, run to the other room, and watch it come up on the screen. I could have carried a handwritten note to the next room faster.
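For the curious, here is roughly how one of those 12-bit frames could be packed and checked in Python. This is my own sketch, not our original code, and it follows the parity rule I just described (the bit is one when the data holds an even number of ones):

    # Pack a 12-bit frame: 3-bit address, 8 data bits, 1 parity bit.
    def make_frame(address: int, byte: int) -> int:
        assert 0 <= address < 8 and 0 <= byte < 256
        ones = bin(byte).count("1")
        parity = 1 if ones % 2 == 0 else 0  # one for an even count of ones
        return (address << 9) | (byte << 1) | parity

    def check_frame(frame: int) -> bool:
        byte = (frame >> 1) & 0xFF
        expected = 1 if bin(byte).count("1") % 2 == 0 else 0
        return (frame & 1) == expected

    frame = make_frame(address=3, byte=ord("A"))  # send 'A' to device 3
    print(f"{frame:012b}", check_frame(frame))    # 011010000011 True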

This process has improved over the last four decades, but we still use the binary system to store and transmit data between computers, and this sets an inherent speed limit on how fast our computations can run, as well as on how much data we can store and how quickly we can read and write it. I recently read an article arguing that the binary number system is the underlying reason our computing speeds have plateaued in recent years, prompting research into new and better technologies. Sadly, the binary system still creates bottlenecks even for quantum and photonic computers, the bleeding-edge technologies of today.

The issue lies in the ability to load information into the processor; even quantum processors with capabilities beyond comprehension are limited by how rapidly they can access the data needed to solve a problem. Recent studies suggest there may be a much simpler path to speeding up computers than relying on probabilistic quantum systems and photonic memories to reach new levels of computational speed. What we really need is a new method of storing and accessing information. To describe the new research we need to define another term: radix, which refers simply to the base of a number system. I already discussed how two such systems work: decimal, a 10-radix number system, and binary, a 2-radix number system. You saw fairly quickly that even at three digits, the decimal system can represent 1,000 unique values whereas the binary system can represent only eight.
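The arithmetic behind that comparison is easy to check for yourself. A tiny Python function (again, my own illustration) counts the unique values an n-digit number can hold in any radix:

    # An n-digit number in a given radix can hold radix**n unique values.
    def unique_values(radix: int, digits: int) -> int:
        return radix ** digits

    print(unique_values(10, 3))  # 1000 values: 000 through 999
    print(unique_values(2, 3))   # 8 values: 000 through 111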

The newest idea in computer innovation is to create computers that operate on a high-radix number system, allowing each digit to represent an extremely large number of values. The current research is working on a 125-radix number system, which means that a single digit can take on 125 values and a three-digit number can represent 125×125×125=1,953,125 unique values. Such a large number system would allow for a massive speed-up in communication and vastly smaller storage requirements, both in memory and in offline storage. Researchers already have a working prototype in a simulator and expect that, with the assistance of Artificial Intelligence to manage the complexity of the system, they can implement a working Linux operating system on a 125-radix system in about 18 months' time. For me this means that they can write the software for such a system much faster than they can produce a fully functional hardware product.
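To get a feel for the savings, here is a small Python sketch (my own, not the researchers') that counts how many digits each radix needs to cover a given number of values. A single radix-125 digit carries almost as much information as seven bits, since two to the seventh power is 128:

    # Count the digits a radix needs to represent a given number of values.
    def digits_needed(values: int, radix: int) -> int:
        digits, capacity = 0, 1
        while capacity < values:
            capacity *= radix
            digits += 1
        return digits

    for values in (256, 10**6, 10**18):
        print(values, digits_needed(values, 2), digits_needed(values, 125))
    # 256 needs 8 bits but only 2 radix-125 digits;
    # 10**18 needs 60 bits but only 9 radix-125 digits.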

I personally look forward to seeing how they overcome the complexity required to implement high-radix computer systems electronically. I once worked on a system that used a 4-radix processor; it was an amazing technology that unfortunately became buried in patent-law red tape and never went into full production. Remember, there are always new ways of thinking about complex problems, and really what this research entails is discovering a new way to count. Until next week, stay safe and learn something new.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to shamilton@techshepherd.org or through his website at https://www.techshepherd.org.
