“Probability Computing”
Photo by the University of Kentucky: Staff members at the University of Kentucky exhibit.
By Scott Hamilton
Last week I wrote about quantum computing, the best-known class of probability computing. In case you didn’t read last week, probability computing is based on the concept of storing the probability that a bit is a one or a zero, instead of storing just the one or the zero. The model is borrowed from quantum physics, where we cannot know a property such as the direction of an electron’s spin until we measure it. In quantum computing and quantum physics, this is known as superposition.
The second class of probability computing uses p-bits, or probability bits. In classic computing, a bit is always either 0 or 1; in quantum computing, a q-bit sits in a superposition state (randomly zero or one, with equal probability in the simplest case); a p-bit, by contrast, has a directly controlled probability of being either zero or one. You could say a classic bit always has an exactly known value, a p-bit fluctuates between zero and one, and a q-bit is both zero and one simultaneously, only locking into a definite value when measured.
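To make the distinction concrete, here is a rough sketch in Python; the function names and the 0.7 bias are mine, purely for illustration:

```python
import random

def classic_bit(value: int) -> int:
    """A classic bit: always exactly the stored value."""
    return value

def p_bit(p_one: float) -> int:
    """A p-bit: fluctuates between 0 and 1, returning 1 with a
    directly controlled probability p_one."""
    return 1 if random.random() < p_one else 0

def q_bit_measured() -> int:
    """A q-bit in an equal superposition, as seen at measurement time:
    0 or 1 with equal probability."""
    return random.randint(0, 1)

# Bias a p-bit 70/30 toward one; the long-run average approaches 0.7.
samples = [p_bit(0.7) for _ in range(100_000)]
print(sum(samples) / len(samples))
```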
There is a second property of quantum computing that is actually fairly easy to replicate with p-bits: entanglement. This is where two bits directly mirror one another, always holding the exact same value, or in this case, the exact same probability of a value occurring. P-bits offer great advantages on a particular class of problems, and it turns out to be the same class of problems well suited to quantum computers: machine learning and combinatorial optimization. My favorite type of problem to think about and try to solve is the combinatorial optimization problem, and the classic combinatorial optimization problem is the traveling salesman.
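A minimal way to mimic that mirroring in software is to let a single random draw drive two p-bits at once. This is my own toy illustration of the idea, not a model of how entangled hardware actually behaves:

```python
import random

def entangled_p_bits(p_one: float) -> tuple[int, int]:
    """Two correlated p-bits: one shared random draw drives both,
    so they always hold the exact same value."""
    shared = 1 if random.random() < p_one else 0
    return shared, shared

pairs = [entangled_p_bits(0.5) for _ in range(10)]
print(all(a == b for a, b in pairs))  # True: the bits always mirror
```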
The traveling salesman problem is a path optimization: given a list of cities and the transportation options between them (train, plane, automobile), you are challenged to find the most time-efficient route that visits every city, using any combination of the available transportation. At first glance the problem seems easy: pick the closest city first, then the next closest, and so on. That greedy approach does not always work, however; heading to the closest city first can leave a centrally located city to be visited last, producing a worse route than taking a longer leg in the middle of the trip to reach it earlier. This problem is what computer scientists call NP-hard, and the number of possible routes grows factorially, even faster than exponentially, with each city added to the list. For two cities we evaluate one path, for three cities two paths, for four cities six paths, and for ten cities 362,880 unique paths.
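You can see that blow-up in a brute-force search. The four-city travel times below are invented for the example:

```python
from itertools import permutations
import math

# Hypothetical symmetric travel times between four cities (0 through 3).
times = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def brute_force_tsp(times):
    """Check every possible route starting from city 0 and keep the best."""
    n = len(times)
    best_route, best_cost = None, math.inf
    # Fixing the start city leaves (n-1)! orderings of the remaining cities.
    for order in permutations(range(1, n)):
        route = (0, *order)
        cost = sum(times[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

print(brute_force_tsp(times))  # best route and its total time
print(math.factorial(10 - 1))  # 362880 routes once there are ten cities
```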
So you might wonder how p-bits bring an advantage to problems like the traveling salesman, and it is admittedly hard to grasp, but the idea is this: you program the probabilities of the p-bits based on the distances between the cities, then search for what is called the ground state of the system, its minimum total energy. Since we are using probabilities to store the information and we know the initial state of each p-bit, finding the minimum tells us which configuration of p-bits produced it, and that configuration is the shortest path. The problem becomes a search for a single minimum value rather than a search over paths that must be re-evaluated with every change in direction.
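Real p-bit hardware does that fluctuation physically. In software, simulated annealing captures the same spirit: let the route fluctuate randomly, accept worse routes with a shrinking probability, and watch the system settle into a low-energy configuration. This sketch reuses the invented travel times from above and is only an analogy for the hardware, not a model of it:

```python
import math
import random

# The same hypothetical travel-time matrix as in the previous example.
times = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def route_cost(route):
    return sum(times[a][b] for a, b in zip(route, route[1:]))

def anneal(steps=20_000, t_start=5.0, t_end=0.01):
    """Let the route fluctuate probabilistically and settle into the
    lowest-cost ('ground state') configuration as the temperature drops."""
    n = len(times)
    route = [0] + random.sample(range(1, n), n - 1)
    cost = route_cost(route)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)
        # Propose a new route by reversing a random segment.
        i, j = sorted(random.sample(range(1, n), 2))
        candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
        delta = route_cost(candidate) - cost
        # Always accept improvements; accept worse routes with a
        # probability that shrinks as the system cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            route, cost = candidate, cost + delta
    return route, cost

print(anneal())
```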
Given all of this, it made me wonder why we are not focusing more effort on p-bits, which appear to be more energy efficient, less prone to error, and able to operate at room temperature, instead of on q-bits, which require massive amounts of energy to maintain their quantum state and are error-prone. The answer lies somewhere unexpected: the design of our current classic computers. About 30 years ago there was a design shift in which processors began to operate on “words” instead of bits. A word is a group of bits treated as a single object; modern processors have a 64-bit word length. A single word can easily track the probability of one p-bit, but because the processor operates on the word as a whole, manipulating individual bits within a word is awkward and inefficient, and so are the operations needed to update p-bits quickly. Bringing real efficiency to p-bits requires a major rework of processor technology, and sadly the research funds are all going toward the hype of quantum computing.
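Here is a small illustration of that awkwardness: even touching a single bit means building a mask and operating on the entire 64-bit word. (Python integers are unbounded, so the 64-bit word is emulated with a mask.)

```python
WORD_BITS = 64
MASK = (1 << WORD_BITS) - 1  # emulate a fixed 64-bit word in Python

def set_bit(word: int, i: int) -> int:
    """Setting one bit still touches the whole word: build a mask, OR it in."""
    return (word | (1 << i)) & MASK

def clear_bit(word: int, i: int) -> int:
    return word & ~(1 << i) & MASK

def read_bit(word: int, i: int) -> int:
    """Reading one bit means shifting the full word, then masking."""
    return (word >> i) & 1

word = 0
word = set_bit(word, 42)
print(read_bit(word, 42), read_bit(word, 7))  # 1 0
```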
Ultimately it comes down to research funding. Everyone is excited about quantum computing and the q-bit, leaving the lesser-known p-bit in the background; unfortunately, in scientific research the better technology sometimes gets less funding because it is less challenging and less exciting. Even so, I expect the p-bit to overtake the q-bit in the coming years as researchers realize that the energy requirements of quantum computing are unsustainable. Until next week, stay safe and learn something new.
For more information on p-bits and the research at the University of Kentucky, please visit their site at https://aggregate.org/EXHIBITS/SC25/
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to shamilton@techshepherd.org or through his website at https://www.techshepherd.org.
