Last week we discussed machine learning, and I promised to talk about ways to improve machine learning techniques. The latest of these is the application of quantum computing to machine learning algorithms. I know what you are thinking: “This article is about new technologies.” But you would be wrong.
Machine learning was conceived before the modern computer. It was introduced in 1949 by Donald Hebb and based on interactions within the human brain. Hebb wrote, “When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.” Translating Hebb’s concept to machines, we get a weighting mechanism between artificial neurons that strengthens or weakens the links between them.
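In code, Hebb’s idea reduces to a simple update rule: when two connected neurons are active together, strengthen the link between them. Here is a minimal sketch in Python (the function name, learning rate, and activity values are illustrative, not from Hebb):

```python
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Hebb's rule: strengthen a connection when the pre- and
    post-synaptic neurons are active at the same time."""
    return weight + learning_rate * pre_activity * post_activity

# Two neurons that repeatedly fire together: the link grows stronger.
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(w)  # 0.5 -- five co-activations, each adding 0.1
```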

In neural networks today, we base the learning outcome on a scoring mechanism where the score determines the decision. For example, in a self-driving car a high score might mean, “hit the brakes, there is a child in the road,” and a low score might mean, “keep moving, everything is fine.” We base this scoring on a series of artificial neurons that add together all the weighted inputs of the system to arrive at a score.
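To make the scoring concrete, here is a minimal sketch of a single artificial neuron (the sensor values, weights, and threshold are invented for illustration):

```python
def neuron_score(inputs, weights):
    """An artificial neuron: add together each input multiplied
    by the weight of its incoming link."""
    return sum(x * w for x, w in zip(inputs, weights))

# Hypothetical sensor readings and learned link weights
inputs = [0.9, 0.1, 0.7]   # e.g. camera, radar, lidar signals
weights = [0.8, 0.3, 0.6]  # learned importance of each input
score = neuron_score(inputs, weights)
print("hit the brakes" if score > 1.0 else "keep moving")
```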

The neural network is trained by adjusting the weights of the links between the neurons: a known set of training data is fed in, and the weights are adjusted until the network outputs the expected score for every item in the training set. Training requires adjusting every neural link many times and reprocessing the entire training set after each change; as a result, training can take weeks. This is where quantum computing comes into play.
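As a rough sketch of that training loop, here is a simple perceptron-style weight update (one common adjustment rule; the training data and learning rate are made up for the example):

```python
def train(training_set, weights, learning_rate=0.1, epochs=100):
    """Adjust the link weights until the network produces the
    expected score for each training item. Every epoch reprocesses
    the whole training set -- this repetition is why training is slow."""
    for _ in range(epochs):
        for inputs, expected in training_set:
            score = sum(x * w for x, w in zip(inputs, weights))
            error = expected - score
            # Nudge each weight in the direction that shrinks the error.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Hypothetical training set: (inputs, expected score) pairs
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
print(train(data, weights=[0.0, 0.0]))  # first weight approaches 1.0
```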

Quantum computing is also not so new. It was first conceptualized in the early 1980s by Paul Benioff, and very shortly afterward Richard Feynman and Yuri Manin suggested that a quantum mechanical computer like the one Benioff proposed could perform calculations that are out of reach for classical computers. That power comes from a quantum computer’s ability to examine the results of multiple input scenarios simultaneously.

Quantum computers can determine the weighting of the links between the artificial neurons in a fraction of the time because of their ability to test all weights simultaneously, or all inputs simultaneously, depending on the exact learning method employed. The first method uses the quantum bits (qubits) to represent the neuron weights, letting you step through the input data in sequence while every candidate weight setting is evaluated at once, then pick the weights that best fit the training data. The second method represents the data with the qubits and tests all the possible weight settings in sequence. Which method is faster depends on the sizes of the neural network and the training data: if your quantum system is large enough, you want to represent the larger of the two with qubits.
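The classical cost that quantum computing attacks is easy to see in code. This sketch brute-forces every binary weight assignment against every training item, which is exactly the nested loop a quantum computer could, in principle, collapse by evaluating one of the two loops in superposition (the binary weights and the error count are simplifying assumptions):

```python
from itertools import product

def classical_search(training_set, n_links):
    """Test every binary weight assignment against every training
    item: 2**n_links * len(training_set) tests in total."""
    best_weights, best_errors = None, float("inf")
    for weights in product([0, 1], repeat=n_links):  # all 2**n settings
        errors = sum(
            sum(x * w for x, w in zip(inputs, weights)) != expected
            for inputs, expected in training_set      # every training item
        )
        if errors < best_errors:
            best_weights, best_errors = weights, errors
    return best_weights

data = [([1, 0, 1, 0], 1), ([0, 1, 0, 1], 0)]
print(classical_search(data, n_links=4))  # (0, 0, 1, 0) fits both items
```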
To make this a little easier to understand, consider a very simple test case: four neurons and a training data set with 32 values. With binary weights, the four links have 2^4 = 16 possible weight settings, and each setting must be checked against each of the 32 input values, for a total of 512 tests using classic machine learning. Because qubits can take on all possible values at once, if we represent the four weights with four qubits we can evaluate every weight setting simultaneously and only perform 32 tests. If we instead represent the training data with the qubits, we can run all the inputs at once and only need 16 tests. Bring these values up to current training model sizes and you begin to see the power: a modern quantum computer with around 52 qubits can test 2^52 weight settings simultaneously, cutting the number of training tests for a network with 52 weighted links by a factor of roughly 4.5 quadrillion.
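The arithmetic of that four-neuron example works out as follows (assuming binary weights, as above):

```python
n_links, n_inputs = 4, 32

weight_settings = 2 ** n_links     # 16 possible weight assignments
print(weight_settings * n_inputs)  # 512 tests, classically

# Qubits representing the 4 weights: all 16 settings in superposition,
# so we only step through the 32 inputs.
print(n_inputs)                    # 32 tests

# Qubits representing the 32 inputs: all inputs in superposition,
# so we only step through the 16 weight settings.
print(weight_settings)             # 16 tests
```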
 
 