How quantum computing can aid machine learning

Ever wondered why seating 10 people round a table at a dinner party is such a headache? In Part 3 of a series of articles by theoretical physicist Joerg Esser, you are about to find out.

Machine learning is about finding the best solution from among an enormous number of different combinatorial options. Quantum computing helps as it checks multiple options simultaneously and facilitates statistical sampling.

In the travel context, machine learning is about taking pieces of information (such as browsing behaviour, content preferences, length of stay and so on), and combining those in all manner of ways to decide one of two things:

  1. Which combination matches known patterns best (in which case the machine recognises a leisure sun seeker, for example)
  2. Whether there are patterns that are persistent, but so different from known patterns that it is worth introducing a new one (the machine learns to tell a sun seeker from an adrenaline junkie, or a business traveller)

See Part 1 - Quantum leap into a new kind of machine learning

The concept of ‘neural networks’ is commonly used to describe this form of learning. Different types and pieces of information are represented by layers and nodes within the neural network. Patterns emerge when certain combinations of nodes are connected more strongly than others. Learning resembles the problem of identifying the most dominant patterns out of all possible combinations of nodes.
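
As a loose illustration of that idea (the numbers and pattern names below are invented, not taken from any real system), one layer of such a network boils down to a few weighted sums:

    import numpy as np

    # Invented input nodes: browsing intensity, beach-content preference, length of stay.
    traveller = np.array([0.8, 0.9, 0.3])

    # Invented connection strengths from the input nodes to two "pattern" nodes.
    weights = np.array([
        [0.2, 0.9, 0.7],   # connections feeding the 'sun seeker' pattern
        [0.8, 0.1, 0.2],   # connections feeding the 'business traveller' pattern
    ])

    # Each pattern node's activity is a weighted sum of its inputs, squashed to 0..1.
    activity = 1 / (1 + np.exp(-(weights @ traveller)))

    for label, value in zip(["sun seeker", "business traveller"], activity):
        print(f"{label}: {value:.2f}")

Learning, in this picture, means adjusting those connection strengths until the dominant patterns in the data stand out.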

Mathematically, these sorts of combinatorial problems can be translated into searching for the lowest point in an unknown, hilly landscape, where the number of hills and valleys scales with the number of possible combinations in the system. The only way to solve this is through trial and error:

  • Check all spots in the landscape for the nearest low points.
  • Compare all regional low points in the process to identify the lowest point globally.

More elegant solutions only apply in special cases, where the landscape is characterised by particular regularities and symmetries. In all other cases, checking every possible combination is the only way to find a solution.
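
A rough sketch of that trial-and-error search, with a made-up landscape of random heights standing in for the real combinations (purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up landscape: the "height" at each of one million spots.
    # In a real problem, every spot would be one possible combination.
    landscape = rng.normal(size=1_000_000)

    # No shortcut available: check every spot and keep the lowest one seen so far.
    best_spot, best_height = None, float("inf")
    for spot, height in enumerate(landscape):
        if height < best_height:
            best_spot, best_height = spot, height

    print(f"Lowest point: spot {best_spot}, height {best_height:.3f}")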

Overwhelmingly complex

The challenge with these sorts of combinatorial problems is that they quickly become overwhelmingly complex. Imagine, for instance, the challenge of seating ten people at a dining table. There are about 3.6 million (10 factorial) possible arrangements. If you add in the different benefits of each pair of people sitting next to each other, you end up with an extremely complex landscape. No wonder seating arrangements for dinner parties are often such a headache.
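
To make the scale concrete, here is a hedged sketch of the brute-force approach to the dinner party: enumerate every seating order and score it by how well neighbouring pairs get along (the ‘benefit’ numbers are invented for illustration):

    import itertools
    import random

    random.seed(1)
    guests = list(range(10))  # ten dinner guests, labelled 0..9

    # Invented "benefit" for each pair of guests sitting next to each other.
    benefit = {frozenset(pair): random.random()
               for pair in itertools.combinations(guests, 2)}

    def score(order):
        # Total benefit of all neighbouring pairs around the (circular) table.
        return sum(benefit[frozenset((order[i], order[(i + 1) % len(order)]))]
                   for i in range(len(order)))

    # Brute force: walk through all 10! = 3,628,800 seating orders, one by one.
    # Even this toy version is noticeably slow in plain Python.
    best = max(itertools.permutations(guests), key=score)
    print("Best seating order:", best, "total benefit:", round(score(best), 2))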

Solving combinatorial problems with classical binary computing requires checking all the possible combinations sequentially, one by one. Statistical approaches have been developed that make it possible to identify ‘sufficiently good’ solutions rather than optimum solutions, a process known as ‘sampling’. However, sampling only pushes the boundary of what is feasible a little further. The underlying challenge remains: the complexity of practical problems grows too quickly with system size – for example, with the number of guests at our dinner party.
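
One common sampling technique of this kind is simulated annealing (shown here as an illustrative sketch, not as the method the series describes): propose small random changes to the seating order and occasionally accept worse ones early on, so the search can escape local valleys. It reuses the invented guests, benefit scores and score function from the sketch above:

    import math
    import random

    def anneal(guests, score, steps=20_000, start_temp=1.0):
        order = guests[:]
        random.shuffle(order)                 # start from a random seating order
        current_score = score(order)
        best, best_score = order[:], current_score

        for step in range(steps):
            temp = start_temp * (1 - step / steps) + 1e-6   # cool down gradually
            i, j = random.sample(range(len(order)), 2)      # propose swapping two guests
            order[i], order[j] = order[j], order[i]
            new_score = score(order)
            delta = new_score - current_score

            # Keep improvements; sometimes keep worse orders while it is still "hot",
            # which helps the search climb out of local valleys in the landscape.
            if delta >= 0 or random.random() < math.exp(delta / temp):
                current_score = new_score
                if new_score > best_score:
                    best, best_score = order[:], new_score
            else:
                order[i], order[j] = order[j], order[i]     # undo the swap

        return best, best_score

    good_order, good_score = anneal(guests, score)
    print("Good-enough seating order:", good_order, "total benefit:", round(good_score, 2))

A run like this checks only a tiny fraction of the 3.6 million arrangements, yet usually comes close to the exhaustive answer – which is exactly what ‘sufficiently good’ means here.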

The algorithms used in machine learning have not changed fundamentally over the past few decades. Progress in the field has primarily come from applying more and more brute force as computing power has increased. Quantum computing, on the other hand, goes far beyond Moore's law of increasing computing power: it represents a fundamentally different way of computing.

Quantum computing represents a fundamentally different way of computing

Machine learning lends itself particularly well to quantum computing. The fact that qubits (explained in Part 2) can be in multiple states at the same time makes new algorithms possible that check multiple options in combinatorial problems simultaneously. In addition, the statistical nature of qubits makes probabilistic sampling easier.
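
As a purely classical illustration of that statistical behaviour (a back-of-the-envelope simulation, not real quantum hardware): a single qubit can be described by two amplitudes, and reading it out yields 0 or 1 with probabilities given by the squared amplitudes. The numbers below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(42)

    # A qubit "in both states at once": amplitudes for the outcomes 0 and 1.
    # The squared amplitudes (0.36 and 0.64) add up to 1.
    amplitudes = np.array([0.6, 0.8])
    probabilities = amplitudes ** 2

    # Measuring the qubit many times produces a statistical sample of outcomes.
    measurements = rng.choice([0, 1], size=10_000, p=probabilities)
    print("Fraction of 1s observed:", measurements.mean())  # should be close to 0.64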

Quantum computing is expected to be particularly suitable for modelling chemical reactions and materials, combinatorial optimisation, and sampling probability distributions. Besides its benefits for machine learning, it will probably advance the development of pharmaceuticals, operations management in logistics, and transportation (‘traveling salesman’-type problems are among the hardest out there), as well as the statistical analysis of Big Data.

Quantum computing: still a long way off?

Quantum computing's performance on single problems should not be taken as an indicator of its maturity. While it vastly simplifies algorithms for some of the hardest problems, it has fewer advantages for linear and binary algorithms such as basic arithmetic – multiplying two single-digit numbers is currently as far as it goes. This shows that the general application of quantum computing is still a long way off. However, it says nothing about the progress that has already been made on special applications.

The general application of quantum computing is still a long way off, but…progress has already been made on special applications

Here, we need to take a more nuanced approach. Hybrid computing, in which quantum computing is used in combination with classical computing to tackle particularly hard problems, is expected to be just a few years off.

Google, for example, has in recent months repeatedly claimed that it is nearing the finish line in the race towards ‘quantum supremacy’, whereby a concrete problem that takes too long (i.e. more than a hundred years) for classical computing to solve can be solved easily (i.e. within minutes) by quantum computing. 

Catch up here on Part 1 and Part 2. Or join us in San Francisco, where Joerg Esser, a former group director at Thomas Cook and now a consultant to Roland Berger, will be moderating the mobile and technology tracks.

 
