What Will Quantum HPC Look Like?

Rosemary Francis

Quantum computers are moving from the world of science fiction into science fact. Most machines only have a few qubits, but as they expand, the computational complexity of preparing workloads for quantum computers is going to explode. Here at Altair, we are preparing for that future.

HPC and quantum computing are entangled for the foreseeable future, if you will pardon the pun. Some tasks we currently perform on HPC will be replaced by much faster quantum algorithms. For example, quantum computers are well suited to modelling physical systems, which has applications not just in physics but also in bioinformatics as our medical and biological understanding probes down to the molecular level.

But quantum computers also need HPC. Before you can run a workload on a quantum computer, you need to calculate its inputs. Each computational unit is called a qubit. Modern quantum computers have just a handful, but computational capacity grows exponentially with each additional qubit, so you don't need many to do real-life calculations. Calculating the inputs for just a few qubits is fast and can be done on a desktop machine, but as the number of qubits increases, the computational complexity of preparing those inputs grows rapidly too. This is when it will become necessary to pair every quantum computer with more traditional HPC.
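To make that scaling concrete, here is a rough sketch of my own rather than a description of any particular workload: if inputs are prepared or checked using a full state-vector representation, an n-qubit state needs 2^n complex amplitudes, so the memory alone doubles with every extra qubit.

```python
# A rough illustration, assuming inputs are prepared via a full state-vector
# representation: an n-qubit state has 2**n complex amplitudes, so memory
# (and work) roughly doubles with every extra qubit.
for n_qubits in (20, 30, 40, 50):
    n_amplitudes = 2 ** n_qubits
    gib = n_amplitudes * 16 / 2 ** 30   # 16 bytes per double-precision complex number
    print(f"{n_qubits:2d} qubits -> {n_amplitudes:>20,d} amplitudes, ~{gib:,.3f} GiB")
```

A few qubits fit comfortably on a laptop; by around 50 qubits, merely holding the state vector is a job for a large cluster.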

Scheduling quantum workloads

So how do you manage a quantum computer, and what will be demanded of the workload managers of the future? At a high level, a quantum computer is no more mysterious than any other non-x86 hardware accelerator. Altair workload managers can all schedule workloads with custom resources. Initially, that mostly meant GPUs, but it can also mean hardware for AI inference and training, FPGAs, and other exotic hardware accelerators such as NVIDIA's new multi-instance GPUs (MIGs), which let you run multiple parallel workloads on a single GPU.
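As a sketch of what requesting such a resource might look like from a driver script, assuming an administrator has already defined a custom consumable resource (I have called it qpu here, an illustrative name rather than anything that ships with the product, and the script name is a placeholder):

```python
import subprocess

# Submit a job that requests one unit of a hypothetical admin-defined custom
# resource called "qpu" alongside ordinary CPUs.
cmd = [
    "qsub",
    "-N", "quantum_stage",            # job name
    "-l", "select=1:ncpus=4:qpu=1",   # one chunk: 4 CPUs plus 1 quantum device
    "run_quantum_stage.sh",           # placeholder job script
]
job_id = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
print("Submitted", job_id)
```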

Quantum computers have three main requirements of the workload manager:

  1. To run a dependent workload so that the inputs to the quantum calculation can be pre-computed
  2. To saturate the quantum compute, applying all the usual fair share policies
  3. To run error-correction calculations on results in time for them to be used

At a high level, we have a quantum-HPC sandwich: classical pre-computation, the quantum run, then classical post-processing. The problem of correcting quantum errors is something I will go into elsewhere, but the saturation requirement is an interesting challenge. Ensuring that the input-calculation workload completes by the time the quantum part of the job is due to run requires some service-level agreement (SLA) scheduling. Quantum computers are so expensive to build and run that it is imperative they operate at maximum capacity. With many organisations contributing to the cost of a machine, fair share is also likely to be highly valued.
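Here is a minimal sketch of that sandwich as a submission script, assuming PBS Professional's qsub is on the path. The job names, scripts, chunk sizes, and the qpu resource are again illustrative assumptions, but chaining the stages with afterok dependencies is the standard mechanism:

```python
import subprocess

def qsub(args):
    """Submit a job via qsub and return the job ID it prints."""
    out = subprocess.run(["qsub", *args], capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Stage 1: classical pre-computation of the quantum circuit's inputs.
prep = qsub(["-N", "prep_inputs", "-l", "select=1:ncpus=32", "prep_inputs.sh"])

# Stage 2: the quantum stage, held until the inputs are ready.
# "qpu=1" is a hypothetical custom resource representing the quantum device.
quantum = qsub(["-N", "quantum_run", "-l", "select=1:qpu=1",
                "-W", f"depend=afterok:{prep}", "quantum_run.sh"])

# Stage 3: classical error correction / post-processing of the raw results.
postproc = qsub(["-N", "error_correct", "-l", "select=1:ncpus=64",
                 "-W", f"depend=afterok:{quantum}", "error_correct.sh"])

print("Sandwich submitted:", prep, quantum, postproc)
```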

With SLA requirements like these, it seems likely that centres will turn to a public cloud or cloud-bursting model. With the right cloud management technology in place, it is possible to spin up additional resources if input-calculation workloads in the queue are not going to complete in time for the quantum part of the job. Equally, if researchers are waiting for results to be error-corrected but the HPC resources are under heavy load, it may be advantageous to burst to the cloud to accelerate time to results.

This is where the right cloud tools come in. It should be possible to define your high-level business goals as well as your budget and have cloud-bursting policies tuned to stay within those constraints. For example, the cost of unused time on a quantum computer may be much higher than the cost of delays to the error-corrected results for a particular workload. In that case, you might want to configure an almost unlimited cloud budget for input calculation, to ensure that workloads are always ready for the quantum computer, but a more constrained cloud budget for error correction. The ability to configure scenarios like this is a key part of Altair® Control™ and should be part of every cloud strategy.
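To illustrate the kind of policy I mean, here is a deliberately simplified sketch of the decision logic, not the Altair Control interface; the budgets, slack values, and field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class BurstPolicy:
    stage: str               # e.g. "input_calculation" or "error_correction"
    max_hourly_spend: float  # cloud budget ceiling for this stage
    deadline_slack_s: int    # how much lateness we tolerate before bursting

def should_burst(policy: BurstPolicy, est_finish_s: int, deadline_s: int,
                 current_spend: float, extra_node_cost: float) -> bool:
    """Burst only if the stage is predicted to miss its deadline by more than
    the allowed slack and the extra node still fits under the budget ceiling."""
    misses_deadline = est_finish_s > deadline_s + policy.deadline_slack_s
    within_budget = current_spend + extra_node_cost <= policy.max_hourly_spend
    return misses_deadline and within_budget

# Near-unlimited spend to keep the quantum machine fed; a tighter cap for
# error correction, where some delay to results is acceptable.
input_calc = BurstPolicy("input_calculation", max_hourly_spend=1e9, deadline_slack_s=0)
error_corr = BurstPolicy("error_correction", max_hourly_spend=200.0, deadline_slack_s=3600)

print(should_burst(input_calc, est_finish_s=7200, deadline_s=3600,
                   current_spend=50.0, extra_node_cost=4.0))    # True: burst now
print(should_burst(error_corr, est_finish_s=9000, deadline_s=3600,
                   current_spend=199.0, extra_node_cost=4.0))   # False: over budget
```

The point is that the two classical stages of the sandwich justify very different budget ceilings, and the bursting decision should be driven by both the deadline and the remaining budget.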

How far off is the future?

I remain sceptical, but many suggest that we could have working quantum computers as early as 2030. Will they change the world by then? It seems unthinkable that by the time my daughters are in high school we may no longer be able to rely on RSA encryption for secure communication, online payments, or banking. But then the concept of watching TV on a mobile phone seemed like a fairly pointless idea when I was at university, and right now my eldest child is searching for killer sharks on YouTube on my Android device. I look forward to seeing what the future may hold.

About the author

Dr Rosemary Francis founded Ellexus, the I/O profiling company, in 2010, and Ellexus was acquired by Altair in 2020. Rosemary obtained her PhD in computer architecture from the University of Cambridge and worked in the semiconductor industry before founding Ellexus. She is now chief scientist for HPC at Altair, responsible for the future roadmap of workload managers Altair® PBS Professional® and Altair® Grid Engine®. She also continues to manage I/O profiling tools and is shaping analytics and reporting solutions across Altair’s HPC portfolio. Rosemary is a member of the Raspberry Pi Foundation, an educational charity that promotes access to technology education and digital making. She has two small children and is a keen gardener and windsurfer.