09/01/2021
The Problem
Many financial models, such as those for asset allocation, electronic trading, and pricing, are derived from historical data. Such models can easily be incorrectly trained or biased, so it is important to backtest them to understand their expected performance and underlying risk. Backtesting is commonly done with out-of-sample data: for example, if data from 2010-2018 is used to develop a model, data from 2019-2020 can be used to backtest it. This method alone is insufficient, as out-of-sample data is often limited and does not cover all market scenarios. Synthetic data, which is machine generated but shares the statistical characteristics of its historical counterparts, can complement traditional backtesting. It enables backtesting against many unseen events, reducing the likelihood of overfitting and illuminating potential model failures.
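The in-sample/out-of-sample split and the synthetic complement can be sketched as follows. This is a purely illustrative toy, not the project's actual pipeline: the returns are simulated Gaussians, the "model" is just a volatility estimate, and the synthetic generator is a placeholder standing in for a trained generative model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns covering 2010-2020 (~252 trading days per year)
returns = rng.normal(0.0003, 0.01, size=11 * 252)

# In-sample (2010-2018) is used to fit the model;
# out-of-sample (2019-2020) is held back for backtesting
in_sample, out_of_sample = returns[:9 * 252], returns[9 * 252:]

# Toy "model": a volatility estimate fit on in-sample data only
vol_hat = in_sample.std()

# Traditional backtest: evaluate on the limited out-of-sample window
oos_vol = out_of_sample.std()

# Complement with synthetic scenarios covering events the history never saw
# (placeholder generator; in practice these come from a trained generative model)
synthetic = rng.normal(0.0003, 0.015, size=(1000, 252))
synthetic_vols = synthetic.std(axis=1)  # model risk across 1000 synthetic years
```

The point of the last step is breadth: a single two-year out-of-sample window is one draw from history, while the synthetic scenarios probe the model across many plausible market paths.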
The accuracy of such a method, however, depends on the quality of the synthetic data, so developing better ways to generate it is a worthwhile effort. One approach is to use generative machine learning; we thought it would be interesting to see whether quantum machine learning could do a better job.
The Challenge
Research has shown the potential for quantum computers to enhance machine learning. Many of these methods require loading classical data into quantum states or rely on variational quantum circuits; such approaches are often not scalable, or are heuristic and prone to errors.
What We Learned
Quantum generative models outperformed classical generative models on certain datasets, producing higher-quality synthetic data faster. They have the potential to be the first class of quantum machine learning models to be deployed for real business use.
Even small quantum computers can be used to demonstrate advantage over classical models on a real dataset.
Team & Tech
FCAT worked with IonQ to create a Proof of Concept that pushes the boundaries of quantum machine learning. IonQ’s cloud-based quantum computer allows FCAT and IonQ researchers to jointly develop new quantum machine learning models and deploy them on quantum hardware. This allows FCAT to evaluate the performance, stability, and scalability of these new models. IonQ’s cloud-based quantum computer utilizes trapped atomic ions of Ytterbium as qubits. The current quantum hardware has 11 qubits.
Next Steps
FCAT will continue to monitor advances in various quantum hardware and explore quantum algorithms for machine learning, optimization, and other business applications so that Fidelity is prepared to use the technology and realize its potential.
The Deep Dive: Quantum Generative Models
FCAT researchers discovered a correspondence between quantum entanglement and the dependence structure within data, and used this insight to develop new quantum generative adversarial network (QGAN) and quantum circuit Born machine (QCBM) models. These models are hybrid quantum-classical algorithms that can tolerate the noise of NISQ (Noisy Intermediate-Scale Quantum) devices and efficiently learn the dependence structure within data. Once trained, they can generate high-quality synthetic data. For further details on this project, read the full paper here.
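To make the QCBM idea concrete, here is a minimal sketch, not the paper's model: a two-qubit parameterized circuit (RY rotations plus an entangling CNOT) whose measurement probabilities, given by the Born rule, are trained to match a hypothetical target distribution of two correlated bits. For simplicity the circuit is simulated exactly with NumPy and trained by finite-difference gradient descent on a KL divergence; a real hybrid quantum-classical loop would instead estimate the loss from measurement samples drawn on hardware.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate (real-valued)."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit_probs(theta):
    """Born probabilities of a 2-qubit circuit: RY layer, CNOT, RY layer."""
    psi = np.array([1.0, 0.0, 0.0, 0.0])            # start in |00>
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi  # first rotation layer
    psi = CNOT @ psi                                 # entangling gate
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi  # second rotation layer
    return psi ** 2                                  # amplitudes are real here

def kl(p, q):
    """KL divergence of target p from model q (epsilon added for stability)."""
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

# Hypothetical target: two strongly correlated bits, a simple dependence
# structure that requires entanglement to capture exactly
target = np.array([0.4, 0.1, 0.1, 0.4])

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=4)
lr, eps = 0.1, 1e-4
losses = []
for step in range(600):
    losses.append(kl(target, circuit_probs(theta)))
    # Finite-difference gradient; hardware QCBMs estimate this from samples
    grad = np.zeros(4)
    for i in range(4):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        grad[i] = (kl(target, circuit_probs(tp)) - kl(target, circuit_probs(tm))) / (2 * eps)
    theta -= lr * grad

learned = circuit_probs(theta)  # model distribution after training
```

The entangling CNOT is what lets the circuit put probability mass jointly on 00 and 11, mirroring the correspondence the section describes between entanglement and the dependence structure in the data; a product circuit without it could only represent independent bits.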