
Look Before you Quantum Leap

Niall Kennedy, COO

YellowDog’s Niall Kennedy takes a closer look at CaixaBank’s recent achievements in the field of quantum computing for risk analysis.

He discusses how some of the future benefits noted in Caixa’s tests are accessible today through a hybrid and multi-cloud platform.


Hats off and kudos to Caixa!

Perusing financial technology news recently, my attention was drawn to an article that really intrigued me in FinExtra.

The headline stated that “CaixaBank tests quantum computing for risk analysis”. Wow! How cool is that?! The execution time of the analyses the bank was testing was reduced from a benchmark of several days to a few minutes – a massive improvement, achieved by implementing a quantum algorithm that drastically reduces the number of simulations ordinarily required in risk analyses. Think of the day-to-day, practical applications: being able to run complete analyses in minutes; to re-run them if market conditions change dramatically, or with more parameters to get more insightful outcomes; or simply to re-run analyses when there is an input or model error. Hats off and kudos to Caixa!

Why the fuss over quantum computing?

A quantum computer uses quantum mechanics to perform certain kinds of computation more efficiently. Current computers use electrical circuits to represent bits, but quantum computers run on quantum bits, or ‘qubits’, typically built from delicate quantum systems (such as superconducting circuits or trapped ions) often kept at extremely low temperatures. Unlike a classical bit, which is either 1 or 0, a qubit can exist in a superposition of both 0 and 1 at the same time.
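To make superposition a little more concrete, here is a toy statevector sketch in Python – my own illustration, not anything from CaixaBank’s work. A qubit is modelled as a pair of amplitudes, and a Hadamard gate turns a definite 0 into an equal superposition of 0 and 1.

```python
import math

# A classical bit is 0 or 1. A qubit is a pair of amplitudes (a, b) with
# |a|^2 + |b|^2 = 1, giving the probabilities of measuring 0 or 1.
def hadamard(state):
    """Apply a Hadamard gate, which puts a basis state into equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Probability of measuring 0 and of measuring 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1.0, 0.0)        # start in the |0> state, like a classical 0
qubit = hadamard(qubit)   # now in superposition of |0> and |1>
p0, p1 = probabilities(qubit)
# p0 and p1 are each 0.5: the qubit is "both 0 and 1" until measured
```

The interesting part, of course, is that a real quantum computer manipulates many such amplitudes at once, which is where the speed-ups come from.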

A note of caution

Fascinating as the results produced in the test are, ubiquitous quantum computing and the commercial reality of running it are some way off. Caixa sounded a note of caution to this effect further on in the article, and I would echo it. Quantum computing will ultimately allow us all to reap huge benefits. But what about today?

Quantum computing: today

To state the obvious, demands on Risk Analysis are huge today and a likely driver for Caixa’s forward thinking. A weight of regulation has been and is being enacted in an attempt to ensure market and institutional stability – the Fundamental Review of the Trading Book (FRTB), Large Exposure Framework (LEF) and Single Counterparty Credit Limit (SCCL), to name a few. In addition, market stress heightens the requirement for timely risk analyses, and we’ve had quite a number of “stresses” in recent times.

To me, time savings were the big thematic outcome of Caixa’s test – I’m sure the level of savings, and the possibilities they open up, would be the envy of any institution. However, few institutions have access to quantum compute to take advantage of those possibilities; the skills to deploy and operate the software, and the ability to write new or re-write existing code, are big barriers to entry. Institutions that do enter must acquire and maintain a sizeable, proprietary and expensive tech estate, alongside a strong skill base.

Is there another way?

Though dramatic, the time savings seen by Caixa are not achievable without major investment in both tin and code. However, clever use of hybrid and multi-cloud resource can have a huge impact on both the timeliness and robustness of risk analyses today. In particular, combining compute available on-premise and in private and public clouds means the collective compute power can make risk analyses run far quicker than they do now. Sure, you need to ensure risk-analysis applications, and the processes within them, can run in an “embarrassingly parallel” fashion, but given the nature of the beast, this should already be the case (for example, applications built on Random Forest algorithms, Monte Carlo simulations and the like).
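The embarrassingly parallel point is easy to see in a Monte Carlo setting. Here is a hypothetical sketch – a toy value-at-risk-style estimate, not CaixaBank’s actual model – where independent simulation batches are fanned out in parallel; in a real deployment each batch would run as a task on an on-premise or cloud worker, but threads keep the example self-contained.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_batch(args):
    """Run one independent batch of simulated daily portfolio returns."""
    seed, n_paths = args
    rng = random.Random(seed)  # a per-batch seed keeps batches independent
    # Toy model: daily returns drawn from a normal distribution.
    return [rng.gauss(0.0005, 0.02) for _ in range(n_paths)]

def parallel_var(n_batches=8, n_paths=10_000, quantile=0.01):
    """Estimate the 1% value-at-risk by fanning independent batches out in parallel."""
    with ThreadPoolExecutor() as pool:
        batches = pool.map(simulate_batch,
                           [(seed, n_paths) for seed in range(n_batches)])
    returns = sorted(r for batch in batches for r in batch)
    # VaR is the loss at the chosen quantile of the return distribution.
    return -returns[int(len(returns) * quantile)]

print(f"Estimated 1% VaR: {parallel_var():.4f}")
```

Because no batch depends on any other, adding more workers – wherever they happen to live – cuts the wall-clock time almost linearly.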


So, what do you need to achieve this? In short: a hybrid and multi-cloud platform which manages the sequencing of processing tasks, allows for inter-dependencies between tasks, provides quick and efficient data transfer between “base” and processing instances, and provisions compute (both on-premise and multi-cloud) – ensuring that sufficient compute is requested, and handling compute resource “drop-outs” and failures automatically.
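To illustrate what “sequencing with inter-dependencies and automatic failure handling” means, here is a deliberately simplified, hypothetical sketch (not the YellowDog platform’s actual API): tasks are run in dependency order, and a failed task is retried, standing in for the automatic handling of worker drop-outs.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, dependencies, max_retries=2):
    """Run named tasks in dependency order, retrying failures.

    tasks: name -> callable taking the dict of earlier results
    dependencies: name -> set of prerequisite task names
    """
    results = {}
    # static_order() yields a valid execution order for the dependency graph.
    for name in TopologicalSorter(dependencies).static_order():
        for attempt in range(max_retries + 1):
            try:
                results[name] = tasks[name](results)
                break  # task succeeded; move on
            except RuntimeError:
                if attempt == max_retries:
                    raise  # out of retries; surface the failure
    return results

# Toy risk pipeline: load market data, run the model, then report.
tasks = {
    "load": lambda r: [0.01, -0.02, 0.015],
    "model": lambda r: sum(r["load"]) / len(r["load"]),
    "report": lambda r: f"mean return {r['model']:.4f}",
}
dependencies = {"load": set(), "model": {"load"}, "report": {"model"}}
results = run_pipeline(tasks, dependencies)
```

A real platform does far more than this, of course – distributing tasks across machines, moving data, and provisioning the compute itself – but the dependency-ordered execution and retry loop are the skeleton of it.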

Sound like a tall order?

It can be. There are myriad nuances that need to be worked out. Ultimately, the management platform should abstract away the complexities of dealing with cloud and on-premise resource and provide consistency of interaction, thereby allowing institutions to concentrate on building and executing risk models – without constraining model complexity or frequency of execution due to compute limitations. The practical applications I referred to at the beginning of this blog are then within reach, without further investment in tin and code.

I believe that YellowDog provides this platform, making some of the hard to reach benefits of tomorrow, available today. Look before you quantum leap.
