There are several methods for assessing cybersecurity risks: not analysing the risks at all, analysing them qualitatively, or quantifying them (Cyber Risk Quantification, or CRQ).
“In companies, cybersecurity decisions – where they exist – rely on essentially qualitative analyses. These analyses are influenced by cognitive biases, mental shortcuts that help us take quick decisions in situations that we recognise or believe we have already encountered. In most cases, these intuitions, experiences and expertise can be helpful. However, in truly complex and strategic situations, they can make for poor advisors,” says Christophe Forêt, co-founder and CEO of C-Risk.
To cope with this complexity, there are models that facilitate the financial quantification of cybersecurity risks, with varying degrees of accuracy and ease. “These models use mathematical methods to weigh the pros and cons and to seek out contradictory opinions. The aim is to reach more objective, justifiable decisions that would lead to statistically comparable conclusions if they were replicated by other people,” says Christophe Forêt.
FAIR, a standard created by a CISO in 2005
Factor Analysis of Information Risk (FAIR) is one such CRQ model. It is an Open Group standard devised in 2005 by Jack Jones, then CISO of the insurance company Nationwide. Its taxonomy describes a menu of components that contribute to the frequency of a loss event and to the extent of the financial losses that may be incurred should that event occur.
“The FAIR model uses estimated data ranges and matching levels of confidence to make use of uncertain information. It models the frequency of events, the controls in place and the scale – in both impact and financial terms – of losses,” says Christophe Forêt.
The model breaks the risk down into variables that are estimated not as single values but as ranges reflecting a “minimum”, a “most likely” and a “maximum” value. Using Monte Carlo simulations, the same formula is evaluated thousands of times with values drawn from those ranges. This generates a probability distribution of potential future losses.
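To make the mechanics concrete, the three-point ranges and the Monte Carlo step can be sketched in a few lines of Python. All figures below are illustrative assumptions for a hypothetical ransomware scenario, not values from the FAIR standard, and the triangular distribution is just one simple way to sample a minimum/most likely/maximum range:

```python
import random

# Hypothetical ransomware scenario – every figure is an illustrative
# estimate expressed as (minimum, most likely, maximum).
FREQ = (0.1, 0.3, 1.0)                        # loss events per year
MAGNITUDE = (500_000, 1_200_000, 4_000_000)   # euros lost per event

def simulate(n_trials: int = 10_000, seed: int = 42) -> list[float]:
    """Evaluate the same formula (frequency x magnitude) thousands of
    times, drawing each variable from its triangular distribution."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        freq = rng.triangular(FREQ[0], FREQ[2], FREQ[1])
        magnitude = rng.triangular(MAGNITUDE[0], MAGNITUDE[2], MAGNITUDE[1])
        losses.append(freq * magnitude)
    return sorted(losses)

losses = simulate()
# The sorted results form an empirical loss distribution; percentiles
# summarise it for decision-makers.
print(f"median annual exposure: €{losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:        €{losses[int(len(losses) * 0.95)]:,.0f}")
```

Reading percentiles off the simulated distribution is what turns “it’s a red risk” into a statement such as “between €500,000 and €4 million”.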
“Ranges have the advantage of saying, for example, that the risk of ransomware represents between €500,000 and €4 million for a given company. It’s much more specific than saying ‘it will cost you a lot’ or ‘it’s a red risk’. This helps us correctly define what a risk is: a financial loss from an event involving an asset,” says Christophe Forêt.
Many benefits for companies
One of the main benefits of the method is the ability to quantify a large number of scenarios. “When we look at a group of risks, we start by triaging them. Within a few hours, we can give an estimate before going into further detail depending on the use case. And once a case emerges, we try to determine whether there might be knock-on effects, with internal and external costs and time scales that do not always match those of solution providers,” says Christophe Forêt.
Quantification also supports an insurance director who wants to check whether the coverage they have taken out reflects their true exposure to cybersecurity risk. “Sometimes, companies say they have €5 million of coverage with a deductible of ‘only’ €500,000. But if we look closer, we see that the €5 million covers all claims over a fiscal year, whilst the deductible applies to each type of loss. With quantification, we may find that none of the risks, by loss category, ever reaches the deductible amount,” says Christophe Forêt.
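The coverage mismatch described above comes down to simple arithmetic, which a short sketch can make explicit. The loss categories and amounts below are entirely hypothetical placeholders (for example, 95th-percentile annual losses taken from a CRQ analysis); the point is only that when every per-category exposure sits below a per-loss-type deductible, the aggregate coverage is never triggered:

```python
# Illustrative figures only – hypothetical quantified exposures
# per loss category, in euros.
DEDUCTIBLE_PER_LOSS_TYPE = 500_000   # deductible applies per category
AGGREGATE_COVERAGE = 5_000_000       # cap across the whole fiscal year

exposures = {
    "business interruption": 320_000,
    "data restoration": 180_000,
    "legal and notification": 90_000,
}

# Categories whose exposure never reaches the deductible: the insurer
# pays nothing for these, whatever the aggregate coverage says.
below_deductible = {
    name: amount
    for name, amount in exposures.items()
    if amount < DEDUCTIBLE_PER_LOSS_TYPE
}

print(f"total annual exposure:        €{sum(exposures.values()):,}")
print(f"categories below deductible:  {len(below_deductible)} of {len(exposures)}")
```

In this made-up case the company carries a real total exposure yet would recover nothing from its €5 million policy, which is exactly the gap quantification makes visible.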
Another example: operational security teams working on certain protective solutions may struggle to find common ground. “Take encryption: these teams may not agree on which method to use. Should they encrypt the data, the database or the operating system? What are the related costs? By going further down the taxonomy, we can provide more comprehensive analyses to guide decision-makers. In some large companies, this can represent millions of euros of investment,” says Christophe Forêt.