Benchmarking in Cyber Insurance: Evolving to a Continuous Approach
All organizations, no matter how prepared, face a certain level of cyber risk. Supplying an organization with cyber insurance means accepting a portion of this risk as determined by the potential loss and impact of scenarios and the likelihood of those scenarios occurring. While cyber insurance carriers may not be able to predict events and their impact with total accuracy, they often use risk modeling strategies to better identify, predict, and prepare for risk.
At Resilience, we maintain a collection of risk models powered by AI systems and paired with world-class underwriting teams to understand the likelihood and severity of business interruption from ransomware, data breaches, and countless other scenarios. This AI-human partnership allows us to build tools that not only better predict risk but deliver actionable outputs for non-technical underwriters and non-insurance security professionals. This means we can not only underwrite a risk more accurately but also work with the client to improve their cyber resilience.
Working with these models over the past three years has led to some higher-level insights we think are important to share with readers. Here’s what we’ve learned about modeling risk:
Prediction isn't about perfect data.
At Resilience, our models exist to inform our underwriters and are used as guidance, not as a framework for requirements. We utilize cyber risk modeling to measure the ROI of security protocols and to inform risk transfer, not determine it. This strategy is based on the philosophy Doug Hubbard and Richard Seiersen share in How to Measure Anything in Cybersecurity Risk. They believe, and our experts agree, that the most important time to model risk is when data is scarce and the threat landscape is unstable.
Resilience's models can capture hundreds of signals drawn from an array of sources, including automated scans, the underwriting process, and our threat team’s ongoing collaboration with our clients. Our experts understand that not every signal will be available for every client, and sometimes we will not have enough information for purely data-driven modeling. Rather than imposing punitive restrictions on clients in these cases, we use Bayesian reasoning to infer what the missing signals are likely to be, based on the signals we do know. This allows a more flexible approach to working with brokers and clients during the application process.
Bayesian analysis provides a rigorous framework for quantitatively capturing our security experts' views when inferring missing signals. The core idea of Bayesian reasoning is to start from “prior” knowledge of how likely certain events are and update those probabilities in light of the data we observe. When data is scarce, this update is minor, and the “credible region” (the Bayesian analog of a confidence interval) in which a parameter might lie can be very wide. Even wide-ranging quantitative estimates can serve as a guidepost to help us determine where we may need more data.
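As a rough illustration, here is a minimal sketch of this kind of Bayesian update for a single missing signal. The scenario (whether a client enforces MFA on remote access), the prior, and the observed counts are invented for illustration and do not reflect Resilience's actual models or data.

```python
# Minimal sketch: Bayesian update for one missing yes/no signal,
# e.g. "does this client enforce MFA on remote access?"
# All numbers below are illustrative assumptions.
from scipy.stats import beta

# Expert prior: roughly 70% of comparable firms enforce the control.
# Beta(7, 3) encodes that belief with modest confidence.
prior_a, prior_b = 7, 3

# Observed signals from similar clients: 12 with the control, 4 without.
observed_yes, observed_no = 12, 4

# Conjugate update: posterior is Beta(prior_a + yes, prior_b + no).
post = beta(prior_a + observed_yes, prior_b + observed_no)

lo, hi = post.interval(0.90)  # 90% credible region
print(f"Posterior mean: {post.mean():.2f}")
print(f"90% credible region: [{lo:.2f}, {hi:.2f}]")

# With scarce data the credible region stays wide, flagging where we
# need more information rather than forcing a hard yes/no answer.
```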
Understanding this data helps clients determine which security investments have the most financial impact, which in turn empowers Resilience’s underwriting team to offer more favorable rates and improve our models.
Data should support (not dictate) decision-making.
A scan shows a vulnerability in an asset belonging to a potential insurance client. Is the threat real, or an artifact of the scan? If the threat is real, is it something the client should fix, perhaps after a conversation with a broker?
The answers to these questions depend on many factors. An organization's industry, size, revenue, or public notoriety can all affect the risk of a breach.
At the same time, there are dozens of competing frameworks for assessing cyber risk. The risk modeler or underwriter must learn to draw on the wisdom of all of these frameworks without becoming paralyzed by the choice or leaning on benchmarks that may not be universally applicable. Meanwhile, security experts have to deliver individualized assessments and notifications to clients while deciding which alerts are actually relevant to each organization and its unique risks.
Rather than deciding outcomes, risk modeling should empower the security, risk, and finance functions to achieve their goals, so clients can manage cyber risk holistically as a team. By seeing estimated probabilities of adverse events, we can help organizations forecast losses over a given period of time. This provides data for concrete discussions with leadership around risk appetite and risk transfer through insurance. We can also identify which vulnerabilities and security controls should take precedence when assigning a limited budget. Ideally, these tools can help our clients triage the most important risks, spending their time (and money) on what matters.
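To make the loss-forecasting idea concrete, here is a minimal Monte Carlo sketch of an annual loss distribution. The incident frequency and severity parameters are illustrative assumptions, not figures from Resilience's models.

```python
# Minimal sketch: simulate annual cyber losses for a hypothetical client.
# Frequency (Poisson) and severity (lognormal) parameters are made up.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 50_000

# Incidents per simulated year.
incidents = rng.poisson(lam=0.3, size=n_sims)

# Sum a lognormal loss for each incident in each simulated year.
annual_loss = np.array([
    rng.lognormal(mean=13.0, sigma=1.2, size=k).sum() for k in incidents
])

print(f"Expected annual loss:  ${annual_loss.mean():,.0f}")
print(f"95th percentile loss:  ${np.percentile(annual_loss, 95):,.0f}")

# Figures like these give leadership concrete anchors for discussing
# risk appetite, insurance limits, and where to spend a security budget.
```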
Don’t forget a human touch.
It’s happened before: an underwriter sends a note to our data science team. A potential customer in the financial sector rates in the fifth percentile among its peers. The underwriter has gone through the individual signals with a security expert and doesn't see any clear evidence of excess risk. What's going on?
In cases like this, context is critical. Financial services is one of the most heavily regulated industries when it comes to cybersecurity. These regulations can require significant investments in cybersecurity measures, so this institution has been compared against a small set of other financial institutions that also have excellent controls. Its low percentile is an artifact of being benchmarked against a narrow, well-defended peer segment, not of any security flaw.
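A toy example shows how this artifact arises. The scores and peer groups below are made up; the point is only that the same client score can rank near the bottom of a small, well-controlled peer set yet well above the median of a broader book.

```python
# Minimal sketch: the same score, two very different percentiles.
# All scores and peer groups are hypothetical.
from scipy.stats import percentileofscore

client_score = 82  # hypothetical composite security score (higher is better)

financial_peers = [85, 88, 90, 91, 93, 94, 95]                    # heavily regulated, well-controlled
full_book       = [40, 48, 55, 60, 65, 70, 72, 75, 82, 88, 93]    # broader portfolio

print(percentileofscore(financial_peers, client_score))  # near the bottom of the peer set
print(percentileofscore(full_book, client_score))        # well above the median of the full book
```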
This is why human underwriting experience is critical. At no point should customers be punished by an automated process. And as we start to adopt new AI technologies into products, we must remember this lesson on context. Our underwriters are empowered, not controlled, by our models.
While no tool can replace experienced judgment, a thorough risk model is an invaluable asset for any organization over the years. This foresight allows clients to prepare for potential incidents, better inform stakeholders of their risk, and avoid damaging financial impact.
Thank you for reading. Please share if you found this helpful, and leave us your comments below. We'd love to hear from you!
About the Author
E. Hunter Brooks works at Resilience as Director of Data Science. Hunter holds a BA in Math and Linguistics from Dartmouth, an MA in Math from the University of Maryland, and a PhD in Math from the University of Michigan. Prior to arriving at Resilience in 2021, he worked for Oracle, analyzing unusual traffic patterns to detect malicious botnets. He has consulted with start-ups on machine-learning algorithms and published research on algebraic geometry and its applications to cryptography. In his spare time, he likes to make pottery and play bridge.