Blogs

Ask an Actuary – David Wright Part 1: How a Catastrophe Model Works

By Agatha Caleo posted 01-03-2018 09:04


Hurricanes!  Cyber risk!  Hail storms!  Catastrophe modeling sounds dramatic and exciting; is it really that crazy?  I chatted with Beach Group’s Head of Analytics in North America and host of the Not Unreasonable podcast to find out.  David Wright, ACAS, CFA, breaks down how a cat model works and tells us what it’s like to work with them on a daily basis.  He was kind enough to offer me a significant amount of his time, so there’s a lot of information here, and we decided to break this up into two posts.  This is Part 1:  How a Catastrophe Model Works.  (Come back in a couple weeks for Part 2:  Being a Catastrophe Modeler.)


Future Fellows:  What type of inputs do you need for a catastrophe model and where do those inputs come from?

David Wright:  To run a cat model, you need location information for all the buildings that form part of an insurance portfolio and the associated insured values.  You need to know the occupancy and the construction types of the buildings.  Those are the main features.  You can add a lot of complexity to a cat model by requesting a lot more information.  The information all comes from the client, so they’ll have to produce the cat modeling input data, which isn’t typically used for any other process inside an insurance company – in a lot of cases not even for ratemaking, although that’s not always true.  This is a unique data generation process for an insurance company; they don’t ever do it for anything other than this, so the data will often not be as clean as it would be if it were looked over for other purposes.  So a lot of what we have to do is scrub the information, make sure it’s complete, make sure we have what we need to run the cat model, and oftentimes we have to make estimates for information that’s missing.
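To make that concrete, here is a minimal sketch (an editorial illustration in Python, not Beach Group’s actual tooling) of what a cat-model exposure record and a basic scrubbing pass might look like; every field name and default below is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExposureRecord:
    """One insured location -- the basic unit of cat-model input."""
    latitude: Optional[float]
    longitude: Optional[float]
    total_insured_value: Optional[float]  # building + contents + time element
    occupancy: Optional[str]              # e.g. "residential", "commercial"
    construction: Optional[str]           # e.g. "wood frame", "masonry"

def scrub(records, default_construction="unknown"):
    """Flag incomplete records and fill gaps with placeholder defaults,
    mirroring the 'make estimates for missing information' step."""
    clean, flagged = [], []
    for r in records:
        if r.latitude is None or r.longitude is None or r.total_insured_value is None:
            flagged.append(r)  # can't model without location and value
            continue
        if r.construction is None:
            r.construction = default_construction  # assumed placeholder
        clean.append(r)
    return clean, flagged
```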


FF:  What if an event has only occurred once before or never?  How do you determine distributions for those parameters?

DW:  You make them up, is the short answer.  You kind of make educated guesses; that’s the real answer.  You’ll have one example of a particular event, but in reality you know quite a lot about an event from one example, and you can make adjustments to certain parameters of those events.  For example, you can have a hurricane that hits Miami.  Say a hurricane like Andrew, which was a category 4 when it hit, and you can say, “Well, what if this storm was a category 5?”  Now you can make some assumptions about how that might change the impact of the hurricane, and now you have two points on your distribution, and on it goes. 

When the vendor modeling companies build what they call their storm catalogues or event catalogues, they’ll take actual historical events and adjust some of the characteristics of those events to make up new events – imaginary events.  For a hurricane, for example, that can mean varying where the storm will make landfall, what the angle of landfall will be, what the wind speed will be, how long the wind will persist, what the storm surge will be – these are all characteristics of a storm that you can adjust, and they have physical relationships to each other.  There are an infinite number of storms that you can imagine, and based on the measurements of the storms that you know, you can infer what the other storms would do to properties, to buildings, and so you can build a distribution of the component parts of the storms that you observe.
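As a toy illustration of that catalogue-building idea (not RMS’s or AIR’s actual method), the sketch below perturbs one historical-style storm’s parameters to generate imaginary events; all parameter ranges and the crude surge–wind coupling are invented.

```python
import random

# A historical event, parameterized the way DW describes: landfall point,
# landfall angle, wind speed, duration, surge. Values are illustrative only.
andrew_like = {
    "landfall_lon": -80.3, "landfall_angle_deg": 270.0,
    "wind_mph": 165.0, "duration_hr": 6.0, "surge_ft": 17.0,
}

def synthetic_catalogue(base_event, n_events=1000, seed=42):
    """Generate imaginary events by jittering a known event's parameters.
    Real models enforce physical relationships among parameters; this toy
    version only loosely couples surge to wind speed."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        wind = base_event["wind_mph"] * rng.uniform(0.6, 1.2)
        events.append({
            "landfall_lon": base_event["landfall_lon"] + rng.uniform(-2.0, 2.0),
            "landfall_angle_deg": (base_event["landfall_angle_deg"]
                                   + rng.uniform(-30.0, 30.0)) % 360,
            "wind_mph": wind,
            "duration_hr": base_event["duration_hr"] * rng.uniform(0.5, 1.5),
            # crude physical coupling: stronger wind, higher surge
            "surge_ft": base_event["surge_ft"] * (wind / base_event["wind_mph"]),
        })
    return events
```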


FF:  Once you have your inputs and you know some of the parameters that you’re dealing with, how does the model work?  Do you generate outcomes using Monte Carlo simulations or something more complex?

DW:  That depends on the model; there are lots of ways of approaching it.  When we build models in-house, as opposed to licensing vendor models, we have more information about how the model is run because we do it ourselves.  In those cases we’ll typically use a simulation model, where you’ll have a bunch of parameters and distributions, you’ll generate random numbers, and you’ll generate outcomes based on those distributions and the characteristics of the policies.
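A minimal sketch of that simulation loop, assuming a hypothetical Poisson frequency and lognormal severity (the parameters are invented, not calibrated to anything real):

```python
import numpy as np

def simulate_annual_losses(n_years=10_000, freq_mean=0.8,
                           sev_mu=15.0, sev_sigma=1.5, seed=1):
    """Monte Carlo recipe: draw an event count per year (Poisson), a loss
    per event (lognormal), and sum to annual losses."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq_mean, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                     for n in counts])
```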

The classic distinction in terms of output is usually made between an RMS model and the AIR model – two vendors.  The AIR model gives you a list of simulation iterations, usually 10,000 simulated years of events.  From those 10,000 years you can measure certain tail quantities: the 1-in-1,000-year event is the 10th largest year in that list of 10,000 years.  The RMS way of giving you output is to give you a unique listing of events, each with a probability of occurring – a frequency probability – along with parameters for a severity distribution.  With RMS it’s a Beta distribution (I think in all cases).  You can simulate events from the RMS event loss table, or you can use analytical techniques to find what the 1-in-1,000 would be based on all those frequency and severity distributions.  That’s more precise, so you tend to get better tail estimates for portfolios using that process, although it’s a lot more complicated and a lot less user-friendly.  AIR has the advantage of simplicity of communication, but it’s harder to get a precise estimate.
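The AIR-style arithmetic is easy to check in code: with 10,000 simulated years, the 1-in-1,000 loss is the 10th largest annual loss.  A small sketch, where year_losses is any hypothetical year loss table (for example, the output of the simulation sketch above):

```python
import numpy as np

def return_period_loss(year_losses, return_period):
    """Empirical 1-in-N loss from a year loss table: with Y simulated
    years, the 1-in-N loss is the (Y / N)-th largest annual loss."""
    years = len(year_losses)
    rank = max(1, int(years / return_period))  # 10,000 / 1,000 -> 10th largest
    return np.sort(year_losses)[-rank]

# e.g. return_period_loss(simulate_annual_losses(), 1000)
```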


FF:  How often do you use vendor models vs in-house models?

DW:  I would say it’s mostly going to be vendor models in certain lines of business.  It depends on what we’re talking about.  For hurricane risk it’s harder to come up with an estimate that you can demonstrate is superior to the vendor model, because there’s so little information, because they’ve put so much work into developing their event catalogues, and because they understand more – or the model “understands” more – about how the storms affect buildings than you can probably come up with yourself.  Think of it this way:  there’s almost enough information for the vendor model to produce a good estimate, but not enough information for everybody else to do it, too.  For earthquake in the United States, for example, there’s been one event of any consequence in the last 30 years – the Northridge earthquake.  You have other events that happen in other parts of the world, but you have a much thinner data set, and so for something like an earthquake model there’s the possibility of a lot more error in the vendor model, so maybe you’d be more likely to come up with your own model for the unobserved events. 

If you have no events whatsoever in your catalogue – think of a risk like cyber – you don’t have very much experience at all, so it’s really hard to build a model for that.  You don’t understand the complexities; you don’t understand how the risk is manifested in the insurance business.  You have much simpler models, and you have more diversity of models, and so you’re probably going to do something yourself.  Systemic risks in casualty books – there are models out there that you can use, but realistically it’s hard to demonstrate that those models are better than an ad hoc model that you could build yourself, so you might default to something you can understand better and that you don’t have to pay for – or at least don’t have to pay a license fee for; obviously you pay for the time spent building your own model.  The short answer is it really depends on the risk.


FF:  What are the outputs and what do you hope to get out of the model?

DW:  You hope to get two things out of the model.  One is an expected loss cost, which will go into determining premium adequacy for your book.  The second is an estimate of volatility that will help you manage your portfolio.  There’s a difference: you want to be price adequate – that’s obvious, in the sense that you want to make money over time – but you also want to make sure that the downside risk for the overall portfolio is something your organization – your capital levels – can sustain.  That is about measuring things like diversification within a portfolio amongst different risks as much as it is about measuring how volatile a single risk is.  So you have individual risk modeling and you have portfolio modeling.  …  What you get out of the model:  you get databases, and those databases contain results from the model in the form of these event loss tables – or “year loss tables,” which is what they call the output from AIR.  Those are listings of events that you can use to measure things like the expected loss and the volatility of the portfolio.
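As a final editorial sketch, both outputs he names – the expected loss cost and a volatility or tail measure – fall straight out of a year loss table; the TVaR tail measure below is one common choice, not necessarily what any particular shop uses, and the array comes from the hypothetical simulation above.

```python
import numpy as np

def portfolio_metrics(year_losses, return_period=250):
    """Expected loss (average annual loss), volatility, and a tail measure
    (TVaR: mean loss across the worst 1/return_period share of years)."""
    yl = np.sort(np.asarray(year_losses))
    aal = yl.mean()                      # feeds premium adequacy
    vol = yl.std()                       # portfolio volatility
    tail_count = max(1, len(yl) // return_period)
    tvar = yl[-tail_count:].mean()       # informs capital / downside risk
    return {"aal": aal, "volatility": vol, f"tvar_{return_period}": tvar}
```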



And check back in a couple of weeks for the exciting conclusion with Part 2:  Being a Catastrophe Modeler – to be continued!


Comments

01-11-2018 09:03

Good blog to pass along.

Thanks for this blog. I will pass it along to others in my company who hear a lot of the terms you used in here but might not fully understand what they mean. We'll look forward to the follow-up post, as well.