
Monte Carlo Simulation & its Applications

The Monte Carlo simulation approach takes its name from the famous casino in Monaco, a fitting choice, since casino games are inherently exercises in probability.


The technique dates back to World War II, when it was developed by mathematicians working on the Manhattan Project. It is a very robust approach that can be applied to a wide variety of domains, and it helps us understand and quantify the distribution of risks and losses.

For actuaries looking to model tail risks for new product designs, reinsurance strategies, stop-loss programs, Insurtech designs, and non-traditional applications, the Monte Carlo simulation approach is worth serious consideration.


How does it work? Put simply, we use a computer program to simulate outcomes for the risk we are studying.

To do this, first we need to study the data underlying the risk. Based on the data, we identify a suitable probability distribution, or a set of distributions to be used.


Typically in the actuarial approach, we split the risk by frequency and severity. Frequency means the number of claims that occur in a period of time, and severity means the amount of each claim.


Frequency typically follows a Bernoulli distribution (eg dead/alive), a Poisson distribution (rate of occurrence where the variance equals the mean), a Negative Binomial distribution (rate of occurrence where the variance exceeds the mean), or another counting distribution.


The severity of the claim may follow a normal, lognormal, gamma or other distribution.
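As an illustration, both the frequency and severity distributions above can be sampled directly in Python with NumPy. The parameters below (Poisson rate, negative binomial shape, lognormal and gamma parameters) are purely illustrative and not calibrated to any real portfolio:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility
n_sims = 10_000

# Frequency: number of claims per exposure period (illustrative parameters)
poisson_counts = rng.poisson(lam=0.3, size=n_sims)              # variance == mean
negbin_counts = rng.negative_binomial(n=2, p=0.8, size=n_sims)  # variance > mean

# Severity: size of each claim (illustrative parameters)
lognormal_sev = rng.lognormal(mean=8.0, sigma=1.2, size=n_sims)
gamma_sev = rng.gamma(shape=2.0, scale=5_000.0, size=n_sims)

# The negative binomial sample shows the over-dispersion described above
print(negbin_counts.mean(), negbin_counts.var())
```

With these parameters the negative binomial has mean 0.5 and variance 0.625, so the sample variance comes out above the sample mean, which is exactly the over-dispersion property that distinguishes it from the Poisson.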


We will need to fit suitable distributions using visual techniques (eg histograms, Q-Q plots) and statistical measures (the Kolmogorov-Smirnov test, the Anderson-Darling test, etc). There are programs that can do this very conveniently for us, but we still need to exercise judgment in selecting the right distribution.
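A minimal sketch of this fitting step in Python, using SciPy's maximum-likelihood fitting and the Kolmogorov-Smirnov statistic to compare candidates. The "observed" claims here are simulated stand-in data, so the parameters are illustrative only; in practice the statistics guide, but do not replace, the judgment mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
claims = rng.lognormal(mean=8.0, sigma=1.0, size=500)  # stand-in for observed claim data

# Fit candidate severity distributions by maximum likelihood (location fixed at 0)
lognorm_params = stats.lognorm.fit(claims, floc=0)
gamma_params = stats.gamma.fit(claims, floc=0)

# Kolmogorov-Smirnov statistic: smaller means the fitted CDF tracks the data more closely
ks_lognorm = stats.kstest(claims, "lognorm", args=lognorm_params)
ks_gamma = stats.kstest(claims, "gamma", args=gamma_params)

print(f"lognormal KS statistic: {ks_lognorm.statistic:.4f}")
print(f"gamma     KS statistic: {ks_gamma.statistic:.4f}")
```

Note that because the parameters were estimated from the same data, the K-S p-values are biased; comparing the statistics across candidates, alongside Q-Q plots of the tail, is the more defensible use.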

Typically a good fit at the tail of the distribution is more important in this application.


Once we have fitted the distributions and tested them adequately, we will then need to calibrate the mathematical relationships that describe the links in the system we are modelling.


Next, we use a programming language, typically R or Python (or any other; Excel is rather inefficient at this), and simulate a large number of outcomes (typically 1,000 or more) for each of the different "model points" in our portfolio. A model point is one of many personas, profiles or starting points in the group we are studying.
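The simulation step might look like the following sketch in Python, assuming a simple Poisson-frequency / lognormal-severity model; the model points and their parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_sims = 10_000

# Hypothetical model points: lives covered, Poisson claim rate per life,
# and lognormal severity parameters (all illustrative)
model_points = [
    {"lives": 500, "lam": 0.05, "mu": 8.0, "sigma": 1.0},
    {"lives": 200, "lam": 0.10, "mu": 8.5, "sigma": 1.2},
]

total_losses = np.zeros(n_sims)
for mp in model_points:
    # Frequency: total claim count across all lives in this model point, per simulation
    counts = rng.poisson(lam=mp["lives"] * mp["lam"], size=n_sims)
    # Severity: draw and sum the individual claim amounts for each simulation
    for i, n_claims in enumerate(counts):
        if n_claims > 0:
            total_losses[i] += rng.lognormal(mp["mu"], mp["sigma"], size=n_claims).sum()

print(f"mean aggregate loss: {total_losses.mean():,.0f}")
```

Each entry of `total_losses` is one simulated year for the whole portfolio; the distribution of these values is what the later analysis works with.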


R, Python and other languages have purpose-built libraries that make it easy to generate random draws from a large variety of distributions.


Once we have run the simulations, we can use the results to study the distribution of risk, quantify worst-case outcomes, and support decision making.
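For example, two common tail metrics, Value-at-Risk (a high quantile of the loss distribution) and Tail VaR (the average loss beyond that quantile), can be read straight off the simulated losses. The losses below are a simulated stand-in for the output of the simulation step:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
# Stand-in for simulated aggregate losses (illustrative parameters)
losses = rng.lognormal(mean=12.0, sigma=0.5, size=10_000)

var_99 = np.quantile(losses, 0.99)         # 99% Value-at-Risk
tvar_99 = losses[losses >= var_99].mean()  # Tail VaR: mean loss beyond the 99% VaR

print(f"99% VaR : {var_99:,.0f}")
print(f"99% TVaR: {tvar_99:,.0f}")
```

Because TVaR averages only the outcomes beyond the VaR threshold, it is always at least as large as the VaR, and it is often the more informative number when pricing reinsurance or stop-loss protection.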


Due to its simplicity and adaptability, this approach can be applied to non-traditional applications, climate risk modelling, and many other areas.


Let's also remember - all models are wrong, but some are useful.

Real life can never be fully captured by mathematical models, and we must be keenly aware of the assumptions made and limitations of the approaches that we take.
