Confit de Stanard, Part 1 – The Rant

I’m breaking this up into two parts to isolate the political elements from the purely objective presentation. I’ll start off with some very biased comments about the legacy of Stanard’s Monte Carlo reserving example. The next post will present that model in R.

It’s now been over 30 years since Stanard published his famous (to US P&C actuaries, anyway) paper. Halliwell recently expressed some disappointment about its disappearance from the syllabus, a sentiment that I share. Or, to put it differently, I’m sorry that nothing comparable seems to have replaced it. Moreover, I’m struck that the most important conclusion of his paper isn’t really stated in the paper at all and is only touched on tangentially in a paper by Venter (which, to be fair, is on the syllabus).

There are two vital statements which every reserving actuary ought to get tattooed somewhere on their bodies. The first comes from Dave Clark, who had the decency to signal the importance of what he was saying with an exclamation point. It was “Abandon your triangles!”, a notion as succinct and practicable as it is correct and useful. The other statement comes from Venter and is far more subtle. It reads as follows:

The fact that Stanard used the simulation method consistent with the BF emergence pattern, and this was not challenged by the reviewer, John Robertson, suggests that actuaries may be more comfortable with the BF emergence assumptions than with those of the chain ladder. Or perhaps it just means that no one would be likely to think of simulating losses by the chain ladder method.

(OK, it’s actually two statements.)

No one would be likely to think of simulating losses by the chain ladder method. And yet we’re happy to presume exactly that when we predict how losses are reported and settled. Is everyone comfortable with the notion of assuming two different models, one for how losses are generated and one for how we predict the way that losses are generated? Is this really what we assume?

There is at least one instance when I would simulate losses by the chain ladder method: low frequency, high severity property events (which themselves produce quite a few high frequency, low severity losses). I think that multiplicative chain ladder probably works fairly well for nat cat events, once an initial set of losses has emerged. Exposure, be it insured value or earned premium, is a pretty lousy data item to use for prediction of those sorts of losses. There may also be some merit in taking this approach when looking at a new class of mass tort exposure. However, I think that in this instance, a structured approach which takes specific exposure information may be more worthwhile (if a bit more work). This is largely a function of the length of the settlement period.

So, I’m ready to admit that multiplicative chain ladder may work well in a fairly narrow set of circumstances. But I have to posit a loss generation process that mirrors the way that the chain ladder method works. To wit: I examine the pool of losses at a relatively early stage of the development process and presume that they will settle at a rate similar to the one at which losses have settled historically.
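To make that assumption concrete, here is a minimal sketch of the multiplicative chain ladder (in Python for now; the R version comes later). The triangle is purely illustrative, not from Stanard’s paper: each row is an accident year of cumulative losses, and the open years are projected to ultimate using volume-weighted age-to-age factors from the observed history.

```python
# Illustrative cumulative loss triangle: rows are accident years,
# columns are development ages. Figures are made up for the sketch.
triangle = [
    [100, 150, 175, 180],
    [110, 168, 192],
    [120, 185],
    [130],
]

n = len(triangle)

# Volume-weighted age-to-age factors: total losses at age j+1 divided
# by total losses at age j, over the years observed at both ages.
factors = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
    den = sum(row[j] for row in triangle if len(row) > j + 1)
    factors.append(num / den)

# Project each accident year to ultimate by applying the remaining factors.
ultimates = []
for row in triangle:
    est = row[-1]
    for f in factors[len(row) - 1:]:
        est *= f
    ultimates.append(est)

print([round(u, 1) for u in ultimates])
```

Note what the projection step encodes: the latest diagonal is all the method sees, and every open year is assumed to develop at the pooled historical rate. That is exactly the loss generation process I have to posit for the method to make sense.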

As I type this, I’m coming up with counterexamples: demand surge, infrastructure differences driven by different areas of the country, etc.

For the record, I’m no fan of Bornhuetter-Ferguson. I think the idea is novel, but often rubbish in practice. The a priori loss ratios used in most applications that I’ve seen (or participated in, let’s be honest) are wishful thinking at best.
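A quick sketch shows why that matters. The BF ultimate is reported losses plus the a priori expectation applied to the unreported portion, so the a priori loss ratio flows straight through to the answer. All figures below are illustrative assumptions, not data from any paper.

```python
# Bornhuetter-Ferguson: actual reported losses plus the a priori
# expected losses applied to the assumed unreported portion.
# Premium, reported losses, and reporting pattern are all made up.
premium = 1000.0
reported = 150.0
pct_reported = 0.25  # share of ultimate assumed reported to date

def bf_ultimate(apriori_lr):
    expected = premium * apriori_lr
    return reported + expected * (1.0 - pct_reported)

# 75% of the estimate is pure a priori opinion at this maturity:
# a wishful 60% loss ratio versus a sober 80% moves the answer a lot.
print(bf_ultimate(0.60))  # 150 + 600 * 0.75 = 600.0
print(bf_ultimate(0.80))  # 150 + 800 * 0.75 = 750.0
```

Early in the development period, when BF is most often used, almost the entire estimate rides on that a priori pick.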

What Stanard demonstrated was a process of individual claim generation and settlement which seemed intuitive. However, no one posits a method to use available data to fit that process. Stanard generates individual claims. Can we not use that data to draw meaningful conclusions?

The next post will render Stanard’s model in R. The third post will calibrate a model which uses individual loss characteristics to forecast reserves.

At some point in the future, I’d like to cobble together enough time to have a look at the CAS’ loss simulation model. Comments on how that works are most welcome.
