Reserve Musings

Some further musings about loss reserving.

1. Why do we develop reported losses? By definition, they're correlated to paid (Reported = Paid + Case). Does a projection of reported losses convey anything new and meaningful to us? Here's a simple experiment: project reported and paid using whatever means you think are appropriate. Take the projected difference between the two and tell me how often you get case reserves that are negative or that make little sense whatsoever. There are models (Munich Chain Ladder, Halliwell's seemingly unrelated regression equations) which attempt to resolve this, but they're not often used in practice. Moreover, even when they are used, they're merely one of a set of estimates which, individually, probably rest on conflicting assumptions.
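To make that experiment concrete, here's a minimal sketch in R. The triangles and the choice of a volume-weighted chain ladder are invented purely for illustration; any projection method you prefer would do.

```r
# Hypothetical cumulative triangles (rows = accident years, columns = lags).
# The numbers are made up for illustration only.
paid <- matrix(c(100, 180, 220, 240,
                 110, 200, 245,  NA,
                 120, 215,  NA,  NA,
                 130,  NA,  NA,  NA),
               nrow = 4, byrow = TRUE)

reported <- matrix(c(150, 210, 235, 245,
                     160, 230, 255,  NA,
                     175, 250,  NA,  NA,
                     190,  NA,  NA,  NA),
                   nrow = 4, byrow = TRUE)

# Volume-weighted development factors for a single triangle
devFactors <- function(tri) {
  sapply(seq_len(ncol(tri) - 1), function(j) {
    ok <- !is.na(tri[, j]) & !is.na(tri[, j + 1])
    sum(tri[ok, j + 1]) / sum(tri[ok, j])
  })
}

# Project each accident year to ultimate using the remaining factors
projectUltimate <- function(tri) {
  f <- devFactors(tri)
  sapply(seq_len(nrow(tri)), function(i) {
    lastObs <- max(which(!is.na(tri[i, ])))
    tri[i, lastObs] * prod(f[seq_along(f) >= lastObs])
  })
}

# The experiment: project each triangle on its own, then difference them
impliedCase <- projectUltimate(reported) - projectUltimate(paid)
impliedCase
```

With these made-up numbers, the projected ultimate reported for the second accident year comes out below the projected ultimate paid, so the implied case reserve is negative, even though each projection looks perfectly reasonable on its own.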

I suppose what I'm really trying to say is this: we should only model atomic variables. It's appealing to model reported losses, but we should ensure that the individual components have been handled properly. If they haven't, we should question what additional value, if any, is gained by modeling a composite variable like reported losses.

2. Following on from that thought and as part of my sinking further into MRMR, I’m prepared to divide the world of reserving models into three categories:

1. Models which use static predictors. This is the additive method. Here the predictor is typically something like on-level earned premium or exposure. The predictors are (within the context of a loss reserve model) non-stochastic, so the usual least-squares parameter estimates are BLUE and have a number of other convenient properties.

2. Models with an autoregressive stochastic predictor. This is the multiplicative chain ladder. Here, the response is used to generate the next predictor variable. Because the predictor variables are themselves stochastic, we have to treat the variability with a bit more caution.

3. Models with dependent stochastic predictors. This is analogous to frequency/severity methods. Here, the response from one variable is used as the predictor for another. There is an order of operations which enables the fit and projection, but it's one which has some appealing intuition. So, earlier when I posed the question of why we model reported losses at all, what I would propose is the following: model the case reserves separately and add them to the modeled paid losses. To incorporate a relationship between the two, regress incremental paid against prior outstanding case. That ought to make a fair bit of sense. Depending on how mature the losses are, paid losses ought to bear a strong relationship to the outstanding reserves. In turn, the case reserves may be modeled using a static predictor (category 1) such as earned premium. Or they may be modeled dependent on another stochastic predictor such as open claims. Open claims may either use static predictors or in turn depend on a function of reported and closed claims.
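Here's a rough sketch of that cascade in R. The column names and numbers are mine, invented only for illustration: case outstanding gets a static predictor (earned premium) by development lag, and incremental paid is regressed on the case outstanding at the start of each period.

```r
# Hypothetical long-format reserving data. prior_case is the case outstanding
# at the start of each development period; all values are invented.
dat <- data.frame(
  acc_year   = rep(2010:2012, each = 3),
  dev_lag    = rep(1:3, times = 3),
  ep         = rep(c(500, 520, 550), each = 3),   # on-level earned premium
  prior_case = c(250, 150, 60, 270, 160, 65, 290, 170, 70),
  inc_paid   = c(100,  80, 40, 110,  85, 45, 120,  90, 50)
)

# Step 1: case outstanding driven by a static predictor (earned premium),
# with a separate slope for each development lag
fitCase <- lm(prior_case ~ 0 + ep:factor(dev_lag), data = dat)

# Step 2: incremental paid regressed on the prior case outstanding, so
# payments emerge as a fraction of what was sitting in reserve
fitPaid <- lm(inc_paid ~ 0 + prior_case:factor(dev_lag), data = dat)

coef(fitCase)
coef(fitPaid)
```

To project, you'd fill in the future case outstanding from the premium relationship first, then feed those fitted values into the paid model period by period. That's the order of operations I mentioned above.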

I'll admit that this all starts to get a bit crazy, but I also feel that it gets a bit closer to reality. Decomposition of aggregate losses into more manageable components allows us to focus on elements that are easier to think about and explain. I love the additive model, but recognize that there are inherent limitations. It doesn't speak to the issue of rising frequency or changes in severity of claims. In that way, it's about as dumb as the multiplicative chain ladder, with its slow, inexorable march toward ultimate.
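For contrast, the additive model from the first category is about as simple as a reserving model gets: expected incremental losses at each development lag are a fixed ratio to a static exposure measure. A minimal sketch, again on invented numbers:

```r
# Hypothetical incremental paid triangle and on-level earned premium.
ep <- c(500, 520, 550, 580)
incPaid <- matrix(c(100, 80, 40, 20,
                    110, 85, 45, NA,
                    120, 90, NA, NA,
                    130, NA, NA, NA),
                  nrow = 4, byrow = TRUE)

# Additive factors: average incremental loss ratio at each development lag
addFactors <- sapply(seq_len(ncol(incPaid)), function(j) {
  ok <- !is.na(incPaid[, j])
  sum(incPaid[ok, j]) / sum(ep[ok])
})

# Unpaid estimate: premium times the sum of factors for unobserved lags
unpaid <- sapply(seq_len(nrow(incPaid)), function(i) {
  ep[i] * sum(addFactors[is.na(incPaid[i, ])])
})
unpaid
```

Nothing in those factors responds to a change in claim frequency or severity, which is the limitation I'm getting at.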

That third class of model is something which will soon find its way into MRMR. (And by “soon”, I mean in about a year or so. Finding time to work on this stuff is a real struggle.)

As always, thoughts are welcome.

5 thoughts on “Reserve Musings”

  1. I guess I see two parts to this blog post: (1) what data one should use and (2) classes of models.

    On (1):
    I have always been in the camp (a camp of 1?) that sees no need for consistency between modeling of paid losses and reported losses. I just view them as two separate metrics and am comfortable that the modeling of those metrics may produce inconsistent results. I have used Munich chain ladder, but I don't see the absolute need for consistency.

    That being said, I fully agree with you that we *should be* modeling atomic vectors. However, as is the case with much of modern actuarial work, convenience and familiarity trump theory (and innovation).

    1. Thanks for taking the time to comment. I'll say that I might not yet be ready to join your camp of one. Although I understand, and to a large degree sympathize with, that view, inconsistency bothers me. It bothers me because there is only one stochastic process at work. That process is complex, dynamic and very possibly unknowable, but I think there's just one. So depending on what you mean by consistency of modeling, I might not agree.

      What I will say (and this might be what you meant) is that just because paid losses fit well with a particular model, we should have no expectation that reported will fit just as well with that same model.

      Convenience and familiarity aren’t the only things that trump theory and innovation. I have a job and two children and I’m settling into my 40’s. I get tired much quicker than I used to.

      1. To be clear – my criticism of lack of innovation in actuarial practice was not directed at the blog author. Quite the contrary, as I believe just publishing this blog (along with other research and papers written by the blog author) will/do help spur innovation.

        My comment was on the state of actuarial practice more generally. I have been critical of this for years as common practice (not anyone specifically) seems not to have moved beyond approaches developed in the 1960s and prior. (A topic for another time and another day.)

      2. Sorry, that last comment was meant to be tongue in cheek. I’ve often said that with 30 hours in a day and plenty of strong coffee, I’d have a robust, detailed reserving model inside of a week. I assume other folks struggle to find time for research. Moreover, as much as I love loss reserving, I’d probably rather watch cartoons with my kids.

        That understood, I share your feeling that the practice as a whole is moving too slowly. Zehnwirth published his research back in the ’80s, but I still don’t think folks have taken the time to consider it properly. (To be clear, I’m not suggesting that his approach is the end of the story, merely that we’ve had 3 decades to learn from it.) The society leadership seems keen to embrace leading edge techniques and there are a number of practitioners moving forward. There is, however, a great challenge with regard to an industry which wants to make use of predictive modelling, but still insists that its actuaries produce a Bornhuetter-Ferguson loss reserve estimate.

  2. Naïvely, both paid and incurred losses are predictors of the same random variable, so by developing reported loss, we create another estimate of the desired value. How different the two estimates are helps give us a measure of volatility or uncertainty. Also, in my limited experience, outstanding losses often have predictive power about the paid losses, in that larger outstandings can indicate larger future paid losses better than the paid losses alone would. Of course, the desideratum is a proper model of paid loss, but reported loss, despite, or perhaps because of, being correlated with paid losses, can help provide a better model for predicting said loss.
