At the Insurance Data Science conference, both Eric Novik and Paul-Christian Bürkner emphasised in their talks the value of thinking about the data generating process when building Bayesian statistical models. It is also a key step in Michael Betancourt’s Principled Bayesian Workflow.
In this post, I will discuss in more detail how to set priors, and review not only the prior and posterior parameter distributions, but also the prior predictive distribution, using brms (Bürkner (2017)).

Ahead of the Stan Workshop on Tuesday, here is another example of using brms (Bürkner (2017)) for claims reserving. This time I will use a model inspired by the 2012 paper A Bayesian Nonlinear Model for Forecasting Insurance Loss Payments (Zhang, Dukic, and Guszcza (2012)), which can be seen as a follow-up to Jim Guszcza’s Hierarchical Growth Curve Model (Guszcza (2008)).
I discussed Jim’s model in an earlier post using Stan.

This is a follow-up post on hierarchical compartmental reserving models using PK/PD models. It will show how differential equations can be used with Stan/brms, and how correlations between group-level terms can be modelled.
PK/PD is usually short for pharmacokinetic/pharmacodynamic models, but as Eric Novik of Generable pointed out to me, it could also be short for Payment Kinetics/Payment Dynamics models in the insurance context.

Today, I will sketch out ideas from the Hierarchical Compartmental Models for Loss Reserving paper by Jake Morris, which was published in the summer of 2016 (Morris (2016)). Jake’s model is inspired by PK/PD models (pharmacokinetic/pharmacodynamic models) used in the pharmaceutical industry to describe the time course of effect intensity in response to administration of a drug dose.
The hierarchical compartmental model fits outstanding and paid claims simultaneously, combining ideas of Clark (2003), Quarg and Mack (2004), Miranda, Nielsen, and Verrall (2012), Guszcza (2008) and Zhang, Dukic, and Guszcza (2012).
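The compartmental structure can be sketched as a small ODE system (this is my paraphrase of the model in Morris (2016), with parameter names as I recall them, so treat it as a sketch rather than the exact specification):

```latex
\frac{dEX}{dt} = -k_{er}\,EX, \qquad
\frac{dOS}{dt} = k_{er}\,RLR\,EX - k_{p}\,OS, \qquad
\frac{dPD}{dt} = k_{p}\,RRF\,OS
```

Here $EX$, $OS$ and $PD$ denote the exposure, outstanding and paid claims compartments, $k_{er}$ and $k_{p}$ are rate parameters governing claims reporting and payment, $RLR$ is the reported loss ratio and $RRF$ the reserve robustness factor. Fitting $OS$ and $PD$ jointly is what lets the model use both outstanding and paid claims data simultaneously.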

Last week I wrote about Glenn Meyers’ correlated log-normal chain-ladder model (CCL), which he presented at the 10th Bayesian Mixer Meetup. Today, I will continue with a variant Glenn also discussed: the changing settlement rate log-normal chain-ladder model (CSR).
Glenn used the correlated log-normal chain-ladder model on reported incurred claims data to predict future developments.
However, when looking at paid claims data, Glenn suggested changing the model slightly. Instead of allowing for correlation across accident years, he allows for a gradual shift in the payout pattern to account for a change in the claim settlement rate across accident years.
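The payout-pattern shift can be sketched roughly as follows (my recollection of the parameterisation in Meyers’ monograph; symbols follow his notation, but check the original for the exact specification):

```latex
\log(C_{w,d}) \sim \mathcal{N}\!\left(\mu_{w,d},\, \sigma_d^2\right), \qquad
\mu_{w,d} = \alpha_w + \beta_d \,(1+\gamma)^{w-1}
```

where $C_{w,d}$ are cumulative paid claims for accident year $w$ at development lag $d$, $\alpha_w$ and $\beta_d$ are accident-year and development-lag effects, and $\gamma$ governs the speed-up: $\gamma > 0$ implies that more recent accident years settle faster.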

On 23 November Glenn Meyers gave a fascinating talk about The Bayesian Revolution in Stochastic Loss Reserving at the 10th Bayesian Mixer Meetup in London. Glenn worked for many years as a research actuary at Verisk/ISO, where he helped to set up the CAS Loss Reserve Database and published a monograph on Stochastic Loss Reserving Using Bayesian MCMC Models.
In this blog post I will go through the Correlated Log-normal Chain-Ladder Model from his presentation.

I continue with the growth curve model for loss reserving from last week's post. Today, following the ideas of James Guszcza [2], I will add a hierarchical component to the model by treating the ultimate loss cost of an accident year as a random effect. Initially, I will use the nlme R package, just as James did in his paper, and then move on to Stan/RStan [6], which will allow me to estimate the full distribution of future claims payments.
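The random-effect idea can be sketched with `nlme` as follows. This is a minimal illustration, not Guszcza's actual code: the data are simulated (column names `origin`, `dev`, `cum` and all numbers are made up), and I use a Weibull growth curve, one of the forms discussed in his paper.

```r
library(nlme)

set.seed(1)
# Simulated long-format claims data standing in for a real triangle:
# 'origin' = accident year, 'dev' = development year, 'cum' = cumulative paid.
dat <- expand.grid(origin = factor(1991:1997), dev = 1:7)
ult  <- rep(exp(rnorm(7, log(5000), 0.1)), times = 7)  # true ultimates by origin
dat$cum <- ult * (1 - exp(-(dat$dev / 2)^1.5)) * exp(rnorm(nrow(dat), 0, 0.02))

# Weibull growth curve: cum = Ult * (1 - exp(-(dev/theta)^omega)),
# with the ultimate loss Ult treated as a random effect by origin year.
fit <- nlme(cum ~ Ult * (1 - exp(-(dev / theta)^omega)),
            data   = dat,
            fixed  = Ult + theta + omega ~ 1,
            random = Ult ~ 1 | origin,
            start  = c(Ult = 5000, theta = 2, omega = 1.5))
fixef(fit)
```

The random effect lets each accident year borrow strength from the others: young years with little data get ultimates shrunk towards the portfolio-level estimate.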

Last week I posted a biological example of fitting a non-linear growth curve with Stan/RStan. Today, I want to apply a similar approach to insurance data, using ideas by David Clark [1] and James Guszcza [2]. Instead of predicting the growth of dugongs (sea cows), I would like to predict the growth of cumulative insurance loss payments over time, originating from different origin years. Loss payments of younger accident years are just like a new generation of dugongs: small in size initially, growing as they get older, until the losses are fully settled.

We released version 0.2.2 of ChainLadder a few weeks ago. This version adds back the functionality to estimate the index parameter for the compound Poisson model in glmReserve using the cplm package by Wayne Zhang. OK, what does this all mean? I will run through a couple of examples and look behind the scenes of glmReserve. However, the clue is in the title: glmReserve is a function that uses a generalised linear model to estimate future claims, assuming claims follow a Tweedie distribution.
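As a minimal sketch of the workflow (using the `GenIns` example triangle that ships with ChainLadder; `var.power = NULL` is the setting that hands estimation of the Tweedie index parameter over to cplm, if my reading of the documentation is right):

```r
library(ChainLadder)

# Fit a Tweedie GLM reserving model to the example triangle.
# var.power = NULL asks glmReserve to estimate the Tweedie index
# parameter p via the cplm package instead of fixing it in advance.
fit <- glmReserve(GenIns, var.power = NULL)
summary(fit)
```

Fixing `var.power = 1` instead would give an (over-dispersed) Poisson model, which reproduces the classical chain-ladder point estimates.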

This is the third post about Christofides’ paper on Regression models based on log-incremental payments [1]. The first post covered the fundamentals of Christofides’ reserving model in sections A - F, the second focused on a more realistic example and model reduction in sections G - K. Today's post will wrap up the paper with sections L - M and discuss data normalisation and claims inflation. I will use the same triangle of incremental claims data as introduced in my previous post.

Following on from last week's post, I will continue to go through the paper Regression models based on log-incremental payments by Stavros Christofides [1]. In the previous post I introduced the model from the first 15 pages, up to section F. Today I will progress with sections G to K, which illustrate the model with a more realistic incremental claims payments triangle from a UK Motor Non-Comprehensive account:

# Page D5.17
tri <- t(matrix(c(3511, 3215, 2266, 1712, 1059,  587,  340,
                  4001, 3702, 2278, 1180,  956,  629,   NA,
                  4355, 3932, 1946, 1522, 1238,   NA,   NA,
                  4295, 3455, 2023, 1320,   NA,   NA,   NA,
                  4150, 3747, 2320,   NA,   NA,   NA,   NA,
                  5102, 4548,   NA,   NA,   NA,   NA,   NA,
                  6283,   NA,   NA,   NA,   NA,   NA,   NA), nc = 7))

The rows show origin period data.

A recent post on the PirateGrunt blog on claims reserving inspired me to look into the paper Regression models based on log-incremental payments by Stavros Christofides [1], published as part of the Claims Reserving Manual (Version 2) of the Institute of Actuaries. The paper is available together with a spreadsheet model, illustrating the calculations. It is very much based on ideas by Barnett and Zehnwirth, see [2] for a reference. However, doing statistical analysis in a spreadsheet programme is often cumbersome.

© Markus Gesmann CC BY-NC-SA 3.0 · Powered by the Academic theme for Hugo.