Estimation of Reinsurance Risk Value Using the Excess of Loss Method

Like any business exposed to future losses, the insurance business needs protection against the risks it faces so that the company does not suffer a loss. Insurance companies therefore anticipate claims by ceding some or all of their risk to a reinsurance company. In the excess-of-loss reinsurance method there is a retention limit below which the reinsurer bears none of the claims incurred by the insurer. The results of this study show the average claim occurrence and the risk that a reinsurance company may face during the insurance period. The magnitude of the risk assumed by the reinsurer depends on the aggregate claim model, which is built from a distribution model for the individual claim sizes and a distribution model for the number of claims incurred in the insurance period. The magnitude of the risk is also determined by the insurer's retention limit and the reinsurance method used.


Introduction
Today many loans are offered under particular payment schemes in Indonesia, and many individuals, community groups, and businesses take out credit; not a few fail to repay because the debtor dies (a claim occurs) before the credit is paid off. This must be considered by reinsurance companies, because when a claim occurs the insurance company will submit it to the reinsurance company. Many credit businesses therefore insure these loans with insurance companies to avoid these risks (Dickson, 2016). The need for insurance services is increasingly felt by both individuals and businesses in Indonesia. Insurance is a financial tool of household life, whether in the face of fundamental risks such as the risk of death or of risks to property. In general, insurance companies manage their risk with risk-sharing methods, one of which is reinsurance: an insurance company can transfer some or all of the risks it faces to a reinsurance company (Bowers et al., 1997). Centino (1995) studied the upper limit as a function of retention for managing excess losses and compared it to the probability of ruin, noting that the upper bound given by the Lundberg inequality can be improved for finite-horizon probabilities, as demonstrated by Gerber in 1979. Mata and Verheyen (2005) developed a methodology for estimating excess-layer trend factors based on ground-up trend factors, the severity distribution, and limit profiles; the excess trend is divided into frequency and severity components, and they also presented a methodology for estimating the exposure adjustment factors needed for the projected limit profile. Najafadi-P, A.T. and Bazaz-P, A. (2017), in their article, combined proportional and stop-loss reinsurance contracts and introduced a new reinsurance contract called proportional-stop-loss reinsurance.
Using a balanced loss function, the unknown parameters of proportional-stop-loss reinsurance are estimated so that the expected surpluses of the insurance company and the reinsurance company are maximized.
In this paper we focus on calculating the claim risk in the implementation of a reinsurance program in credit life insurance. The object of study is credit life reinsurance using the collective risk model. The reinsurance agreement method used is excess-of-loss.

Reinsurance Excess-of-Loss
In the excess-of-loss method the reinsurance company accepts risk up to a certain value only after the loss exceeds a limit, called the retention limit, that is borne by the insurance company. The reinsurance agreement in this method thus allows the reinsurer (reinsurance company) to pay no claim at all, or to pay only the excess over the insurance company's retention limit (Dickson, 2016).
Excess-of-loss also specifies the maximum loss that can be borne by the insurance company; above this maximum loss the reinsurance company participates cumulatively up to the retention limit that has been set jointly (Klugman et al., 1990).
For payment of a claim X with a retention limit M:
- min{X, M} is borne by the insurance company
- max{0, X − M} is covered by the reinsurance company

Referring to Manurung (2011), if M is the retention limit and Z is the amount of claims that must be paid by the reinsurer, then mathematically the reinsurer will pay Z = max(0, X − M), where X is the amount of the claim. Following Hogg and Craig (1995), if F_Z denotes the distribution function of Z and f_X the density of X, the expected reinsurer payment is

E[Z] = ∫_M^∞ (x − M) f_X(x) dx.   (1)
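The excess-of-loss split above can be sketched as follows; the claim amount and retention limit in the usage line are hypothetical values for illustration, not taken from the paper's data.

```python
def split_claim(x, m):
    """Split a claim of size x under an excess-of-loss treaty with retention m.

    The insurer pays min(x, m); the reinsurer pays z = max(0, x - m).
    The two shares always sum to the original claim."""
    return min(x, m), max(0.0, x - m)

# Hypothetical claim of 120 against a retention limit of 100:
insurer_share, reinsurer_share = split_claim(120.0, 100.0)  # (100.0, 20.0)
```

A claim below the retention limit, e.g. `split_claim(80.0, 100.0)`, is borne entirely by the insurer, with the reinsurer paying nothing.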

Collective Risk Models in a Single Period
At the beginning of the insurance protection period, the insurer does not know how many claims will occur nor, if a claim occurs, how large it will be. A model is therefore needed that accounts for both sources of variability. Suppose N is the number of claims generated by the policy portfolio over a given period of time, and let X_1 denote the size of claim 1, X_2 the size of claim 2, and so on. According to Pramesti (2011), the collective claim amount is simply the sum of the individual claim amounts:

S = X_1 + X_2 + ⋯ + X_N.   (2)

The model is built on two fundamental assumptions:
a) X_1, X_2, … are identically distributed random variables
b) the random variables N, X_1, X_2, … are independent

The aggregate claim distribution is determined by the distribution of the individual claim sizes and by the distribution of the number of claims occurring in the period. The expectation and variance of the aggregate claim S are:

E[S] = E[N] E[X]   (3)
Var(S) = E[N] Var(X) + Var(N) (E[X])^2   (4)
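Equations (3) and (4) can be checked with a small Monte Carlo sketch. The Poisson claim count and exponential claim sizes below are illustrative choices (picked because their moments are simple), not the distributions fitted later in the paper.

```python
import random

def simulate_aggregate(lam, mean_x, n_sims, seed=1):
    """Monte Carlo check of the compound-model moments
    E[S] = E[N]E[X] and Var(S) = E[N]Var(X) + Var(N)E[X]^2,
    using N ~ Poisson(lam) and X ~ Exponential with mean mean_x."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Draw N ~ Poisson(lam) by counting exponential arrivals in [0, 1].
        n, t = 0, rng.expovariate(lam)
        while t < 1.0:
            n += 1
            t += rng.expovariate(lam)
        totals.append(sum(rng.expovariate(1.0 / mean_x) for _ in range(n)))
    mean_s = sum(totals) / n_sims
    var_s = sum((s - mean_s) ** 2 for s in totals) / n_sims
    return mean_s, var_s

# Theory for N ~ Poisson(5), X ~ Exp(mean 10): E[S] = 50, Var(S) = 1000.
mean_s, var_s = simulate_aggregate(5.0, 10.0, 20000)
```

For a compound Poisson sum, Var(N) = E[N] = λ, so equation (4) reduces to Var(S) = λ E[X²] = 5 × (100 + 100) = 1000, which the simulated moments should approximate.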

Stages of Analysis
There are several stages in calculating the risk of loss in credit reinsurance. The first stage is the collection of credit insurance claim data for the period January 1, 2009 to December 31, 2011. The claim data are then grouped by claim size (the amount of each claim) and by claim count (the number of claims), as required by the collective risk model. The claim size data are continuous and are grouped by total claim amount; the claim count data are discrete and are grouped by the number of claims occurring each month (Sunandi et al., 2003).
After the two data sets are grouped, the appropriate distribution of each is determined. Distribution fitting is carried out with the EasyFit 5.5 software. To validate the fits, the Kolmogorov-Smirnov test is applied to the claim size data (X) and the claim count data (N), and a distribution model is then built for each data set, following Siegel and Castellan (1988). Next, E[X] and Var(X) are calculated from the fitted distribution using the excess-of-loss method, and E[N] and Var(N) from the most appropriate count distribution (Déniz-G and Ojeda-C, 2013).
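The paper relies on EasyFit's built-in Kolmogorov-Smirnov test; the statistic itself is straightforward to compute, as this minimal sketch shows. The uniform CDF in the usage line is illustrative, not one of the paper's fitted distributions.

```python
def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)|,
    comparing the empirical CDF of `data` against a theoretical `cdf`."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at each sorted point,
        # so the supremum must be checked on both sides of the jump.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Two points tested against a Uniform(0, 1) CDF:
d = ks_statistic([0.25, 0.75], lambda u: u)  # D = 0.25
```

The fitted distribution is accepted at level α = 0.05 when D is smaller than the critical value (approximately 1.36/√n for large n), which is how H0 is judged in Tables 1 and 2 below.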
The next stage is to build an aggregate claim model (S) from the two fitted distribution models. Once the aggregate claim model is obtained, the risk is computed from the mean E[S] and variance Var(S) with Equations (3) and (4). These are the analytical steps carried out in this paper. The final result of this study is the most suitable distribution model for the claim count data and the individual claim size data, which are then used to calculate the aggregate claim risk with a collective risk model under excess-of-loss.

Distribution of Individual Claims Amount
Claims occur individually for each risk. The distribution model is determined by matching the histogram of the empirical density with the density curves of candidate continuous distributions, using the EasyFit 5.5 software. From the curve matching, the eight most suitable distributions are Beta, Burr, Generalized Logistic, Dagum, Generalized Extreme Value, Log-Gamma, Log-Logistic, and Lognormal.
Test results from EasyFit 5.5 at the 95% confidence level accept the distributions Pearson 6 (4P), Burr, Pearson 6, Pareto, Frechet (3P), Levy (2P), and Inv. Gaussian (3P). However, the selected distribution has estimated parameters that do not meet the theoretical requirements. For example, the Pareto 2 distribution has a finite mean only for α > 1 and a finite variance only for α > 2, but the estimated parameters are α = 0.50031 and β = 21448.0. Since α < 1, neither the mean nor the variance exists. Therefore, the claim size data must be transformed before they can be processed.

Transforming Large Data on Individual Claims
The transformation used is the natural logarithm, ln(x_i), where x_i is the size of each claim that occurs. The transformed values ln(x_i), computed with the Microsoft Excel program, are then used to determine a suitable distribution with the EasyFit 5.5 software.

Figure 1. Histogram and Curve Functions of Large Density Transformation Claims
Based on Figure 1, comparison of the density histogram with the fitted density curves shows that the most appropriate distribution for the transformed claim size data is the Burr distribution. The parameters estimated by maximum likelihood for the Burr distribution are k = 0.59842, α = 10.858 and β = 10.203. After the transformation, the selected distribution has estimated parameters that meet the theoretical requirements.
Furthermore, to confirm the fit, the selected distribution, namely the Burr distribution, is tested statistically with the Kolmogorov-Smirnov test.

Kolmogorov-Smirnov Test for Large Individual Claims
The results of the Kolmogorov-Smirnov test at the 95% confidence level (α = 0.05) are presented in Table 1. Because the test statistic (D) is smaller than the critical value, H_0 is accepted, which means the transformed data follow the Burr distribution.

Distribution Model of the Amount of Individual Claims
Since the claim size data were transformed, the fitted distribution must be transformed back to obtain the distribution of the original data. The transformation used was the ln function, so the inverse transformation back to the original data is the exponential function.
From the test results, the most suitable distribution for the transformed claim size data is the Burr distribution. Suppose Y = ln X has a Burr distribution with probability density function

f_Y(y) = (αk/β) (y/β)^(α−1) [1 + (y/β)^α]^(−(k+1)),  y > 0.

To obtain the distribution of X, a change of variables is carried out with the Jacobian method. Let X = exp(Y), so that y = g^(−1)(x) = ln x. The Jacobian of the transformation is

J = |dy/dx| = 1/x.

The probability density function of the random variable X is therefore

f_X(x) = f_Y(ln x) · (1/x) = (αk)/(βx) (ln x/β)^(α−1) [1 + (ln x/β)^α]^(−(k+1)),  x > 1.   (5)
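The Jacobian transformation can be sketched numerically with the fitted parameters reported above; this is a minimal check, assuming the standard Burr (Type XII) density, that the back-transformed density of X agrees with the density of Y = ln X.

```python
import math

# Fitted Burr parameters for Y = ln(X), as reported in the paper.
K, ALPHA, BETA = 0.59842, 10.858, 10.203

def burr_pdf(y):
    """Burr (Type XII) density of the transformed variable Y = ln(X):
    f_Y(y) = (alpha*k/beta) (y/beta)^(alpha-1) [1 + (y/beta)^alpha]^-(k+1)."""
    if y <= 0:
        return 0.0
    t = (y / BETA) ** ALPHA
    return ALPHA * K * t / (y * (1.0 + t) ** (K + 1.0))

def claim_pdf(x):
    """Density of the original claim size X = exp(Y), by the Jacobian
    method: f_X(x) = f_Y(ln x) * |dy/dx| = f_Y(ln x) / x, for x > 1."""
    if x <= 1.0:
        return 0.0
    return burr_pdf(math.log(x)) / x
```

A quick sanity check is that `burr_pdf` integrates to 1 over (0, ∞) and that `claim_pdf(exp(y)) * exp(y)` reproduces `burr_pdf(y)`, which is exactly the change-of-variables identity used in equation (5).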

The Risk of Large Claims with the Excess-of-Loss Method
It was found that the suitable distribution for the claim size data is the Burr distribution, so in the calculation of E[Z] and Var(Z) the probability density function used is the transformed Burr density in equation (5).
First the retention limit (M) is determined. In this study the retention limit is obtained by subtracting the standard deviation from the mean of the fitted claim size distribution, the Burr distribution, whose standard deviation is 1,368.619; this gives M = 89,980.02. Substituting the density function of equation (5) into equation (1) gives the claim size model for the excess. The resulting integral is very difficult to evaluate analytically, so numerical integration is needed. This research uses the Simpson 1/3 method, implemented with the help of Delphi software. For numerical integration the bounds must be finite, so the upper limit of integration is set at IDR 200,000,000, based on the maximum claim that occurred between January 1, 2009 and December 31, 2010. The calculation in Delphi with the Simpson 1/3 integration algorithm gives E[Z] = 1,137,820.52 with an error approaching −0.82168. From equation (6), the second moment needed for Var(Z) is obtained in the same way.
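The Simpson 1/3 computation of E[Z] can be sketched in a few lines (the paper's implementation was in Delphi). The exponential density in the usage line is illustrative, chosen because E[(X − M)+] = e^(−M) is known in closed form for a unit exponential; it is not the paper's Burr density.

```python
import math

def simpson_integrate(f, a, b, n=10000):
    """Composite Simpson 1/3 rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def expected_excess(pdf, m, upper, n=10000):
    """E[Z] = integral from M to `upper` of (x - M) f_X(x) dx, equation (1):
    the reinsurer's expected payment under excess-of-loss with retention M.
    `upper` truncates the improper integral, as done in the paper."""
    return simpson_integrate(lambda x: (x - m) * pdf(x), m, upper, n)

# Check against the unit exponential: E[(X - 1)+] = e^(-1) ≈ 0.367879.
ez = expected_excess(lambda x: math.exp(-x), 1.0, 50.0)
```

The truncation error from the finite upper bound is the analogue of fixing the integration limit at IDR 200,000,000 in the paper's computation.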

Data Distribution Number of Claims
The stages of analysis for determining the distribution of the claim count data are the same as those for the claim size data. The claim count data are discrete, so the candidate distributions include the Bernoulli, Binomial, Geometric, Logarithmic, Negative Binomial, and Poisson distributions.
The distribution model for the claim counts is again determined with the EasyFit 5.5 software. From the curve matching in Figure 3, two candidate distribution models are suitable for the claim counts, namely the Poisson distribution and the Negative Binomial distribution. Judging from the comparison of the histogram and the density curves, the most appropriate is the Negative Binomial distribution. The parameters estimated by maximum likelihood for the Negative Binomial distribution are n = 11 and p = 0.23521.
Matching the shape of a curve is not enough to determine the most suitable distribution model, so statistical analysis is required: the Kolmogorov-Smirnov test is applied to the Negative Binomial distribution.

Kolmogorov-Smirnov Test Number of Claims
To choose the most appropriate density function of the two candidate distributions, a goodness-of-fit test is used, namely the Kolmogorov-Smirnov test. The results of the Kolmogorov-Smirnov test with α = 5% for the claim count data are presented in Table 2.

Risk of Number of Claims
From the estimated parameters n = 11 and p = 0.23521, with q = 1 − p = 0.76479, the expected value and variance of the claim count are:

E[N] = nq/p = 11(0.76479)/0.23521 ≈ 35.77
Var(N) = nq/p^2 ≈ 152.06
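These moments follow directly from the parameterisation stated above (N counting failures before the n-th success, with q = 1 − p), as a minimal sketch:

```python
def negbin_moments(n, p):
    """Mean and variance of a Negative Binomial claim count N with
    q = 1 - p:  E[N] = n q / p,  Var(N) = n q / p**2
    (N counts failures before the n-th success)."""
    q = 1.0 - p
    return n * q / p, n * q / p ** 2

# Fitted parameters from the paper: n = 11, p = 0.23521.
mean_n, var_n = negbin_moments(11, 0.23521)  # roughly 35.77 and 152.06
```

Note that Var(N) = E[N]/p under this parameterisation, so the variance exceeds the mean, consistent with the overdispersion that favoured the Negative Binomial over the Poisson fit.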

Calculation of Risk using the Excess-of-Loss Method
From the mean and variance of S, the risks that might arise in the company can be calculated: under the collective risk model, the magnitude of the risk is given by the mean and variance of the aggregate claim. Applying equations (3) and (4) with E[N] ≈ 35.77 and E[Z] = 1,137,820.52, the expectation of S is obtained:

E[S] = E[N] E[Z] = IDR 40,696,106.93,

with Var(S) = E[N] Var(Z) + Var(N) (E[Z])^2 computed analogously.
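The final aggregation step is a direct application of equations (3) and (4); the numbers in the usage line are simple hypothetical inputs to show the formulas, since Var(Z) is not reported in the text.

```python
def aggregate_moments(mean_n, var_n, mean_x, var_x):
    """Moments of the aggregate claim S = X_1 + ... + X_N, equations (3)-(4):
      E[S]   = E[N] E[X]
      Var(S) = E[N] Var(X) + Var(N) E[X]^2
    """
    mean_s = mean_n * mean_x
    var_s = mean_n * var_x + var_n * mean_x ** 2
    return mean_s, var_s

# Hypothetical check: E[N]=2, Var(N)=1, E[X]=3, Var(X)=4 gives (6, 17).
mean_s, var_s = aggregate_moments(2.0, 1.0, 3.0, 4.0)
```

With the fitted values E[N] ≈ 35.77 and E[Z] = 1,137,820.52, the first formula reproduces the aggregate expectation of roughly IDR 40.7 million reported in the conclusion.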

Conclusion
Based on the results of data processing, the distribution model for the claim count data is the Negative Binomial distribution, and the claim size data follow the probability density function obtained by the ln(x) transformation of the Burr distribution. The aggregate claim model formed from the claim count distribution and the individual claim size distribution is a compound Negative Binomial model. The expected aggregate claim in credit life reinsurance under the excess-of-loss method is IDR 40,696,106.93, together with its variance from equation (4). This expectation is the average claim amount incurred by the company during the period January 1, 2017 to December 31, 2019, while the variance indicates the extent of the risk the company may face during the insurance period.