Datafication, as we understand the term, means turning the currently invisible process of risk management into data. Datafication contrasts with digitization, which is the more common practice of making currently manual processes more efficient with technology.
For example, in a hyper-competitive insurance world, efficiency is prioritized, so digitization is all the rage.
But the difference in the objectives of the two processes matters more than the difference between the processes themselves. The objective of digitization is greater efficiency, not a fundamentally different or better process; the objective of datafication is to turn something into something that is fundamentally different and better.
Datafication will really matter to risk managers and insurers for a number of reasons:
- The core resource of risk management is data; the more and better data there is about a risk, the less ‘risky’ it is and the better it can be managed.
- Because more data means less risk and better risk management, more risk datafication is inevitable. Risk managers and insurers both need to figure out the most effective ways of developing reliable risk and risk management data.
- Lack of data is often cited as the number one reason enterprise-wide risk management (ERM) programs fail. ERM has emerged as best risk management practice, yet most organizations will never be big enough to produce enough of their own data to make it work easily; broad risk datafication could therefore bring the benefits of ERM to far more organizations than is currently possible.
- Datafication makes the most sophisticated prediction models currently in use better; for example, it will exponentially increase the number of variables available to a multivariate regression model.
- Datafication at scale could also enable AI-based deep learning to take the bias out of regression models, and therefore potentially out of risk management decision making altogether. In deep learning, rather than humans guessing which variables might be predictive and testing them with models, the models themselves identify the variables, and combinations of variables, that actually are predictive.
- Some insurers reasonably argue there is insufficient economic benefit in investing in improved prediction capabilities, because portfolio theory already eliminates enough of the volatility caused by the inherent inaccuracy of their current predictions. Others think underwriting accuracy is a source of competitive advantage. We agree with the latter, and if more datafication is inevitable anyway, the earlier insurers learn how to use emerging datafication techniques to their advantage, the better.
- Even if insurers are right about the economics of prediction accuracy as far as they themselves are concerned, the same does not apply to risk managers. Risk managers need the most accurate prediction information they can get to avoid potentially fatal losses that might, for example, impair an insurance portfolio by only a few percent. As we have said before, we think the insurers who enable the highest prediction accuracy for their customers, as well as for themselves, will be the most successful insurers in the future.
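The contrast drawn above between human-guessed regression variables and model-identified ones can be sketched in a toy way. Everything in the snippet below is a hypothetical illustration, not a real risk model: we generate synthetic "datafied" risk records with 20 candidate variables, of which only three actually drive losses, and use a simple correlation screen to stand in for a learned model that discovers the predictive variables itself rather than being told which to use.

```python
import random
import statistics

random.seed(42)

# Hypothetical "datafied" risk records: 20 candidate variables per record,
# but only variables 0, 1 and 2 actually drive the loss outcome.
N_VARS, N_RECORDS = 20, 500
records = [[random.gauss(0, 1) for _ in range(N_VARS)] for _ in range(N_RECORDS)]
losses = [r[0] * 2.0 + r[1] * 1.5 - r[2] * 1.0 + random.gauss(0, 0.5)
          for r in records]

def abs_correlation(xs, ys):
    """Absolute Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy))

# Instead of a human guessing which variables matter, screen every
# candidate variable against the outcome and keep the ones whose
# association clearly stands out from noise.
scores = [(abs_correlation([r[i] for r in records], losses), i)
          for i in range(N_VARS)]
selected = sorted(i for score, i in scores if score > 0.2)
print(selected)  # the variables the screen finds predictive
```

With enough records (more datafication), the screen reliably separates the genuinely predictive variables from the 17 noise variables; the same principle, at vastly larger scale and with non-linear combinations, is what deep learning brings to the table.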
We expect increased datafication to be the foundation on which closely aligned risk management and insurance will be built in future. Apart from anything else, and as an eighth reason datafication will be so important, we think one of its easier-to-achieve outcomes will be to eliminate the asymmetries of information that make insurance so inefficient.
So, the big issue isn’t that data matters; that much is obvious. The real challenge, for the hard-to-manage, very human-influenced risks we deal with, is what to datafy and how… More on that another time.