psKINETIC

Emotional Intelligence Yes, Rational Decision-Making No

It is widely accepted that machines and software outperform humans at solving repetitive problems and performing rote tasks. So far, CEOs and CTOs have assessed new technologies in terms of efficiency (more streamlined services, fewer people). However, we believe the real potential of insurance Automation and Machine Learning lies in making better, more consistent business decisions.

It has been estimated that the insurance industry could save up to US$400 billion (Autonomous NEXT, 2018) should its Machine Learning/AI potential be realised. However, unless traditional perceptions are challenged, the real benefits afforded by AI – higher margins, better customer experience (CX) and lower risk – will never be gained.

Due to advances in the behavioural sciences, the fallibility of human decision making has become increasingly clear. Yet, decision making is key to processes such as underwriting and claims processing.

But surely experts know their stuff? New research suggests that experienced professionals can be just as poor at decision-making as less experienced staff. A counter-intuitive finding from the world of chess is that AI-aided amateurs performed better, on average, than AI-aided professionals (Matias, 2017). Your ‘experts’ could therefore cost you twice over: they are more expensive, and they may be less able (or not humble enough) to leverage new technologies.

While most research into decision making has focussed on innate human biases, the phenomenon of random error – which Nobel Prize winner Daniel Kahneman calls ‘noise’ – is a problem which he argues algorithms (even simple rules) and AI can resolve more effectively. Noise is found both between individuals and within the same individual. In a series of experiments, expert radiologists gave different diagnoses of the same x-ray 20% of the time, whilst professional wine tasters rarely agreed with themselves when tasting the same wine twice. Psychological insights such as these have very real commercial applications.

In the world of insurance, people who hold the same role in an organisation and make similar decisions – underwriters, for example – are assumed to be interchangeable. However, in an experiment which refuted this assumption, Kahneman found over 50% variability in outcomes across pairs of underwriters. In practice, when assessing the same case, one underwriter may conclude the premium should be £7,000 and another £11,000 – a potentially significant hit to your bottom line, in either lost business or mis-priced risk!
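
That gap can be quantified. Below is a minimal sketch in Python using the relative-difference measure reported in Kahneman's noise audits (the absolute difference between two quotes divided by their average); the figures are the illustrative ones from the text, not client data, and the function name is our own.

```python
from statistics import mean

def noise_index(quote_a: float, quote_b: float) -> float:
    """Relative difference between two quotes on the same case:
    |A - B| divided by their average."""
    return abs(quote_a - quote_b) / mean([quote_a, quote_b])

# The example from the text: one underwriter quotes £7,000, another £11,000.
print(f"{noise_index(7_000, 11_000):.0%}")  # prints "44%"
```

A 44% spread on a single case is far from the near-zero figure most executives expect from supposedly interchangeable professionals.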

It is thought that the fallibility of human decision making stems in part from our brain’s inability to compute more than three to five variables at once. When the conceptual complexity of problems is manipulated to probe the limits of human information-processing capacity, results show a significant decline in both the accuracy and speed of problem-solving between three-way and four-way interactions (Halford, et al. 2005). This limitation is extremely significant: the ability to gauge the combined effects of multiple risk factors sits at the core of insurance. Wherever there is human judgement, there will be bias and noise.
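
The combinatorics make the point starkly. A short sketch (the factor counts are illustrative assumptions, not drawn from any particular rating model) of how the number of possible interactions among risk factors explodes far beyond the three-to-five variables a human can weigh at once:

```python
from math import comb

# Count the distinct k-way interactions (k >= 2) among n risk factors.
for n in (5, 10, 20):
    total = sum(comb(n, k) for k in range(2, n + 1))
    print(f"{n} risk factors -> {total} possible interactions")

# 5 risk factors  -> 26 possible interactions
# 10 risk factors -> 1,013 possible interactions
# 20 risk factors -> 1,048,555 possible interactions
```

A machine can evaluate all of these systematically; a human underwriter, by the research above, reliably handles only a handful.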

From the late 1980s, research began to emerge suggesting that the simple, rules-based decision making employed by algorithms is more accurate and more consistent than expert decision makers (Dawes, et al. 1989). Given the explosion of Machine Learning technologies – including the ability to integrate data from different sources and to evaluate the effects of many variables in ways the human mind cannot – the machine is clearly poised to win the race to optimise decision making.
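
For illustration only, here is a sketch of the kind of simple rule that research tradition describes: an equal-weighted sum of standardised risk factors (Dawes’ ‘improper linear models’). The factor names and values are invented for the example, not a real rating model.

```python
from statistics import mean, stdev

def unit_weight_scores(cases: list[dict]) -> list[float]:
    """Score each case by an equal-weighted sum of z-scored risk factors.
    Applied identically to every case, the rule is perfectly consistent:
    zero noise, by construction."""
    keys = list(cases[0].keys())
    stats = {k: (mean(c[k] for c in cases), stdev(c[k] for c in cases)) for k in keys}
    return [sum((c[k] - stats[k][0]) / stats[k][1] for k in keys) for c in cases]

# Hypothetical commercial-motor cases (illustrative figures only).
cases = [
    {"claims_history": 2, "fleet_age": 8, "annual_mileage": 40_000},
    {"claims_history": 0, "fleet_age": 2, "annual_mileage": 12_000},
    {"claims_history": 1, "fleet_age": 5, "annual_mileage": 25_000},
]
scores = unit_weight_scores(cases)
print(scores.index(max(scores)))  # prints 0: the highest-risk case always ranks highest
```

The point is not that this toy rule prices risk well, but that even a crude algorithm gives the same answer to the same case every time – something the noise research shows human experts cannot do.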

Insurance companies that do not reduce reliance on the human ‘expert’ in the decision-making process and implement Automation and AI will fall behind. But, for at least the next decade, these technologies will remain limited by their inability to recognise basic emotions or formulate an emotional response, meaning that humans will continue to possess superior emotional intelligence in the coming years. So take this opportunity to focus investment in human-centric activities such as client interaction and people management, and use the human-AI interface to your advantage.

Conclusions

When designing your processes or setting out your Automation or Machine Learning strategy, remember that human decision making is fallible, and prone to bias and noise. Do not prioritise investment in training humans to make better decisions; instead, apply your people where they can leverage empathy and emotional intelligence…of the genuine, not artificial, type.

When designing your delivery processes, the risk is placing too much emphasis on reducing visible costs (people) and not enough on eliminating noise to make better, more profitable decisions.

Better decisions are better for staff, better for shareholders and better for customers.

AUTHOR: Tim Hatzis | Account Executive, Insurance | psKINETIC

READ MORE about psKINETIC’s Insurance Software Solutions

Works Cited

Halford, G. S., Baker, R., McCredden, J. E. & Bain, J. D., 2005. How many variables can humans process?. Psychological Science, 16(1), pp. 70-76.

Matias, J. N., 2017. Bias and Noise: Daniel Kahneman on Errors in Decision-Making. [Online]
Available at: https://medium.com/@natematias/bias-and-noise-daniel-kahneman-on-errors-in-decision-making-6bc844ff5194
[Accessed 5th August 2019].

Dawes, R. M., Faust, D. & Meehl, P. E., 1989. Clinical Versus Actuarial Judgment. Science, 243(4899), pp. 1668-1674.

Autonomous NEXT, 2018. How is AI Disrupting the Banking Industry. s.l.: s.n.