With AI already playing a large role in insurtech, Melissa Collett highlights the work the profession is doing to ensure it is a force for good
Judging from recent developments in insurtech, a future where artificial intelligence (AI) can think and act like human beings has arrived. AI is already being deployed by insurance startups and incumbents alike to transform the way we interact with insurance and risk.
There is much debate around whether AI in insurance is a force for good or could lead to negative outcomes for customers. For instance, AI could be used to make dramatically safer roads and make more accurate cancer diagnoses. Or it could lead to widespread unemployment as work tasks are increasingly done by robots.
CII'S CODE OF DIGITAL ETHICS
Ensuring ethical considerations are borne in mind when adopting and developing AI for insurance is key. That is why in 2018, the CII formed a Digital Ethics Forum, consisting of experts in digital insurance, to produce guidance for professionals on how ethical standards should apply in a digital age. Published in July 2019, Digital Ethics: A Companion to the Code of Ethics, is the first joint industry guidance on this topic between the CII, the Association of British Insurers and the British Insurance Brokers' Association.
The Digital Ethics Forum also produced the Digital Ethics Companion -- A Practical Guide, to look at examples of how the principles could apply in practice. It offers examples on how to deploy AI and data responsibly, including abiding by the spirit of the law and not just the letter, and anticipating unintended consequences when using data.
In addition to the Digital Ethics Forum, the CII has sought to engage with key stakeholders around the issues of ethics and AI in insurance, in particular the new Centre for Data Ethics and Innovation (CDEI). This is an independent advisory body set up by the UK government to identify measures needed to maximise the benefits of AI and data-driven technology for our society and economy. The CII gave input into a CDEI paper entitled AI and Personal Insurance. Published in September 2019, it looks at the key ethical issues around AI in insurance and suggests measures that could be taken to mitigate ethical risks.
AREAS OF CHANGE
According to the paper, the key areas where AI could affect insurance are onboarding, pricing and claims management:
- Onboarding could be affected by speeding up the ability to provide quotes;
- Pricing could be affected by making more precise and personalised risk assessments; and
- Claims management could be affected by identifying fraudulent behaviour (or potential behaviour).
In addition, AI could be used to advise customers on how to avoid potential risks. For instance, some insurance companies are already using fitness trackers or smart devices to steer policyholder behaviour and alert them to potential risks to their health and in the home. This could fundamentally change the nature of insurance from rectifying damage to preventing it from occurring.
DOWNSIDES OF AI
While AI could be a boon for policyholders, with increasing automation leading to lower premiums and less fraud, and opening up insurance to groups previously classed as risky (such as younger drivers, through the use of telematics), some fear that AI could lead to customer harm and unfair discrimination. For instance, a widely publicised investigation by The Sun newspaper found AI-fuelled price discrimination against applicants with ethnic minority names. Another intermediary was forced to abandon its plan to underwrite using public Facebook profiles after a newspaper exposé created a public backlash.
Critics point to three key areas of potential consumer detriment. Firstly, the collection and sharing of large data troves could impinge on privacy if done without the express consent of the customer. Secondly, hyper-personalised risk assessments could leave some individuals uninsurable by revealing previously unseen indicators of risk. And thirdly, new forms of nudging could emerge, with insurers altering the behaviour of customers in ways that could be viewed as intrusive.
RESPONSIBLE USE OF AI
The challenge for the profession will be to find common ground on what constitutes ethical use of AI. The CDEI cited the CII's digital code as a step in this direction. The CDEI has also called for more public engagement on acceptable use of data, as well as potential government intervention to ensure people have adequate access to insurance. It suggests the following measures are worthy of consideration by insurers: undertake data discrimination audits; review third-party data suppliers; and make data privacy notices more accessible. It also suggests that in addition to adhering to data protection standards, insurers should give customers the power to port risk profiles and establish clear lines of accountability for data and pricing.
Ultimately, more transparency is needed to avoid customer data being used in a way that feels 'creepy' to customers and reduces trust. For example, Aviva recently launched a Customer Data Charter and Axa has a Data Privacy Advisory Panel, both of which should help. As the fourth-largest insurance market in the world, UK firms have the power to influence the terms by which insurers across the world engage in AI and other data-driven technology; and far more than just UK customers stand to benefit.
So, before the robots take over, let's pause and consider how we can make them more human and ethical.
Melissa Collett is the professional standards director at the CII and a founder of the CII's Digital Ethics Forum