
AN EYE ON AI

Duncan Minty provides a window into the FCA’s thinking on artificial intelligence

Around five years ago, the UK Financial Conduct Authority (FCA) realised it needed to get to grips with big data and artificial intelligence (AI). It had some data scientists working in what was to become the Behavioural Economics and Data Science Unit (BDU). Their work had produced some eye-catching results, described as “the regulatory equivalent of the leap from black and white to glorious technicolour”. But how was the FCA to scale this up?

The FCA’s response was clever. It began building a partnership with the Alan Turing Institute (ATI), the UK’s national institute for data science and artificial intelligence. This provided the FCA with access to top data scientists from across 13 UK universities. And the FCA then used that partnership to attract top talent to its growing BDU team, with the best data scientist recruits being given a Visiting Fellow position at the ATI.

The FCA then went on to explore a wide range of financial conduct issues with the BDU’s new expertise. Two examples were models predicting the probability and location of adviser mis-selling of financial products, and analysis of the pricing of personal lines insurance products.

This data science capability is surprising many insurance professionals, who are used to a more prosaic form of engagement with the regulator, one that usually shows little sign of algorithmic expertise. Yet is that surprising? Policyholders do not have contact with insurers’ data scientists. Why should the regulator’s expertise be any more visible?

What many insurers experienced, though, were calls for big datasets, particularly in relation to the personal lines pricing review. The regulator’s new algorithmic models needed data – and this was the call for feeding time.

The FCA/ATI partnership has clearly proved fruitful: in July 2019, the two organisations went public about it at a conference on the policy and scientific implications of ‘AI Ethics in Financial Services’. A joint programme of work was announced but, perhaps of equal importance, a signal was sent to the market that the regulator was taking data and ethics seriously and that boards should do so too. This is what the FCA’s Christopher Woolard had to say: “If firms are deploying AI and machine learning, they need to ensure they have a solid understanding of the technology and the governance around it. This is true of any new product or service but will be especially pertinent when considering ethical questions around data. We want to see boards asking themselves, ‘What is the worst thing that can go wrong?’ and providing mitigations against those risks.”

MARKET IMPACT

So, what can insurance markets expect from all this? Certainly, a focus on the transparency and explainability of the artificial intelligence tools being used by firms. Guidance on this will be published in 2020 and firms should expect to see it presented firmly within the context of the Senior Managers & Certification Regime.

The guidance is expected to cover both corporate and social accountability, as well as explainability to boards, customers and the significant stakeholders in between. On paper, this is all pretty straightforward. The challenge for senior management function holders is to put it into practice and deliver the outcomes.

Insurance professionals have sometimes complained about the FCA being big on requirements, but short on advice on how to deliver them. AI ethics is not going to be different. However, a window into the FCA’s thinking can be found in the academic papers on AI accountability and explainability written by ATI academics such as Luciano Floridi, Sandra Wachter, Chris Russell and Brent Mittelstadt. These papers are detailed, but influential.
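
One idea from that literature worth making concrete is the counterfactual explanation: rather than exposing a model’s internals, tell the customer the smallest change to their circumstances that would have produced a different decision. The short Python sketch below illustrates the idea only – the scoring rule, feature names and threshold are all invented for this example, and are not drawn from any firm’s or the regulator’s actual models.

# A toy counterfactual explanation (illustrative only - the scoring
# rule and threshold are invented, not any firm's underwriting model).

def risk_score(age: float, past_claims: int, years_no_claims: float) -> float:
    # Invented scoring rule: higher means riskier.
    return 0.02 * age + 0.4 * past_claims - 0.1 * years_no_claims

def explain_decline(age: float, past_claims: int, years_no_claims: float,
                    threshold: float = 1.5) -> str:
    # Find the smallest reduction in past claims that would flip a
    # 'decline' into an 'accept' - a plain-language counterfactual.
    if risk_score(age, past_claims, years_no_claims) < threshold:
        return "Application accepted - no explanation needed."
    for fewer in range(1, past_claims + 1):
        if risk_score(age, past_claims - fewer, years_no_claims) < threshold:
            return (f"Declined. It would have been accepted with a "
                    f"past-claims count of {past_claims - fewer} "
                    f"rather than {past_claims}.")
    return "Declined. No single-feature counterfactual found."

print(explain_decline(age=45, past_claims=4, years_no_claims=2.0))
# -> Declined. It would have been accepted with a past-claims count
#    of 1 rather than 4.

Real counterfactual methods search across many features under distance and plausibility constraints, but the customer-facing output is the same kind of plain-language answer to “what would have had to be different?”.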

One further point emphasised by the FCA concerned how artificial intelligence might be used (inadvertently or otherwise) to facilitate anti-competitive behaviour by firms. So, while the FCA will firmly support market-level collaboration on, say, digital anti-money laundering projects, it will expect the data architecture to protect against anti-competitive behaviours. The ATI’s expertise will inform that FCA scrutiny.

We know that AI is going to be big in insurance. We should expect it to be big in the regulation of insurance too.


CONTROLLING DIGITAL RISK

Four things insurance firms should be doing:

  • Have a clear and effective governance structure for your digital projects, products and services;
  • Know the ethical risks that arise from how the firm is using data and analytics;
  • Control for those risks through a mix of existing and new policies and procedures;
  • Have outcomes evidence to show how well those controls are working.

Duncan Minty is an ethics consultant at Duncan Minty Consultancy

