Dominic Houlder, Luke James and Harry Korine consider strategic choice in health insurance and the ethics of using artificial intelligence
Health insurance is a prime example of an industry that faces radical transformation through AI. AI is already being used in preauthorisation of treatments (to lower costs and improve customer experience) and in claims fraud modelling, to detect providers billing inappropriately. In these instances, AI is optimising existing practices.
But the potential game-changer will be the use of AI to individualise the pricing of risk. The ability to price risk individually, and to do so dynamically on the basis of real-time monitoring, will sharpen the separation between luxury and need, with poorer risks potentially priced out of all but the basic care required by law or provided by the state.
This development will not take place overnight, but it is also not decades away. Along the path of transition, new players with business models built around intelligence will enter, and existing players will be compelled to make difficult strategic choices about competitive positioning, with far-reaching ethical implications for the affordability and coverage of health insurance. In the absence of an ethical code for commercial users, business leaders have an incentive to pursue the profit opportunities offered by AI single-mindedly, blind to the social costs and to the potential for creating social value beyond the economics of their own organisations.
The use of AI in health insurance
AI permits much greater individualisation and discrimination than was previously possible, down to the level of pricing individual risk on the basis of medical history, genetic profiles, proteomic analyses, real-time health-tracking apps, shopping habits and even data scrapes from social media – in short, the wider healthcare AI ecosystem. Drawing on this kind of data, AI can play a major role in how actuarial decisions are framed and made, such as the weighting of factors and the negotiation of prices – an area that has until now remained more art than science.
AI is already used extensively in motor insurance to price individual risk and, increasingly, to reduce risk by incentivising good driving through telematics. One explanation for the slower adoption of AI in health insurance is insurers' desire to maintain customer relationships built on trust. Insurers already hold a wealth of data about individual risk in the form of claims history, which could be used to predict future claims and price accordingly. Hitherto, however, health insurers have avoided using existing data in this way, on account of customer expectations of fair and reasonable treatment and the danger to their reputation. Nonetheless, AI is already becoming part of risk management: identifying which claims are likely to occur, and when, based on claims history.
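The mechanics of this kind of individualised, data-driven pricing can be illustrated with a stylised sketch. Everything below – the features, the weights, the logistic scoring and the pricing formula – is hypothetical, invented purely for illustration rather than drawn from any insurer's actual models:

```python
# Stylised sketch of individualised risk pricing.
# All features, weights and formulas are hypothetical.
import math

def risk_score(age: int, claims_last_5y: int, daily_steps: int) -> float:
    """Toy logistic score combining demographic, claims-history and
    health-tracking signals into a probability-like value in (0, 1).
    The weights are invented for illustration."""
    z = -4.0 + 0.05 * age + 0.6 * claims_last_5y - 0.0001 * daily_steps
    return 1.0 / (1.0 + math.exp(-z))

def individual_premium(base_premium: float, score: float) -> float:
    """Scale a pooled base premium by the individual's risk score,
    replacing a single community rate with a personal price."""
    return round(base_premium * (0.5 + 2.0 * score), 2)

# A low-risk and a high-risk profile priced against the same base premium.
low = individual_premium(100.0, risk_score(30, 0, 12000))
high = individual_premium(100.0, risk_score(65, 3, 2000))
```

Even in this toy version, the ethical tension described above is visible: the same pooled base premium fans out into very different individual prices, and adding real-time signals (such as step counts) makes the price dynamic rather than fixed at underwriting.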
The evolving economy of health insurance
Leaving the regulator and the customer aside for the moment, we can identify three broad categories of economic actor contributing to the pricing of risk: intelligence producers, intelligence aggregators and intelligence users, with some actors active in multiple categories. Looking at the three categories in detail:
Intelligence producers – economic actors that produce the intelligence that risk pricing can be based upon, such as genetic profilers, proteomic evaluators and health-tracking apps, but also data giants such as Amazon or Google, which can compile large amounts of health-relevant data. Although rarely equipped to do so, healthcare providers could also produce intelligence.
Intelligence aggregators – economic actors that are in a position to pull together and sell or use health-relevant data for the purpose of risk pricing. Here again, Amazon and Google figure prominently, at least in terms of capability and potential impact, as does Tempus.
Intelligence users – starting with traditional health insurance companies (with a store of claims history data), as well as health management organisations, but extending to any economic actor legally permitted to engage in risk pricing for the purpose of selling health insurance.
At first glance, intelligence producers do not appear to face significant ethical questions in their choice of strategy – after all, companies such as 23andMe or Applied Biomics merely produce the intelligence for others to work with. However, once clear predictive relationships have been demonstrated, the intelligence they produce can be used to fuel the individualisation of risk pricing and accelerate the dystopian race to the bottom we have sketched. Thus, to whom they sell their products and what conditions they attach to the use of their products are ethical questions that decision-makers at companies such as these will need to ask themselves.
Similar reasoning applies to intelligence aggregators, if they decide not to use the data they collect for themselves but merely to sell it on as a product to intelligence users such as health insurance companies. As sellers, they will need to ask who they are willing to sell to, and if profit will be the only criterion guiding the choice of partner. If, on the other hand, intelligence aggregators decide to enter the business of health insurance, they will need to think about how the intelligence they have put together will be used.
At the interface between the health insurance ecosystem and the customer stand intelligence users, which will ultimately draw on data and AI algorithms to determine how risk is priced and how customers are served. If they focus exclusively on capturing value by using artificial intelligence to individualise the price of risk, the result will be a race to the bottom. In such a scenario, companies that can play all along the health insurance value chain, and are not encumbered by governance structures based on solidarity or value systems rooted in the professional code of the physician, are particularly well placed not only to lead the race but to win it outright.
Other intelligence users, particularly those with a long history in healthcare, may build on their mix of capabilities and their backbone of values to chart a course that maintains some degree of solidarity. For an integrated healthcare company, for example, AI provides an unprecedented opportunity to create value by working with customers throughout their lives to improve health, actively reduce modifiable future risk, reward positive behaviours and ultimately prevent disease from developing in the first place, with the associated benefits for customers and for healthcare costs. Much more than in the past, health-focused insurers will have to work with customers to educate and counsel them about the possibilities and pitfalls the new technology offers.
Customers, particularly younger customers, may be more willing to share personal data with the health insurance ecosystem if there are clear benefits to doing so, such as better long-term health and/or cheaper prices. Doctors play a critical role in building and maintaining customer (patient) trust. Regulators, for their part, hold multiple levers: the extent of basic care coverage, the reach of data privacy laws, the scope of legal use of new technologies such as genomics and proteomics, and the enforcement of anti-discrimination policies in health insurance. Where health insurance is private – that is, in countries without national health insurance and in segments of the market that are considered luxury and hence not covered by national health insurance – the potential for disruption by AI is greatest: customers will be incentivised by novel offers, for example predictive screening or dynamic pricing, and regulators will be slow to respond.
New, competing ecosystems
Given the decentralised nature of the developments we have described, and in light of differing systems of national regulation and healthcare provision, multiple health insurance ecosystems may emerge from the transformative impact of AI. These ecosystems may be organised around the visions of their members: value creation through improved long-term patient health and inclusion, versus value capture through optimised risk pricing and exclusion. Such a bifurcation would imply that players make explicit ethical choices about which risk pricing strategy to pursue and whom to partner with.
As the failure of Haven, the now-disbanded healthcare joint venture between Amazon, JP Morgan and Berkshire Hathaway, indicates, getting different types of actors to work together is far from easy. In any case, Haven is only one example of recent efforts by Amazon, Google, Apple and Facebook to enter the (health) insurance space and partner with insurers. These moves represent the beginnings of alternative health insurance ecosystems. Having links with intelligence producers as well as with intelligence aggregators, and facing the customer, health insurers are well placed to take a position of leadership, but it is unclear whether they have the partnering skills to effectively take charge of an ecosystem. If they do not succeed in growing ecosystems around their key competencies, traditional health insurers risk being disintermediated or commoditised.
Although we have focused on the health insurance sector in this paper, it is worth noting that other insurance sectors, such as life and automotive, and other fields where risk pricing is central, such as credit, face similar upheaval and similarly dystopian outcomes if AI-enabled individualised risk pricing, combined with advancing technology, comes to prevail. To the extent that pooling and some degree of social solidarity also characterise these sectors, users of AI and their suppliers will face ethical dilemmas comparable to those we have described for health insurance, but without the professional moral code of healthcare to guide them.
As in the case of health insurance, where poorer risks (such as the elderly, those with pre-existing conditions, or those with high-risk genomic or proteomic profiles) may not be able to obtain cover and have to resort to government care, the state may also have to take on a much larger role in these sectors, as an insurer or lender of last resort for consumers who are priced out of the market.
Dominic Houlder is Adjunct Professor of Strategy at the London Business School.
Luke James is Group Medical Director, Provision Standards and Outcomes at BUPA.
Harry Korine teaches Corporate Governance at the London Business School and the Hochschule St.Gallen and Global Strategy at INSEAD in Fontainebleau and Singapore.