Liz Booth examines how deep fake technology and Covid-19 are changing the nature of fraudulent claims
The profession’s relentless pursuit of insurance cheats to protect honest customers is delivering results, according to the Association of British Insurers (ABI).
Its most recent fraud figures were published last year, as the Covid-19 pandemic appeared to ease. Even then, however, the profession was being warned of the risk of increasing numbers of fraudulent attempts relating to Covid-19.
The ABI figures showed that, even without the latest blow to the economy, the equivalent of nearly 300 fraudulent claims and more than 2,000 dishonest applications were being detected every day.
However, while insurers are seeing more crash-for-cash scammers being jailed – and, earlier this year, an insurance employee receiving an eight-month suspended sentence for selling claimants’ details – these successes could be nothing in comparison to the threat ahead.
Many insurers use automated systems to weed out potentially fraudulent claims as well as voice technology to detect stress. These have been heralded as major tools for insurers in their fight against fraud.
However, as we all sadly know, the criminals are always one step ahead – and now a cyber analytics specialist is warning that the use of ‘deep fake’ video and audio technologies could become a major cyber threat to businesses within the next two years.
In a new report, Social Engineering: Blurring Reality and Fake, CyberCube says the ability to create realistic audio and video fakes using AI and machine learning has grown steadily. In addition, recent technological advances and the increased dependence of businesses on video-based communication have accelerated developments.
“Because of the increasing number of video and audio samples of business people now accessible online – in part due to the pandemic – cybercriminals have a large supply of data from which to build photo-realistic simulations of individuals, which can then be used to influence and manipulate people,” it warns.
“In addition, ‘mouth mapping’ – a technology created by the University of Washington – can be used to mimic the movement of the human mouth during speech with extreme accuracy. This complements existing deep fake video and audio technologies.”
The report’s author, CyberCube’s head of cybersecurity strategy Darren Thomson, says: “As the availability of personal information increases online, criminals are investing in technology to exploit this trend.
“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral – only it’s not the real Elon Musk. Or a politician announces a new policy in a video clip, but once again, it’s not real. It’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”
The report also examines the growing use of traditional social engineering techniques – exploiting human vulnerabilities to gain access to personal information and protection systems.
One facet of this is social profiling, the technique of assembling the information necessary to create a fake identity for a target individual based on information available online or from physical sources such as rubbish or stolen medical records.
According to the report, the blurring of domestic and business IT systems created by the pandemic, combined with the growing use of online platforms, is making social engineering easier for criminals. In addition, AI technology is making it possible to create social profiles at scale.
The report warns insurers that there is little they can do to combat the development of deep fake technologies but stresses that risk selection will become increasingly important for cyber underwriters.
Mr Thomson says: “There is no silver bullet that will translate into zero losses. However, underwriters should still try to understand how a given risk stacks up to information security frameworks. Training employees to be prepared for deep fake attacks will also be important.”
Insurers should also consider the potential of deep fake technology to create large losses, as it could be used in an attempt to destabilise a political system or a financial market.
Liz Booth is contributing editor of The Journal
- 107,000 fraudulent insurance claims worth £1.2bn uncovered by insurers in 2019. That is a new scam uncovered every five minutes – 300 a day.
- Frauds worth £3.3m detected every day.
- A bodybuilder, police officer and a trainee GP among the cheats exposed.
Source: Association of British Insurers
The Insurance Fraud Bureau says there is currently at least one insurance scam taking place every minute, costing at least £3bn a year.
Delivery driver job ads: Fraudsters are using recruitment to phish for personal information. They tell job seekers that their application has been successful and then ask for personal details, including insurance policy details, which are then used for crash-for-cash scams.
Compensation scams: The fraudster contacts someone out of the blue and tells them they are due compensation. If convinced, personal details are handed over and the fraudster will steal their identity or bank funds. The victim is often also encouraged to make a fraudulent insurance claim.
Ghost broker scams: A fraudster poses as an insurance provider to target people who are struggling financially with unrealistically cheap fraudulent insurance deals. These fraudsters are known for selling fake car insurance. The IFB reports a doubling in this kind of scam in recent years.