C3P-Oh my! The Robots are on Social Media

An editorial in PropertyCasualty360 makes the argument that RPA solutions are the next stage in claims fraud detection, utilizing intelligent automation to investigate social media sites to find evidence to prove (or disprove) the substance of a claim. Insurers have long used a human workforce to uncover fraudulent filings, and social media has been a tool of self-incrimination since its emergence. There are many stories of adjusters finding photos of someone water skiing while on disability for a back injury, or of a family member bragging about tricking the insurance company. Since such a manual search on every claim would be costly and time-consuming, insurers tend to reserve this approach for claims that have raised an investigator’s suspicions in other ways, via a human review or a triggered predictive algorithm. Automating a social media review using RPA-enabled technology could potentially increase the percentage of claims that receive this external validation.

I find this idea intriguing and a little controversial. It’s both a natural extension of RPA solutions and a break from how insurers are currently approaching the technology.

Insurer RPA Use Cases Are All About Internal Automation

So far, RPA has been primarily used by insurers for tactical integration between systems, replacing human data reentry with screen-scraping algorithms. This is sometimes referred to as “swivel-chair automation” because of the way it simulates a single user turning back and forth between two system screens. While RPA vendors are eager to build a more strategic relationship with insurers, this simpler, single-purpose use case is the reality of how RPA enters the industry.
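To make the "swivel-chair" pattern concrete, here is a minimal sketch of the kind of field-level re-entry these bots perform. The system names and field mappings are hypothetical stand-ins (real RPA tools drive actual UI screens rather than dictionaries), but the core job, reading each field from one system and re-keying it into another, is the same.

```python
# Hypothetical field mapping: how a field is labeled on the claims
# system's screen vs. on the policy admin system's screen.
CLAIMS_TO_POLICY_MAP = {
    "claimant_name": "insured_name",
    "policy_no": "policy_number",
    "loss_date": "date_of_loss",
}

def reenter_claim(claims_screen: dict) -> dict:
    """Simulate the human 'swivel': read each field from the claims
    system and type it into the policy admin system."""
    policy_screen = {}
    for src_field, dst_field in CLAIMS_TO_POLICY_MAP.items():
        policy_screen[dst_field] = claims_screen[src_field]
    return policy_screen

if __name__ == "__main__":
    entry = reenter_claim({
        "claimant_name": "J. Doe",
        "policy_no": "PC-1042",
        "loss_date": "2019-03-14",
    })
    print(entry)
```

The key property of this use case is that both "screens" belong to the insurer, so the mapping only changes when the insurer changes it.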

The critical difference between this approach and the social media investigation is that swivel-chair automation connects two or more internal insurer systems. This means the insurer implements RPA to learn systems that are fully under the insurer’s control. No updates or code edits happen to those RPA-automated systems without the insurer planning and preparing for the changes.

Using RPA to automate social media investigations means running algorithms against external systems. While this is not impossible (search engine spiders constantly scan the public web), it is significantly more fragile. Any script written against a site or system the insurer does not control can break as soon as the third party makes a change. The insurer must either be constantly prepared for its RPA process to fail and correct it rapidly, or rely on a vendor to do so.
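The fragility described above can be sketched in a few lines. The page markup and the "post-body" marker here are invented for illustration; the point is the fail-fast check, so that a third-party redesign surfaces as an alert to be corrected rather than as silently empty results.

```python
import re

class SiteStructureChanged(Exception):
    """Raised when a page no longer matches the layout we coded against."""

# Hypothetical layout assumption: posts live in 'post-body' divs.
POST_PATTERN = re.compile(r'<div class="post-body">(.*?)</div>', re.S)

def extract_posts(html: str):
    """Pull post text out of a page we do not control."""
    posts = POST_PATTERN.findall(html)
    if not posts:
        # The structure we depend on is gone: stop and escalate,
        # rather than reporting "no evidence found" on every claim.
        raise SiteStructureChanged("expected 'post-body' blocks not found")
    return posts
```

The day the third party renames that class, every run of this script fails, which is exactly the operational burden internal swivel-chair automation never carries.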

While this difference does not invalidate using RPA to automate integrations with external sites and systems, it does introduce a new variable, and insurers are likely to stick to internal use cases until they are comfortable with the technology space as a whole.

Is It Legal?

There are already questions about terms of use for insurers investigating social media feeds for information on their applicants or claimants, and this doesn’t necessarily change that. Insurers need to make sure they aren’t running afoul of any privacy laws or usage rules when they start scraping social media data into their own databases. But RPA automation complicates the matter further. Regulatory bodies or the social media platforms themselves may take a more accepting stance toward an insurer that looks into a claimant when a human being at that insurer already has a credible suspicion of fraud. When the process is instead automated and scaled up, broadening the target group to all claimants, the rules may change.

Even If It’s Legal, Is It Ethical?

There will almost certainly be a big difference in how the public responds to an insurer checking social media sites on potentially fraudulent claims compared to how the public feels about an insurer automating social media data collection for all claimants. The first approach already raises some eyebrows over privacy but is generally treated as a reasonable scenario. But no one wants to feel that filing a claim, any claim, with their insurance company means the company will immediately begin to spy on their online activity, regardless of cause. Insurers will therefore still need to bundle any fraud detection RPA with other algorithms or human review to judge potential fraud; otherwise they risk alienating their policyholders.
