
Lawsuit: Health Insurer's AI Tool 'Illegally' Denies Claims

Plaintiffs Say UnitedHealthcare Algorithm Rejects Coverage for Elderly Patients

The estates of two deceased UnitedHealthcare Medicare Advantage policyholders allege the insurance company is using an AI tool to illegally deny elderly plan members necessary coverage for post-acute care, including skilled nursing and home healthcare, according to a proposed class action lawsuit filed this week in the U.S. District Court for the District of Minnesota.


The lawsuit, filed Tuesday in Minnesota federal court, alleges that the insurance giant uses a tool called nH Predict, an AI model from its NaviHealth subsidiary, to deny medically needed coverage to plan members.

The algorithm for nH Predict determines Medicare Advantage patients' coverage criteria in post-acute care settings with "rigid and unrealistic predictions" for recovery, the lawsuit alleges.

UnitedHealth Group and its UnitedHealthcare and NaviHealth subsidiaries are all named as defendants in the lawsuit, which was filed by the estates of two late Medicare Advantage plan members, Gene Lokken and Dale Henry Tetzloff, both of whom were allegedly denied certain post-acute care coverage by the insurer.

The lawsuit claims the company's use of the nH Predict AI Model directs UnitedHealthcare medical review employees to prematurely stop covering care without considering an individual patient's needs. The use of the tool to deny the members' post-acute coverage is "systematic, illegal, malicious, and oppressive," alleges the lawsuit.

UnitedHealth uses the nH Predict tool to deny claims in order to save the substantial sums the company would otherwise spend covering medically needed post-acute care for Medicare Advantage policyholders, as well as the labor costs and time associated with conducting "an individualized, manual review of each of its insured's claims," the lawsuit says.

The insurer uses the nH Predict tool "to aggressively deny coverage because they know they will not be held accountable for wrongful denials," since most health plan members forgo the appeals process for their coverage denials, the lawsuit alleges.

The plaintiffs assert a long list of state and federal "insurance bad faith" violations, breach of contract and an assortment of other claims against UnitedHealthcare and its subsidiaries.

The litigation seeks actual, statutory, punitive and other monetary damages, plus an injunctive order for UnitedHealth to discontinue its allegedly improper and unlawful claim handling practices.

UnitedHealth Group Statement

UnitedHealth Group, in a statement to Information Security Media Group, disputed the lawsuit's claims. "The naviHealth predict tool is not used to make coverage determinations," the company said.

"The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home," it said.

"Coverage decisions are based on the Centers for Medicare and Medicaid Services' coverage criteria and the terms of the member's plan. This lawsuit has no merit, and we will defend ourselves vigorously."

Growing Scrutiny

The use of AI tools and other controversial practices by UnitedHealthcare and other insurance companies in Medicare Advantage coverage determinations has also come under recent scrutiny from Congress.

At a hearing in May, the Senate Homeland Security and Governmental Affairs Committee's subcommittee on investigations examined healthcare coverage denials and delays by Medicare Advantage health plans, including the use of AI tools such as UnitedHealthcare's naviHealth.

Some experts say that the expanding use of AI in healthcare presents enormous promise - but also substantial risk.

"The three biggest land mines for healthcare and insurers are adverse patient outcomes, discrimination and legal liability - whether through a medical malpractice suit, a False Claims Act case or a class action," said regulatory attorney Rachel Rose.

"Manipulated or skewed algorithms, which have discriminatory, data privacy and adverse patient outcomes - both clinical and financial - will be a focus of both regulators and civil attorneys."

Whether it is a diagnosis, coverage determination or some other clinical decision, "if the information going into the algorithm is skewed, then the outcome will be skewed," Rose said. Human oversight of AI is critical, she said.

The White House and some government agencies, such as the Federal Trade Commission, also have emphasized the importance of having a "human being check factor" in the use of AI, she said (see: Biden's Executive Order on AI: What's in It for Healthcare?).

"The bottom line is that neither human beings nor generative AI are going away any time soon. So having safeguards in place and working together throughout the process and use of AI is critical."

The controversy around health insurers' use of AI tools to allegedly deny certain coverage to some health plan members underscores the need for a carefully crafted regulatory framework for AI, some experts say.

"Any tool, including AI, can be put to beneficial or improper and illegal purposes," said attorney Steven Teppler, partner and chief cybersecurity legal officer at law firm Mandelbaum Barrett PC.

"Self-regulation for these tools in healthcare poses even greater risk than in the financial arena - consider what’s happened in the cryptocurrency arena," he said. "Some degree of regulation-imposed guardrails are needed to minimize the possibility of what, if true, is alleged in the complaint."


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience, with a focus on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek magazine and news site and played a lead role in the launch of InformationWeek's healthcare IT media site.



