AI Cannot Be Used To Deny Health Care Coverage, Feds Clarify To Insurers
An anonymous reader quotes a report from Ars Technica: Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo (PDF) sent to all Medicare Advantage insurers. The memo -- formatted like an FAQ on Medicare Advantage (MA) plan rules -- comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.
According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don't match prescribing physicians' recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth's MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.
It's unclear how exactly nH Predict works, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it accounts for only a small set of patient factors rather than a full look at a patient's individual circumstances. This is a clear no-no, according to the CMS's memo. For coverage decisions, insurers must "base the decision on the individual patient's circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician's recommendations, or clinical notes would not be compliant," the CMS wrote. "In all, the CMS finds that AI tools can be used by insurers when evaluating coverage -- but really only as a check to make sure the insurer is following the rules," reports Ars. "An 'algorithm or software tool should only be used to ensure fidelity' with coverage criteria, the CMS wrote. And, because 'publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time' or apply hidden coverage criteria."
The CMS also warned insurers to ensure that any AI tool or algorithm used "is not perpetuating or exacerbating existing bias, or introducing new biases." It ended its notice by telling insurers that it is increasing its audit activities and "will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws."