
Feds Have Warned Medicare Insurers That ‘AI’ Can’t Be Used To (Incompetently And Cruelly) Deny Patient Care

by Karl Bode, from Techdirt

AI" (or more accurately language learning models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology's deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Case in point: AI's" rushed deployment in journalism has been akeystone-cops-esque mess. The brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result:plagiarism, bullshit, a lower quality product, and chaos.

Not to be outdone, the U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of an already very broken system. Except here, human lives are at stake.

For example, UnitedHealthcare and Humana, two of the largest health insurance companies in the US, have been using "AI" to determine whether elderly patients should be cut off from Medicare benefits. If you've navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitscape it already is long before automation gets involved.

Not surprisingly, neither Humana's nor UnitedHealthcare's implementation of "AI" was done well. A recent STAT investigation of the system they're using (nH Predict) showed the AI made major errors 90% of the time, cutting elderly folks off from needed care prematurely, often with little recourse for patients or families. Both companies are now facing class actions.

Any sort of regulatory response has, unsurprisingly, been slow to arrive, courtesy of a corrupt and incompetent Congress. The best we've gotten so far is a recent memo from the Centers for Medicare & Medicaid Services (CMS), sent to all Medicare Advantage insurers, informing them they shouldn't use LLMs to determine care or deny coverage to members on Medicare Advantage plans:

"For coverage decisions, insurers must base the decision on the individual patient's circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician's recommendations, or clinical notes would not be compliant," the CMS wrote.

There's no indication yet of any companies facing actual penalties for the behavior. The memo notes that insurers can use AI to evaluate whether a plan is following its own rules, but it can't be used to sever grandma from essential care. Because, as we've seen repeatedly, LLMs are prone to error and fabulism, and executives at publicly traded companies are prone to fatal corner cutting.

Granted, this is only one segment of one industry where undercooked AI is being rushed into deployment by executives who see the technology primarily as a corner-cutting, labor-undermining shortcut to greater wealth. The idea that Congress, regulators, or class actions will lay down some safety guardrails before this kind of sloppy AI results in significant mass suffering is likely wishful thinking.
