This is part 1 of an ongoing investigative series.
An algorithm, not a doctor, predicted a rapid recovery for Frances Walter, an 85-year-old Wisconsin woman with a shattered left shoulder and an allergy to pain medicine. In 16.6 days, it estimated, she would be ready to leave her nursing home.
On the 17th day, her Medicare Advantage insurer, Security Health Plan, followed the algorithm and cut off payment for her care, concluding she was ready to return to the apartment where she lived alone. Meanwhile, medical notes in June 2019 showed Walter’s pain was maxing out the scales and that she could not dress herself, go to the bathroom, or even push a walker without help.
- Part 2: Denied By AI: How UnitedHealth’s acquisition of a popular Medicare Advantage algorithm sparked internal dissent over denied care
- Part 3: Denied By AI: UnitedHealth pushed employees to follow an algorithm to cut off Medicare patients’ rehab care
- Part 4: Denied By AI: UnitedHealth used secret rules to restrict rehab care for seriously ill Medicare Advantage patients
This is one of the most annoying (and in this case dangerous) trends of the AI rush. The technology has potential for incredible value, but realizing that value depends entirely on the people deploying it and the structures they put in place to make sure it actually works.
I could see a world where the algorithm receives input on the patient's condition each day and updates its recommendations accordingly, like a Bayesian inference model. But that requires a statistician with some careful thought to set it all up, and then executives wouldn't be able to reduce headcount by several dozen because some guy sold them a black box that solves all their problems.
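To make that concrete, here's a minimal, hypothetical sketch of the idea: instead of a fixed "16.6 days" prediction, the model keeps a Beta posterior over how likely the patient is to be ready for discharge and updates it with each day's functional assessment (can dress, toilet, and ambulate without help). Everything here is made up for illustration: the priors, the milestones, and the discharge threshold are assumptions, not a reconstruction of any actual product.

```python
from dataclasses import dataclass


@dataclass
class ReadinessBelief:
    """Beta posterior over the probability the patient is ready for discharge."""
    alpha: float = 1.0  # pseudo-count of "ready" evidence (prior)
    beta: float = 9.0   # pseudo-count of "not ready" evidence (skeptical prior)

    def update(self, milestones_met: int, milestones_total: int) -> None:
        """Beta-Binomial update from one day's clinical assessment."""
        self.alpha += milestones_met
        self.beta += milestones_total - milestones_met

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)


def recommend_discharge(belief: ReadinessBelief, threshold: float = 0.8) -> bool:
    # Only flag the patient as ready once the posterior mean readiness is high;
    # otherwise the model keeps asking for more days of observation.
    return belief.mean >= threshold


if __name__ == "__main__":
    belief = ReadinessBelief()
    # Day-by-day assessments: (milestones met, milestones assessed),
    # e.g. dressing, toileting, walker use -- three items per day.
    daily_assessments = [(0, 3), (0, 3), (1, 3), (1, 3), (2, 3), (3, 3)]
    for day, (met, total) in enumerate(daily_assessments, start=1):
        belief.update(met, total)
        print(f"Day {day}: posterior readiness ~ {belief.mean:.2f}, "
              f"discharge recommended: {recommend_discharge(belief)}")
```

The design point is that the recommendation is driven by the patient's observed trajectory rather than a single up-front estimate, and a cautious prior means the model defaults to continued care until the daily evidence says otherwise.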
Oh I’m pretty sure THEY view it as a success. Old folks with large medical bills dying? That’s a feature, not a bug.