This is part 1 of an ongoing investigative series.
An algorithm, not a doctor, predicted a rapid recovery for Frances Walter, an 85-year-old Wisconsin woman with a shattered left shoulder and an allergy to pain medicine. In 16.6 days, it estimated, she would be ready to leave her nursing home.
On the 17th day, her Medicare Advantage insurer, Security Health Plan, followed the algorithm and cut off payment for her care, concluding she was ready to return to the apartment where she lived alone. Meanwhile, medical notes from June 2019 showed that Walter’s pain was maxing out the scale and that she could not dress herself, go to the bathroom, or even push a walker without help.
- Part 2: Denied By AI: How UnitedHealth’s acquisition of a popular Medicare Advantage algorithm sparked internal dissent over denied care
- Part 3: Denied By AI: UnitedHealth pushed employees to follow an algorithm to cut off Medicare patients’ rehab care
- Part 4: Denied By AI: UnitedHealth used secret rules to restrict rehab care for seriously ill Medicare Advantage patients
AI is being used as a means of diverting blame from humans onto a black box. It’s not inherently bad in itself, but the current hype around it is allowing it to be used in ways it shouldn’t be.