Imagine your identity being stolen or misconfigured online, resulting in serious personal harm. The cause: an error rooted in Artificial Intelligence (AI), technology with no face or name.
So, who’s to blame? Is it the company that hosts the technology, the state that commissioned it, the worker who wrote a certain piece of code, or someone else altogether?
This game of Clue has had, and will continue to have, monumental consequences for years to come. UNM Law Professor Sonia Gipson Rankin, however, is one step closer to finding answers.
“What happens when a state turns to an algorithm to help the community and it actually harms the community?” Gipson Rankin asked.
In a paper soon to be published in the New York University Law Review, entitled “The MiDAS Touch: Atuahene’s ‘Stategraft’ and the Implications of Unregulated Artificial Intelligence,” Gipson Rankin explores the infamous Michigan Integrated Data Automated System (MiDAS) incident.
To pay down billions of dollars owed to the federal government in the wake of the Great Recession, Michigan set out in 2013 to modernize the Unemployment Insurance Agency (UIA) and cut what were seen as unnecessary expenses, turning to MiDAS.
After spending years and $47 million on the software, the state laid off hundreds of UIA workers. MiDAS was meant to automatically detect unemployment fraud, as well as determine unemployment eligibility, track cases, and monitor income tax refunds.
From October 2013 to September 2016, MiDAS did its job, so well in fact that fraud findings tripled to over 25,000 in just one year. Within two years, the total had surpassed 40,000. With unemployment fraud claims stretching back years, these tens of thousands of people faced fines 400% higher than usual. The charges produced $96 million, a glowing total that would have made a huge dent in Michigan’s debt.
The only issue was that 93% of these charges were wrong.
“The concern with AI being implemented in communities without proper oversight is that by the time we understand harm has occurred, it’s already harmed hundreds of thousands of people,” Gipson Rankin said.
Somewhere in its logic, the system skipped due process for individuals who were entirely in the right. It was permitted to automatically flag people, put in requests to the IRS to garnish wages, or deduct tax refunds, no matter how brief their unemployment or how long ago it had occurred.
When Michigan citizens called to see why this was happening, no one could give them an answer. State workers, similarly, found no evidence of fraud for the overwhelming majority of cases.
“When people called, there was no human who could explain what happened or why; the response basically became ‘the AI said you did this,’” she said.
At first, those accused turned to the UIA for answers. The UIA looked to the state. The state turned to technology vendors Fast Enterprises and SAS Institute. They turned to management consultant CSG Government Solutions.
They were all faced with the same predicament: a guessing game of who’s to blame.
“If I go after the state, they say the AI did it. If I go after the third party vendor, there’s a clause to protect them, saying the state made the choice. It leaves the actual person harmed by AI without a lot of options,” she said.
After multiple trips to court, the state of Michigan has so far agreed to pay $20.8 million in damages to make up the money deducted from those falsely accused of fraud.
That wasn’t enough, according to Gipson Rankin. Many of those affected felt the same.
In Cahoo v. SAS Analytics, the state argued that restitution had been satisfied through refunds. Plaintiffs countered that their due-process rights were violated beyond the financial losses, as they had to untangle themselves from the fraud allegations.
“How do I give back or address the fact that you may have had to file for bankruptcy? How do I address the fact that while all this was happening, you may have lost a new job because you’ve been marked in the systems as committing unemployment fraud?” Gipson Rankin said. “How do I address the fact that families may have broken up, that people were evicted from homes because of the marks that were put on their name?”
The Michigan Supreme Court sided with the plaintiffs, finding the state’s attempted “AI made me do it” defense insufficient.
Not only that, but the state is still working through the remaining payments.
As residents work to get their recourse, questions remain for legal minds like Gipson Rankin.
How can you prevent biases from existing in AI in the first place? Can a truly neutral technology result? And how far will AI go before these questions are answered?
“When technology is unregulated, it will flourish into all kinds of unique innovations. But there are some parts of it that, when unregulated, lead to grave disaster,” Gipson Rankin said.
In March 2022, Michigan Governor Gretchen Whitmer proposed allocating $75 million to replace MiDAS with a “human-centered” system.
Going forward, Gipson Rankin believes, groups and discussions need to be established to answer these questions before AI development gets too far into the weeds.
“I think we'll see a lot of what the AI community is doing continue to go underground, where people can't unpack where the harm came from,” she said.
She is also working with other professors on potentially developing an algorithmic justice course at UNM.
“It's going to require all of us at the table that can make it right from the very beginning,” she said.