Proponents of AI and other optimists are often ready to acknowledge the numerous problems, threats, dangers, and downright murders enabled by these systems to date. But they also dismiss critique and assuage skepticism with the promise that these casualties are themselves outliers — exceptions, flukes — or, if not, they are imminently fixable with the right methodological tweaks.
Common practices of technology development can produce this kind of naivete. Alberto Toscano calls this a “Culture of Abstraction.” He argues that logical abstraction, core to computer science and other scientific analysis, influences how we perceive real-world phenomena. This abstraction away from the particular and toward idealized representations produces and sustains apolitical conceits in science and technology. We are led to believe that if we can just “de-bias” the data and build in logical controls for “non-discrimination,” the techno-utopia will arrive, and the returns will come pouring in. The argument here is that these adverse consequences are unintended. The assumption is that the intention of algorithmic inference systems is always good — beneficial, benevolent, innovative, progressive.
Stafford Beer gave us an effective analytical tool for evaluating a system without getting sidetracked by arguments about its intent rather than its real impact. This tool is called POSIWID, and it stands for “The Purpose of a System Is What It Does.” This analytical frame provides “a better starting point for understanding a system than a focus on designers’ or users’ intention or expectations.”
I’m not disabled, and I’ve had the same problems with HMO healthcare.
Those organizations drive decisions based on statistics, not the individual. I’ve seen my doctors working to find ways to describe/categorize my problems so they could justify the treatment they felt was most appropriate (and only after working through numerous doctors in the organization - one actually said “Well, I guess you’re just going to have to learn to live with the pain”).
Walking into an independent doctor’s office is completely different - they’re quick to work toward a solution, and to move to a different approach when they see things aren’t improving, because they don’t have to justify their actions to a risk/cost-management board.
Interestingly, the HMOs don’t hesitate to do surgeries. I never had any pushback there, even for things with moderate risk but relatively low need.
I understand this is partially because I have the mindset of the programmer they’re referring to, but this sounds really interesting.
Rather than looking to big data for solutions to hegemonically defined problems, what if we used it to find the catalysts of inequality themselves
…
What are the conditions in which the outlier is culled? What if we used AI to identify the pruning mechanism and dismantle it?
Using more in-depth analysis of what gets pruned to understand why it’s being pruned is a very interesting way to find marginalized groups.
I don’t know how to fix those underlying problems, but identifying them and showing that data to leaders seems like a really worthwhile endeavor.
That kind of analysis is done all the time. But, even if we can collect all the relevant data (big if), the methods required are difficult to interpret and easy to abuse (we can’t do an RCT of being born female vs male, or black vs white, &c). A good example is the proliferation of analyses claiming that the gender pay gap does not exist (after you’ve ‘controlled’ for all the things that cause the gender pay gap).
It’s not easy to do ‘right’ even when done in good faith.
The article isn’t claiming that it is easy, of course. It’s asking why power is so keen on one type of question and not its inverse. And that is a very good question, albeit one with a very easy answer. Power is not in the business of abolishing itself.
after you’ve controlled for all the things that cause the gender pay gap
Isn’t that a continuation of “why the outlier was culled”?
More emphasis on how the data set is selected (while hard) would be very useful.
Isn’t that a continuation of “why the outlier was culled”?
Not sure I follow, but I think the answer is “no”.
If you control for all the causes of a difference, the difference will disappear. Which is fine if you’re looking for causal factors which are not already known to be causal factors, but no good at all if you’re trying to establish whether or not a difference exists.
It’s really quite difficult to ask a coherent question with real-world data from the messy, complicated reality of human beings.
A simple example:
Women are more likely to die from complications after a coronary artery bypass.
But if you include body surface area (a measure of body size) in your model, the difference between men and women disappears.
And if you go the whole hog and measure vein size, the importance of body size disappears too.
And, while we can never do an RCT to prove it, it makes perfect sense that smaller veins would increase the risk for a surgery which involves operating on blood vessels.
None of that means women do not, in fact, have a higher risk of dying after coronary artery bypass surgery. Collect all the data which has ever existed and women will still be more likely to die from the surgery. We have explained the phenomenon and found what is very likely to be the direct cause of higher mortality. Being a woman just makes you more likely to have that risk factor.
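As a minimal sketch of that point, here’s a simulation with invented numbers (the vein sizes and risk coefficients are made up for illustration, not taken from any study). Sex affects mortality only through vein size, yet the raw sex difference is perfectly real, and “controlling” for vein size makes it vanish:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

female = rng.integers(0, 2, n)                 # 0 = male, 1 = female
vein_mm = rng.normal(4.0 - 0.7 * female, 0.5)  # women: smaller veins on average

# Death risk depends on vein size only; sex never enters directly.
p_death = 1.0 / (1.0 + np.exp(3.0 + 1.2 * (vein_mm - 4.0)))
death = rng.random(n) < p_death

# Unadjusted comparison: women really are more likely to die.
print(death[female == 1].mean(), death[female == 0].mean())

# 'Controlled' comparison: within a narrow band of vein size,
# the sex difference disappears.
band = (vein_mm > 3.4) & (vein_mm < 3.6)
print(death[band & (female == 1)].mean(), death[band & (female == 0)].mean())
```

The first print shows women dying at roughly twice the male rate; the second shows near-identical rates. The difference hasn’t been disproven, it’s been explained - exactly the distinction between finding a cause and pretending the gap doesn’t exist.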
It is rare that the answer is as neat and simple as this. It is very easy to ask a different question from the one you thought you were asking (or pretend to be answering one question when you answered another).
You can’t just throw masses of data into a pot and expect sensible answers to come out. This is the key difference between statisticians and data scientists. And, not to throw shade on data scientists, they often end up explaining to the world that oestrogen makes people more likely to die from complications of coronary artery bypass surgery.
Maybe it’s a crude interpretation, but over-controlling for all the causes of a change, and removing outliers from the data that trains these AI models, seem like similar issues when you’re trying to actually understand the data.
The data cannot be understood. These models are too large for that.
Apple says it doesn’t understand why its credit card gives lower credit limits to women than men, even when they have the same (or better) credit scores, because it doesn’t use sex as a data point. But it’s freaking obvious why, if you have a basic grasp of the social sciences and humanities. Women were not given the legal right to their own bank accounts until the 1970s. After that, banks could be forced to grant them bank accounts, but not to extend the same amount of credit. Women earn and spend in ways that are different, on average, from men’s. So the algorithm does not need to be told that the applicant is a woman; it just identifies her as the sort of person who earns and spends like the class of people with historically lower credit limits.
Apple’s ‘sexist’ credit card investigated by US regulator
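As a toy sketch of that proxy effect (all numbers invented, and nothing like Apple’s actual model, which is not public): a regression that never sees a sex column, but is trained on historical credit decisions, still hands women lower limits, because the historical feature carries sex for it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

female = rng.integers(0, 2, n)                  # never shown to the model
income = rng.normal(60_000, 15_000, n)
# Historical limits were set lower for women; that history becomes a feature.
past_limit = 0.30 * income * (1.0 - 0.25 * female) + rng.normal(0, 1_000, n)

# Fit new_limit ~ income + past_limit on yesterday's (biased) decisions.
X = np.column_stack([np.ones(n), income, past_limit])
y = 0.05 * income + 0.90 * past_limit + rng.normal(0, 2_000, n)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Women get systematically lower predicted limits, with no sex input at all.
pred = X @ coef
print(pred[female == 1].mean(), pred[female == 0].mean())
```

Removing the sex column changes nothing, because past_limit already encodes it. The model is blind to the data point, not to the pattern.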
Garbage in, garbage out. Society has been garbage for marginalised groups since forever, and there’s no way to take that out of the data. Especially not big data. You can try, but you just end up playing whack-a-mole with new sources of bias, many of which cannot be measured well, if at all.
You are pointing out specific biases that we already know about. The article you posted seems to posit using the data to find the unknown biases we have as well.
It’s asking why we don’t use it for that purpose, not suggesting that there is anything easy about doing so. I don’t know how you think science works, but it’s not like that.
Proponents of AI and other optimists are often ready to acknowledge the numerous problems, threats, dangers, and downright murders enabled by these systems to date
Edit: I see from the comments this is about insurance carriers… in that case it’s not tinfoil hat at all. The wording I quoted sucks though because it’s not the AI doing it any more than it’s the hammer that drives a nail sideways.
Where did you get insurance carriers from?
No idea what your post, before or after edit, is trying to say. But the subject of your quoted sentence is “proponents of AI” not “AI”, and the sentence is about what is enabled by AI systems. Your attempt at pedantry makes no sense.
If you’re suggesting that it is possible to build an AI with none of the biases embedded in the world it learns from, you might want to read that article again because the (obvious) rebuttal is right there.
The systems didn’t do anything they weren’t told to do. You’re correct that it says proponents, but they knew what it was doing and kept doing it because it was giving them the answers they wanted regardless of reality. The AI is still like the hammer.
The systems didn’t do anything they weren’t told to do.
You’re thinking of the kinds of algorithms written by human beings. AI is a black box. No one knows how these models obtain their answers.
That’s only true in the same sense that “no one knows how brains work”: we understand the bits at the low level and can construct heuristics at a high level, but have difficulty linking the two. That is not to say human minds or neural networks are entirely unpredictable and produce functionally random outputs that can’t be reasoned about.
I’m not saying there is any thought going on; I’m saying a lack of mapping from low-level processes to high-level outcomes does not mean a system is entirely inscrutable.
But for reference, your link has nothing to say about the amount of thought involved. Sexists have thoughts when they think women are lesser - shit thoughts, but it’s still thinking.
That’s not how programming works.
It’s how LLMs work.
Sure thing bud.
I’m not really sure what the author is trying to do here. The way he plays with the meaning of words, like “culling the outlier”, is interesting as a literary device. But it is also actively harmful to understanding or bettering the issues raised.
“AI” is interpreted as “algorithmic inferences.” This paves over any of the technical distinctions between statistics, ML, AI, and neural nets. In the current hype, the term AI is often narrowed down to mean neural nets but the author widens the meaning. In the text, “AI” includes any kind of bureaucratic or rule-based decision-making.
The effect is to transfer responsibility away from decision-makers, organizations, and even society, at large, to a vaguely understood new technology.
I can see that this could be welcome to these decision-makers and organizations. And so it has the potential to attract funding from them. Perhaps that is the point.
The way he plays with the meaning of words
She (or, if you’re not sure, they).
any kind of bureaucratic or rule-based decision-making
Human-written rules are often flawed, and for similar reasons (the sole human thought process that ‘AI’ is very good at reproducing is system justification). But human-written rules can be written down and they can be interrogated. But Apple landed itself in court because it had no clue how its credit algorithm worked and could not conceive how it could possibly be sexist if the machine didn’t get any gender data to analyse.
Perhaps that is the point.
That is, indeed, the point.
That is, indeed, the point.
I think you misunderstand. She is shifting responsibility.
But Apple landed itself in court because it had no clue how its credit algorithm worked and could not conceive how it could possibly be sexist if the machine didn’t get any gender data to analyse.
This appears to be wrong.