In Cathy O'Neil's 2016 book, *Weapons of Math Destruction*, the author examines data-driven decision-making beyond the financial sector and raises ethical objections and questions regarding algorithms that decide qualitative matters such as who is, and who is not, best qualified for a job, a school, or a promotion. O'Neil makes at least two major claims in this book: (a) our culture primes us to think of mathematical models as objective, impartial, fact-based, and thus, crucially, *trustworthy* on the whole; and (b) algorithms turn out to have irrational and, more importantly, discriminatory effects, which have already harmed many people and which threaten increasingly to confer advantages on those already privileged while compounding the disadvantages and problems of those *tagged* as liabilities or undesirables.
Because of the high degree of trust most of us place in mathematical models (despite the madness of the 2008 recession), they go largely unexamined and remain opaque to us. They are seldom challenged, and when they are challenged, only a few of those affected by them ever get a chance to "look under the hood" to see just how they work and what they really do when calculating decisions. They operate without public scrutiny or even awareness. If we do not start auditing and monitoring social algorithms, O'Neil suggests, they may amplify the pre-existing inequalities in our society. If such a phenomenon goes unchecked and unchallenged, then what started out as accidental bias might be jealously guarded by those who control and benefit from the technology. This could result in a technocratic power elite.

Already, she suggests, people who are tagged by a "bad" address, medical or psychiatric history, ethnicity, gender, educational affiliation, and so on are discriminated against. A certain address or school may carry less cultural capital or be correlated with race or ethnicity (e.g., Howard vs. Yale). So in the absence of transparency, with uninformed and credulous citizens relying on what they take to be fair decisions, a technocracy could emerge that is no longer a matter of cumulative accidental feedback loops but a planned plutocracy, in which the "winners" will have convinced themselves that they worked for and deserve their blessings.

So that's the broad outline. Below is an excerpt from a longer review that originally appeared in Scientific American in August 2017.
--------------------------------------------------------------------------------------------------------------
From Scientific American (8/16/17):
"Weapons of math destruction" [which the author abbreviates as WMDs]...are mathematical models or algorithms that claim to quantify important traits (teacher quality, recidivism risk, creditworthiness) but that have harmful outcomes and often reinforce inequality, keeping the poor poor and the rich rich. They have three things in common: opacity, scale, and damage. They are often proprietary or otherwise shielded from prying eyes, so they have the effect of being a black box. They affect large numbers of people, increasing the chances that they get it wrong for some of them. And they have a negative effect on people, perhaps by encoding racism or other biases into an algorithm, by enabling predatory companies to advertise selectively to vulnerable people, or even by causing a global financial crisis.
She shares stories of people who have been deemed unworthy in some way by an algorithm. There’s the highly regarded teacher who is fired due to a low score on a teacher assessment tool, the college student who couldn’t get a minimum wage job at a grocery store due to his answers on a personality test, the people whose credit card spending limits were lowered because they shopped at certain stores. To add insult to injury, the algorithms that judged them are completely opaque and unassailable. People often have no recourse when the algorithm makes a mistake. [Note: these are not actually "mistakes" but consequences of the design, which is the main point. –Ed.]
O’Neil is an ideal person to write this book. She is an academic mathematician turned Wall Street quant turned data scientist who has been involved in Occupy Wall Street and recently started an algorithmic auditing company. She is one of the strongest voices speaking out for limiting the ways we allow algorithms to influence our lives and against the notion that an algorithm, because it is implemented by an unemotional machine, cannot perpetrate bias or injustice.
Many people think of Wall Street and hedge funds when they think of big data and algorithms making decisions. As books such as The Big Short and All the Devils Are Here grimly chronicle, subprime mortgages are a perfect example of a WMD. Most of the people buying, selling, and even rating them had no idea how risky they were, and the economy is still reeling from their effects.
O’Neil talks about financial WMDs and her experiences, but the examples in her book come from many other facets of life as well: college rankings, employment application screeners, policing and sentencing algorithms, workplace wellness programs, and the many inappropriate ways credit scores reward the rich and punish the poor. As an example of the latter, she shares the galling statistic that “in Florida, adults with clean driving records and poor credit scores paid an average of $1,552 more than the same drivers with excellent credit and a drunk driving conviction.” (Emphasis hers.)
Many WMDs create feedback loops that perpetuate injustice. Recidivism models and predictive policing algorithms—programs that send officers to patrol certain locations based on crime data—are rife with the potential for harmful feedback loops. For example, a recidivism model may ask about the person’s first encounter with law enforcement. Due to racist policing practices such as stop and frisk, black people are likely to have that first encounter earlier than white people. If the model takes this measure into account, it will probably deem a black person more likely to reoffend than a white person. But these models are harmful even beyond their potential to be racist. O’Neil writes,

"A person who scores as ‘high risk’ is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals—which raises the likelihood that he’ll return to prison. He is finally released into the same poor neighborhood, this time with a criminal record, which makes it that much harder to find a job. If he commits another crime, the recidivism model can claim another success. But in fact the model itself contributes to a toxic cycle and helps to sustain it."

O’Neil’s book is important in part because, as she points out, an insidious aspect of WMDs is the fact that they are invisible to those of us with more power and privilege in this society. As a white person living in a relatively affluent neighborhood, I am not targeted with ads for predatory payday lenders while I browse the web or harassed by police officers who are patrolling “sketchy” neighborhoods because an algorithm sends them there. People like me need to know that these things are happening to others and learn more about how to fight them....
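Returning to the feedback loop in the passage above: here is a deliberately toy simulation of the mechanism. Every rule and number in it is invented for illustration (it is not any real recidivism model, and not O'Neil's); it just shows how conditioning a score on recorded police contact can ratchet upward on its own.

```python
# Toy simulation of the feedback loop described above. Every rule and
# number here is invented for illustration; this is NOT any real
# recidivism model, just a sketch of how such a loop can ratchet.

def risk_score(prior_contacts: int) -> float:
    """Toy 'risk score' that rises with recorded police contacts."""
    return min(1.0, 0.1 + 0.15 * prior_contacts)

def simulate(patrol_rate: float, years: int = 10) -> float:
    """Heavier patrolling -> more recorded contacts -> higher score ->
    harsher outcomes, which are themselves recorded as contacts."""
    contacts = 0
    for _ in range(years):
        if patrol_rate > 0.5:      # heavily patrolled neighborhood:
            contacts += 1          # another stop goes on the record
        if risk_score(contacts) > 0.5:
            contacts += 1          # a high score triggers consequences
                                   # that add yet another contact
    return risk_score(contacts)

print(simulate(patrol_rate=0.8))   # heavily patrolled: score climbs to 1.0
print(simulate(patrol_rate=0.2))   # lightly patrolled: score stays at 0.1
```

Two identical individuals end up with very different scores purely because of where the patrols were sent; the model then treats its own output as confirmation.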
In the last chapter, she shares some ideas of how we can disarm WMDs and use big data for good. She proposes a Hippocratic Oath for data scientists and writes about how to regulate math models." [At present] we are not doing what we can, but there is hope as well. The technology exists! If we develop the will, we can use big data to advance equality and justice. [O'Neil has started to do just that: she is designing algorithms to "audit" potentially harmful algorithms.]
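As a concrete illustration of what "auditing" an algorithm can mean in practice, here is a sketch of one standard check, the "four-fifths" disparate-impact ratio from US employment-discrimination guidelines: compare the rate of favorable outcomes a model grants one group against another. This is my own example with fabricated data, not necessarily the method O'Neil's company uses.

```python
# A minimal disparate-impact audit: compare a model's favorable-outcome
# rates across groups (the "four-fifths rule" from US employment-
# discrimination guidelines). The decision log below is fabricated.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Protected group's approval rate over the reference group's.
    Values below ~0.8 are the traditional red flag."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Fabricated outputs of some opaque scoring model:
audit_log = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 20 + [("B", False)] * 80)

print(disparate_impact_ratio(audit_log, protected="B", reference="A"))  # 0.4
```

A ratio this far below 0.8 on a real decision log would be exactly the sort of signal that justifies demanding a look under the hood.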
___________________________________________________________________
Quotes from the author:
-"There are ethical choices in every single algorithm we build...."
-"I saw all kinds of parallels between finance and Big Data. Both industries gobble up the same pool of talent, much of it from elite universities like MIT, Princeton and Stanford. These new hires are ravenous for success and have been focused on external metrics – like SAT scores and college admissions – their entire lives. Whether in finance or tech, the message they’ve received is that they will be rich, that they will run the world…"
-"In both of these industries, the real world, with all its messiness, sits apart. The inclination is to replace people with data trails, turning them into more effective shoppers, voters, or workers to optimize some objective… More and more I worried about the separation between technical models and real people, and about the moral repercussions of that separation. In fact, I saw the same pattern emerging that I’d witnessed in finance: a false sense of security was leading to widespread use of imperfect models, self-serving definitions of success, and growing feedback loops. Those who objected were regarded as nostalgic Luddites."
-"I wondered what the analogue to the credit crisis might be in Big Data. Instead of a bust, I saw a growing dystopia, with inequality rising. The algorithms would make sure that those deemed losers would remain that way. A lucky minority would gain ever more control over the data economy, taking in outrageous fortunes and convincing themselves that they deserved it...."
__________________________________________________________________
O'Neil gave a very thought-provoking 12-minute TED talk, "The era of blind faith in big data must end," on the issues raised in her book.
Here's a link to a Google API (the Perspective API) that measures the "toxicity" levels of typed words and sentences, for those who want to see how their own word choices are scored: https://www.perspectiveapi....
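For those who would rather query it from code than from the demo page, here is a minimal sketch. It assumes you have a Google Cloud API key with the Comment Analyzer API enabled; the endpoint and response field names below are from Google's documentation as I recall it, so verify them against the current docs before relying on this.

```python
# Minimal sketch of scoring a sentence with the Perspective API.
# Assumes an API key with the Comment Analyzer API enabled; the endpoint
# and field names are from Google's docs as I recall them, so check the
# current documentation before relying on this.

import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "Those who objected were regarded as nostalgic Luddites."},
    "requestedAttributes": {"TOXICITY": {}},
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# summaryScore.value is a probability-like toxicity score in [0, 1].
print(result["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
```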
Questions to consider:
Do you think that we are moving toward a secretive technocracy in which those who control algorithms that make fateful decisions are less and less accountable and transparent? Does the author go too far in suggesting that if algorithms for important decisions in society are not challenged, we may well end up with a plutocracy perpetuated by a techno-social elite?
Suppose everybody who designed or implemented these mathematical models was (a) honest and (b) well-intentioned. Would that prevent the ramping up of inequalities the author discusses? Is the problem one of the ethical integrity of those who control the machines, or is it deeper (e.g., is quantifying merits and qualifications mechanically bound to produce odd results)?