A review by cpope9
Weapons of Math Destruction by Cathy O'Neil
3.0
I think the thesis of this book only becomes more relevant as time moves forward. Its ideas need to be considered as society and democracy move through the digital age; without that consideration, institutions and society will be increasingly tested by grosser inequalities and human rights abuses. Very intriguing premise and topic underpinning the darker, flawed practical aspects and applications of big data analytics.
2024: 3.5 stars. Oh how things have only gotten worse. This book is still eye-opening and relevant, but things have only devolved further since I first read it. Having spent part of my career leading a data team, I see first hand how small details and assumptions in modeling can paint dramatically different pictures of reality. When used out of context, those models can inadvertently create or perpetuate systemic inequity, disadvantage, and discrimination. For anyone who doesn’t think “systemic racism” is real, read this or talk to any corporate quant about their models’ assumptions about race. The book does a decent job of explaining key areas where companies or institutions use data models, in ignorance, negligence, or aggression, to cause harm for their own benefit without regard to broader social or systemic impacts. This still happens everywhere, and I’m not sure it’s avoidable or regulatable at this point. But the author does a great job of painting a clear picture of the issue at hand.
However, what I really wish were here is some synthesis of the research and case studies into a theory, checklist, or definition of the attributes of the WMDs she discusses. This book is mostly “data can be used badly. Here’s 30 examples from 15 different fields. We need to fix this.” I appreciate the examples, but I wanted each one to tie into a grander “this is how you can tell your data model is harmful” or “this is how to know if your data is being used against you” type of summary. That would be a much more useful and testable conclusion than just “big data…bad!!!!!?”
I also would’ve loved some cases of big data being used well or productively, as a counterexample to the broader theory that should have been posited here. But it’s mostly just the bad stuff…which is still interesting, but there are so many more good intentions that turn into ignorantly or negligently problematic models than there is initial or intentional malice behind the data…and that needs to be further explored (but isn’t).