2019-10-05 | ~4 min read | 729 words
I first came across Benedict Evans through a fascinating interview he did with Russ Roberts on EconTalk about the future of cars.1 The depth with which he’d considered the space, and his ability to make compelling arguments for second- and third-order effects (such as the potential devastation of the market for mechanics, since electric vehicles break down far less than internal combustion engines), struck me, and I’ve tried to follow his work ever since.
Over the past few months Evans has written about AI several times. I found his posts on bias in AI and on the ethics of AI particularly balanced, considered, and cogent.2, 3
On bias within AI, Evans writes:
[T]he idea that you can audit and understand decision-making in existing systems or organisations is true in theory but flawed in practice. It is not at all easy to audit how a decision is taken in a large organisation. […] As my colleague Vijay Pande argued here, people are black boxes too - combine thousands of people in many overlapping companies and institutions and the problem compounds. […]

In this context, I often compare machine learning to databases, and especially relational databases - a new fundamental technology that changed what was possible in computer science and changed the broader world, that became a commodity that was part of everything, and that we now use all the time without noticing. But databases had problems too, and the problems had the same character: the system could be built on bad assumptions, or bad data, it would be hard to tell, and the people using it would do what the system told them without questioning it. […]

All of this is to say that ML bias will cause problems, in roughly the same kinds of ways as problems in the past, and will be resolvable and discoverable, or not, to roughly the same degree as they were in the past. Hence, the scenario for AI bias causing harm that is easiest to imagine is probably not one that comes from leading researchers at a major institution. Rather, it is a third tier technology contractor or software vendor that bolts together something out of open source components, libraries and tools that it doesn’t really understand and then sells it to an unsophisticated buyer that sees ‘AI’ on the sticker and doesn’t ask the right questions, gives it to minimum-wage employees and tells them to do whatever the ‘AI’ says. This is what happened with databases. This is not, particularly, an AI problem, or even a ‘software’ problem. It’s a ‘human’ problem.
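Evans’s point about bias being fixed quietly into machinery maps neatly onto code. As a minimal illustration (my own hypothetical sketch, not an example from his post), here is a toy screening rule “trained” on deliberately skewed historical decisions; it reproduces the old prejudice for every new case, and nothing on its surface says why:

```python
from collections import defaultdict

# Hypothetical historical decisions: group "A" was approved far more
# often than group "B", for reasons unrelated to merit.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Training": tally each group's historical approval rate.
approved = defaultdict(int)
seen = defaultdict(int)
for group, decision in history:
    seen[group] += 1
    approved[group] += decision  # True counts as 1

def screen(group: str) -> bool:
    """Approve a new applicant if their group's historical approval
    rate exceeds 50% -- the old bias, now fixed into machinery."""
    return approved[group] / seen[group] > 0.5

print(screen("A"))  # True: the historical prejudice, automated
print(screen("B"))  # False
```

The failure lives in the data and the assumptions, not in anything a user could spot by reading the system’s output, which is exactly why Evans expects the harm to come from unsophisticated buyers who “do whatever the ‘AI’ says.”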
In terms of the ethics surrounding AI, Evans returns frequently to a model based on the advent and popularization of the relational database in the 1970s and 1980s:
Specifically, we worried about two kinds of problem:
We worried that these databases would contain bad data or bad assumptions, and in particular that they might inadvertently and unconsciously encode the existing prejudices and biases of our societies and fix them into machinery. We worried people would screw up.
And, we worried about people deliberately building and using these systems to do bad things.
That is, we worried what would happen if these systems didn’t work and we worried what would happen if they did work.
The remainder of the article dives more deeply into these topics and concludes with thoughts on the challenges ahead, particularly from a regulatory perspective.
When it comes to AI, I’m fairly skeptical of many of the claims being made today, and I can simultaneously hold both of the concerns Evans notes about the future. While I’m not racing to reshape my career into one focused on ML, Evans has convinced me to at least temper my skepticism and has shown me how to read news of advancements with a more discerning eye.