How Relying on Algorithms and Bots Can Be Really, Really Dangerous

So you can’t wait for a self-driving car to take away the drudgery of driving? Me neither! But consider this scenario, recently posed by neuroscientist Gary Marcus: Your car is on a narrow bridge when a school bus veers into your lane. Should your self-driving car plunge off the bridge—sacrificing your life to save those of the children?

Obviously, you won’t make the call. You’ve ceded that decision to the car’s algorithms. You better hope that you agree with its choice.

This is a dramatic dilemma, to be sure. But it's not a completely unusual one. The truth is, our tools increasingly guide and shape our behavior, and sometimes make decisions on our behalf. A small but growing chorus of writers and scholars thinks we're going too far. By taking human decision-making out of the equation, we're slowly stripping away deliberation—moments where we reflect on the morality of our actions.