If you haven’t done so, you should read Nick Carr’s new essay in the Atlantic on the costs of automation. I’ve been mulling it over and am not sure quite what I think.
After describing two air crashes that happened in large part because pilots accustomed to automated flying were unprepared to take proper control of their planes during emergencies, Carr comes to his key point:
The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.
And late in the essay he writes,
In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
Carr isn’t arguing here that the automating of tasks is always, or even usually, bad, but rather that the default assumption of engineers — and then, by extension, most of the rest of us — is that when we can automate we should automate, in order to eliminate that pesky thing called “human error.”
Carr’s argument for reclaiming a larger sphere of action for ourselves, for taking back some of the responsibilities we have offloaded to machines, seems to be twofold:
1) It’s safer. If we continue to teach people to do the work that we typically delegate to machines, and do what we can to keep those people in practice, then when the machines go wrong we’ll have a pretty reliable fail-safe mechanism: us.
2) It contributes to human flourishing. When we understand and can work within our physical environments, we have better lives. Especially in his account of Inuit communities that have abandoned traditional knowledge of their geographical surroundings in favor of GPS devices, Carr seems to be sketching out — he can’t do more in an essay of this length — an account of the deep value of “knowledge about reality” that Albert Borgmann develops at length in his great book Holding on to Reality.
But I could imagine people making some not-obviously-wrong counterarguments — for instance, that the best way to ensure safety, especially in potentially highly dangerous situations like air travel, is not to keep human beings in training but rather to improve our machines. Maybe the problem in that first anecdote Carr tells lies in setting up the software so that, in certain kinds of situations, responsibility is kicked back to human pilots; maybe machines are just better at flying planes than people are, and our focus should be on making them better still. It’s a matter of properly calculating risks and rewards.
Carr’s second point seems to me more compelling but also more complicated. Consider this: if the Inuit lose something when they use GPS instead of traditional and highly specific knowledge of their environment, what would I lose if I had a self-driving car take me to work instead of driving myself? I’ve just moved to Waco, Texas, and I’m still trying to figure out the best route to take to work each day. In trying out different routes, I’m learning a good bit about the town, which is nice — but what if I had a Google self-driving car and could just tell it the address and let it decide how to get there (perhaps varying its own route based on traffic information)? Would I learn less about my environment? Maybe I would learn more, if instead of answering email on the way to work I looked out the window and paid attention to the neighborhoods I pass through. (Of course, in that case I would learn still more by riding a bike or walking.) Or what if I spent the whole trip in contemplative prayer, and that helped me to be a better teacher and colleague in the day ahead? I would be pursuing a very different kind of flourishing than that which comes from knowing my physical environment, but I could make a pretty strong case for its value.
I guess what I’m saying is this: I don’t know how to evaluate the loss of “knowledge about reality” that comes from automation unless I also know what I am going to be doing with the freedom that automation grants me. This is the primary reason why I’m still mulling over Carr’s essay. In any case, it’s very much worth reading.
3 Comments
I dunno, at the end of the day, while I agree that such counterarguments are not obviously wrong, I just can't see them as strong enough to overcome Carr's. So much could change, and will change. No matter how much we improve our technology, its functionality will ultimately be determined by some external circumstances beyond our control. We could run out of the resources to manufacture computers the way we do now; some Anonymous 2.0 group could infect auto-pilot and navigation systems with viruses or malware. Most likely, the manufacturers themselves could just change the software in such a way that they believe they're improving it while overlooking something crucial they're losing. I don't have an iPhone, but isn't that what's happened with iOS 7?
No situation is foolproof, but it just seems indisputably safer to me to ensure that we build our solutions in ways that continue to require human attention. We don't know much about the future of humanity except that, by definition, there will still be humans around for it. And we can say with some certainty that at least for the foreseeable future, they're generally going to be more flexible, easier to communicate with, and better at grasping the context of a situation than computers.
I hear what you're saying, jw, but I don't think we can speak so generally. We have to think about specific cases. A hundred years ago people had to know how to maintain and repair their own automobiles, something which very few of us could do today — indeed, few mechanics could do much to repair automobiles without advanced computerized diagnostics – and yet overall I think the trade-offs, in the realm of safety among others, are very much in favor of the modern car. There are many areas where machines are capable of precision and reliability that human beings can't hope to match. So how do we draw the line in specific cases? — that's what we have to be asking.
I totally agree — I just think that it's a line we're going to have to draw over and over and over again, and that we may even have to redraw from time to time. And my instinct is that our very ability to draw that line, to make the right kind of judgment, is closely tied to our maintaining the kind of engagement with our technology that Carr recommends. Without a certain level of intimacy and direct involvement with our media, how do we carve out a comfortable, functional, healthy relationship with each of them?