Well, I’ve seen some inept commentary on the recent Facebook fiasco, but this definitely takes the cake — and it comes from one Cesar Hidalgo, a prof at MIT, no less.
Talk about an inauspicious beginning:
First, Facebook is a “micro-broadcasting” platform, meaning that it is not a private diary or a messaging service. This is not an official definition, but one that emerges from Facebook’s design: everything you post on Facebook has the potential to go viral.
Well, first of all, no. Facebook has settings that allow you to determine how private or public you want a given post to be: see? So some of what you post on Facebook cannot go viral, unless the software malfunctions, or Facebook makes yet another unannounced change in its policies. And second: the point is completely irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy — by making public what they didn’t want made public — that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.
Second, the idea that the experiment violated privacy is also at odds with the experimental design. After all, the experiment was based on what is known technically as a sorting operation. Yet, a sorting operation cannot violate privacy.
That’s manifestly untrue, but it doesn’t matter: the point is irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy, that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.
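For concreteness, the “sorting operation” Hidalgo invokes amounts to scoring posts by emotional valence and quietly dropping or demoting some of them before the feed is shown. The sketch below is hypothetical — the word lists, function names, and omission rate are invented for illustration, and the actual study reportedly used LIWC word counts to classify posts as positive or negative — but it shows what the operation looks like:

```python
# Hypothetical sketch of sentiment-based feed filtering of the kind at issue.
# The word lists, scoring rule, and omission rate are invented for illustration;
# the real experiment reportedly relied on LIWC word counts.
import random

POSITIVE_WORDS = {"happy", "love", "great", "wonderful"}   # illustrative only
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful"}     # illustrative only

def sentiment(post: str) -> int:
    """Crude word-count sentiment: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def filter_feed(posts: list[str], suppress: str = "negative", rate: float = 0.1) -> list[str]:
    """Omit a fraction of posts with the targeted emotional valence from a user's feed."""
    kept = []
    for post in posts:
        score = sentiment(post)
        targeted = (score < 0) if suppress == "negative" else (score > 0)
        if targeted and random.random() < rate:
            continue  # silently drop this post from what the user sees
        kept.append(post)
    return kept

feed = ["What a wonderful day", "I am so angry about this", "Lunch was fine"]
print(filter_feed(feed, suppress="negative", rate=1.0))
```

Note that nothing in this sketch exposes anyone’s private data, which is exactly why “it’s just sorting, so no privacy violation” is beside the point: the objection is that what users saw was deliberately altered to manipulate their moods, without their consent, for the purposes of an experiment.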
Finally, it is important to remember that Facebook did not generate the content that affected the mood of users. You and I generated this content. So if we are willing to point the gun at Facebook for sorting the content created by us, we should also point the gun at ourselves, for creating that content.
Sometimes a statement gets so Orwellian that there’s nothing to be said in response. Onward:
Is using sentiment analysis as a feature unethical? Probably not. Most of us filter the content we present to others based on emotional considerations. In fact, we do not just filter content. We often modify it based on emotional reasons. For instance, is it unethical to soften an unhappy or aggressive comment from a colleague when sharing it with others? Is that rewording operation unethical? Or does the failure of ethics emerge when an algorithm — instead of, say, a professional editor — performs the filtering?
Ah, Artie McStrawman — pleased to see you, my old friend. Of course no one has ever said that filtering information is wrong. The complaint here is that Facebook filtered people’s feeds in order to conduct experiments on them without seeking their consent. Hidalgo has written an 1,100-word post that at no point shows even the tiniest shred of understanding of what people are angry at Facebook about. This is either monumental dishonesty or monumental stupidity. I can’t see a third possibility.
Comments
What's really striking is how this same kind of obtuseness has characterized Facebook's own response to the uproar. This is from Sheryl Sandberg: “We clearly communicated really badly about this and that we really regret. We do research in an ongoing way, in a very privacy protective way, to improve our services and this was done with that goal.” And this is from a statement issued by the company's European policy head: “We want to do better in the future and are improving our process based on this feedback. The study was done with appropriate protections for people’s information, and we are happy to answer any questions regulators may have.” The ethical issues don't seem to have registered at all. I don't think it's either stupidity or dishonesty in this case, though. I think it's just that this kind of psychological and behavioral testing has long been so routine in the company – and so essential to its core algorithms – that the idea that there might be something unethical about it is simply unimaginable to them. To them, the problem looks entirely like a "communication" problem – i.e., that they allowed this one particular study to be published.