Sometimes editors at media outlets get a little panicked when there’s a big story swirling around and they haven’t done anything with it. Last weekend, for example, a story about scientific ethics and Facebook swirled up like a sudden storm, and by Monday, some news outlets were playing catch up.
It all started as a largely ignored paper about the number of positive and negative words people use in Facebook posts. Now it’s a major scandal. Yesterday the New York Times connected the Facebook experiment to suicides. The story, headlined “Should Facebook Manipulate Users?,” rests on the questionable assumption that such manipulation actually happened:
The researchers were studying claims that Facebook could make us feel unhappy by creating unrealistic expectations of how good life should be. But it turned out that some subjects were depressed when the good news in their feed was suppressed. Individuals were not asked to report on how they felt; instead, their writing was analyzed for vocabulary choices that were thought to indicate mood.
Stories that ran over the weekend raised serious questions about the lack of informed consent used in the experiment, which was done by researchers at Cornell and Facebook and published in the Proceedings of the National Academy of Sciences. But to say Facebook’s slight alteration of news feeds caused people to suffer depression seems to be unsupported by any kind of data or logic.
The piece is an op-ed written by Microsoft interdisciplinary researcher Jaron Lanier, and if you think he’s using the word depression in a colloquial sense, he reassures us he’s making a very serious accusation:
The manipulation of emotion is no small thing. An estimated 60 percent of suicides are preceded by a mood disorder. Even mild depression has been shown to increase the risk of heart failure by 5 percent; moderate to severe depression increases it by 40 percent.
Is there justification in linking the study’s outcome to depression or even suicide? What the researchers did was alter news feeds so that the unwitting subjects saw fewer posts with either positive or negative words, as determined by some computer algorithm. Then the researchers measured how many positive and negative words those subjects used in their own posts.
What they discovered was that the alteration had little influence – a shift of just 0.1 percent, or one part in a thousand, in the use of positive or negative words.
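For readers curious what a word-count measure of this kind looks like, here is a minimal sketch. The actual study used curated psycholinguistic word lists; the tiny lists and function below are hypothetical stand-ins for illustration only.

```python
# Sketch of a word-count mood measure of the kind the study relied on.
# POSITIVE and NEGATIVE are hypothetical stand-ins for the study's
# much larger curated word lists.
POSITIVE = {"happy", "great", "love", "wonderful", "good"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "bad"}

def emotion_rates(post: str) -> tuple[float, float]:
    """Return (positive, negative) word rates as fractions of all words."""
    words = post.lower().split()
    if not words:
        return 0.0, 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

# A 0.1 percent shift in these rates amounts to roughly one emotional
# word changed per thousand words written.
pos_rate, neg_rate = emotion_rates("What a great, happy day. Love it!")
```

Note that a method like this scores vocabulary, not feelings: it cannot tell a genuinely cheerful post from a politely upbeat one, which is the gap the article goes on to point out.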
That this had anything to do with the subjects’ actual moods appears to be nothing but an assumption made by the researchers. It’s an assumption that makes their finding sexier, but as I noted in yesterday’s Tracker, people might just be mirroring their friends out of social grace. Some people might, for example, refrain from boasting about a promotion if a friend has just posted that his beloved dog has died. And the effect was so small that the bigger surprise was how little influence the experiment had on the subjects.
When I wrote that post, I hadn’t quite pieced together how a largely ignored press release from early June led to the frenzy that has people cursing in outrage on Twitter.
The news got little play upon the initial publication. Somewhat later, New Scientist picked it up, running a small item late last week by Aviva Rutkin headlined “Even online, emotions can be contagious.”
The story was not particularly inflammatory, though it took at face value the researchers’ claim that they had actually made people happier or sadder. It seemed to be the trigger for other sites to start expressing outrage on Friday. A site called Animal New York ran a story by Sophie Weiner headlined “Facebook Experiment Manipulates Emotions Of 600,000 Users”:
Apparently what many of us feared is already a reality: Facebook is using us as lab rats, and not just to figure out which ads we’ll respond to but to actually change our emotions.
And by Saturday the pack set in.
New Scientist backtracked, quickly adding a piece by psychologist Tal Yarkoni, headlined “Don’t Fear Facebook’s Emotion Manipulation Experiment,” which explained why Facebook probably didn’t achieve the kind of emotional manipulation being claimed. But by then the storm’s momentum was unstoppable.
There was some thoughtful discussion about informed consent and the ethics of using social networks for research unbeknownst to the subjects. But the accusations of harm don’t appear supportable. Yes, saying that Facebook made people sad or depressed or suicidal sounds more exciting, but from what the paper showed, it’s unlikely to be true.
It’s ironic that Facebook’s “manipulation” of news feeds led to an effect so small it’s almost immeasurable, but the researchers’ claim that they can push people’s emotional buttons set users ranting and cursing, at least on Twitter, creating an effect there that looks to be more substantial.