Facebook: an unethical experiment?

I am a Clinical Psychologist and a Researcher. As a Researcher, I strive to uphold ethical principles in my research studies. In my area of expertise (implementing new technology to provide psychological interventions) I have to follow certain ethical standards. It should go without question that any experiment involving human subjects must be conducted respecting ethical principles.

However, over the last few days I have read many comments and blog posts about Facebook’s experiment (published in the Proceedings of the National Academy of Sciences). As a professional researcher, it is my opinion that this study has probably bent ethical guidelines too far, and I want to explain why.

The history of ethical principles and the Code of Conduct

From 1932 to 1972, the infamous Tuskegee syphilis experiment studied the progression of syphilis in rural African American men. The participants were told they were receiving free health care from the government; however, they were not aware that they were participating in a study. Most of the participants had syphilis, but they were never told of their diagnosis, nor were they ever treated for it.

This experiment caused a major scandal, resulting in regulations governing studies involving human participants, such as the Belmont Report on ethical principles and guidelines for the protection of human subjects of research. It states that “Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied“. The importance of informed consent (providing information, comprehension, and voluntariness) is unquestioned. This is why we need ethical standards!

Similarly, the American Psychological Association published its “Ethical Principles of Psychologists and Code of Conduct” online at http://www.apa.org/ethics/code/index.aspx. These guidelines should be respected by researchers in any study, no matter who is funding it.

These guidelines urge psychologists to take reasonable steps to ensure the competence of their work and to protect participants from harm. According to these guidelines, when we as social scientists conduct research, we obtain the informed consent of the individual or individuals, using language that is reasonably understandable to that person or persons. We inform participants about:

(1) the purpose of the research, its expected duration, and its procedures;
(2) their right to decline to participate and to withdraw from the research once participation has begun;
(3) the foreseeable consequences of declining or withdrawing;
(4) reasonably foreseeable factors that may be expected to influence their willingness to participate, such as potential risks, discomfort, or adverse effects;
(5) any prospective research benefits;
(6) limits of confidentiality;
(7) incentives for participation; and
(8) whom to contact with questions about the research and research participants’ rights.

Furthermore, we inform participants about (1) the experimental nature of the treatment, (2) the services that will or will not be available to the control group(s) if appropriate, (3) the means by which assignment to treatment and control groups will be made, and (4) available treatment alternatives if an individual does not wish to participate in the research or wishes to withdraw once a study has begun.

And then comes Facebook’s experiment

A new paper in the Proceedings of the National Academy of Sciences revealed that Facebook had been experimenting with the mood of almost 700,000 users. The purpose of the experiment was to study “emotional contagion through social networks”.

As a researcher, I was excited to read about it! Think about it: data from almost 700,000 participants, a study in my area of expertise (psychology and new media). So I read the study with great interest. Generally, people on Facebook express their emotions in posts and comments, which are later seen by their friends in the News Feed. The News Feed filters these posts via a ranking algorithm developed by Facebook. This algorithm is continually “developed and tested in the interest of showing viewers the content they will find most relevant and engaging”.

This is nothing new; we all know Facebook has been experimenting with the News Feed for years. But when I read the study, I struggled with the following sentence: “One such test [of developing and testing an algorithm] is reported in this study: A test of whether posts with emotional content are more engaging”. When did people agree to participate in a study like this?

These tests involved experiments measuring the effects of algorithmic changes on the moods of almost 700,000 participants, conducted by researchers affiliated with the University of California and Cornell University.

They tested whether reducing the number of positive posts shown to people in their News Feed made those people less likely to post positive content themselves, and tested the corresponding hypothesis for negative posts. The procedure involved tweaking the algorithm so that textual snippets in posts were analyzed to determine whether they contained positive or negative words. One group was shown a feed skewed toward neutral-to-happy content, another a feed skewed toward neutral-to-sad content. Afterwards, the comments and posts of the participants were scanned for affective meaning. The outcomes indicate that Facebook can propagate positive as well as negative feelings.
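To make the procedure concrete, here is a minimal sketch of the kind of filtering described above. The word lists and function names are illustrative stand-ins of my own, not Facebook's actual system (which classified posts with established word-category lists and omitted emotional posts probabilistically):

```python
# Toy illustration of emotion-based feed filtering: classify each post
# by matching against small positive/negative word lists, then withhold
# posts of the targeted emotion from a user's feed.
# These word sets are hypothetical stand-ins for illustration only.

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def classify(post: str) -> str:
    """Label a post as positive, negative, or neutral by word matching."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress):
    """Return the feed with posts of the suppressed emotion removed."""
    return [p for p in posts if classify(p) != suppress]

feed = ["I love this sunny day", "What a terrible morning", "Lunch at noon"]
# "Reduced positivity" condition: positive posts are withheld,
# leaving only the negative and neutral ones.
print(filter_feed(feed, suppress="positive"))
```

Even this crude sketch shows the ethically relevant point: the user never sees which posts were withheld, and so cannot know their emotional diet was manipulated.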

Bending ethical research standards?

The Ethical Principles and Code of Conduct for psychologists clearly indicate the necessity of informed consent. Facebook and the researchers’ only argument addressing ethics in the paper is that the research “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

Here is the critical part. Do we, Facebook users, know that the News Feed is constantly changing and being tested with new algorithms? Most probably yes. Do we, Facebook users, agree with Facebook’s intent to tweak the News Feed algorithm to show us “important” content first? Probably yes. Would I ever agree to experiments designed to change my mood, without knowing when and where? Probably not, at least not without information about the risks and consequences. Would I ever agree to an experiment that might deliberately make me sad, without my even knowing why I feel sad? No. I never agreed to any study like this, and I will never agree to any study like this as a participant. Whenever researchers try to change people’s mood, it is an experiment, and it requires informed consent.

This is why, in my opinion, this study probably goes beyond ethical standards. Furthermore, it is disappointing to see a study like this accepted by a highly ranked, peer-reviewed journal like the Proceedings of the National Academy of Sciences.

Can Facebook conduct any experiment without informing participants?

A relevant section of Facebook’s data use policy says: “… We receive data about you whenever you use or are running Facebook, such as when you look at another person’s timeline, send or receive a message, search for a friend or a Page, click on, view or otherwise interact with things, use a Facebook mobile app, or make purchases through Facebook… We use the information we receive about you in connection with the services and features we provide to you and other users … we may use the information we receive about you .. for internal operations, including troubleshooting, data analysis, testing, research and service improvement”.

Here we have this vague mention of “research” that everyone who uses Facebook has agreed to. It means every Facebook user has signed up for ANY and ALL experiments Facebook chooses to conduct, including those that assess a change of our mood for the worse, without informed consent.

As a Clinical Psychologist and a professional researcher with ethical standards, I struggle with this point, and I think that this study probably bends ethical standards and regulations too far. I would not work with any data collected this way, nor would I, as a reviewer, accept a study like this for publication.

Who is liable for any consequences?

I want to explain why I struggle with it from a clinical-psychological perspective.

Imagine it: an estimated 5-10% of the adult population suffers from depression. Applied to almost 700,000 participants, even the conservative 5% figure means roughly 35,000 people suffering from depression took part in Facebook’s study. The experiment lasted seven days. Roughly half of these users were in the condition in which Facebook reduced the number of positive posts, plausibly pushing their mood further down. They may have experienced an even deeper depression for at least a week (or even longer, given the lasting impact of experiments like this). What happened to these people? Furthermore, what happened to all the other vulnerable users, e.g. users with suicidal tendencies?
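The back-of-the-envelope arithmetic above can be written out explicitly. This sketch assumes the study’s reported sample of 689,003 users, the lower 5% bound of the prevalence estimate, and a roughly even split between experimental conditions:

```python
# Rough estimate of how many depressed users the experiment may have
# exposed to an artificially negative feed. The 5% prevalence and the
# 50/50 condition split are assumptions for illustration.
participants = 689_003            # sample size reported in the PNAS paper
prevalence = 0.05                 # lower bound of the 5-10% estimate

depressed = participants * prevalence
reduced_positivity = depressed * 0.5  # half assumed in the reduced-positive-posts condition

print(round(depressed))           # depressed users in the sample
print(round(reduced_positivity))  # of those, users shown a more negative feed
```

Even under these conservative assumptions, tens of thousands of people with depression were in the sample, and roughly half of them saw a feed deliberately stripped of positive content.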

How can any scientist with a serious reputation (a) agree to participate as an author or co-author in a study like this and (b) handle data collected in this way? Even more, how is it possible that a journal like the Proceedings of the National Academy of Sciences accepted a study like this?

Conclusion

I am really happy that this experiment caused a huge uproar in social media as well as in the scientific community. It shows that there might be something fishy about Facebook’s experiment, and it provoked a big discussion about ethical standards in science.

As for the journal Proceedings of the National Academy of Sciences: I hope the editors think twice the next time they receive a paper like this. Even Susan Fiske, the psychologist who edited the paper for the journal, admitted to serious qualms about the study. This leaves me questioning the quality of the study and the journal’s practice of accepting such studies in general.

As a Psychologist, I can only hope that this episode paves the way for better research in the future, research that does not bend ethical standards and that respects the rights of participants…

About the writer: Dr. Mario Lehenbauer-Baum is a Clinical Psychologist and Health Psychologist as well as a certified Industrial-/Organizational Psychologist (certified by the Board of Organizational Psychology – Austrian Psychologist Association), a motivational speaker and coach, and a researcher in positive psychology. He is a passionate gamer and uses new technologies frequently. His research combines (Clinical) Psychology/Organizational Psychology and new technology (e.g. online-based social skills trainings) as well as the “side effects” of using new technologies, such as addiction to the internet, games or smartphones. In his coaching and therapy work he focuses on positive psychology to help people live a better life.