Doing Good Science

Building knowledge, training new scientists, sharing a world.

Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

The views expressed are those of the author and are not necessarily those of Scientific American.





You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

Janet D. Stemwedel About the Author: Janet D. Stemwedel is an Associate Professor of Philosophy at San José State University. Her explorations of ethics, scientific knowledge-building, and how they are intertwined are informed by her misspent scientific youth as a physical chemist. Follow on Twitter @docfreeride.







Comments (6)

  1. PierceAero 5:38 pm 06/30/2014

    I can’t find a way to actually share this on Facebook. Hmmm…

  2. BillSkaggs 10:25 am 07/1/2014

    Hi Janet. In my view what makes this bad is not that FB manipulated the user experience and published the results. If they had done so by tweaking elements of the user interface, it would all be totally innocuous. The problem is that (a) they deliberately made the system worse, by deleting posts that users expected to be able to see, and (b) they did so for the purpose of manipulating users’ emotions.

    Best regards, Bill Skaggs

  3. DocMLM 7:27 pm 07/1/2014

    According to other sources, the editor permitted publication after one of the authors told her that the study was approved by the University’s IRB. As an IRB member myself, I find that difficult to believe. I would like to see the proof that it received IRB approval at both UCal and Cornell. I expect that, at some point in the near future, social science authors are going to have to provide actual evidence (e.g., supply the IRB approval document) for their research. Checking off a box that the study received IRB approval isn’t enough anymore. I know of several published studies that did not go through the IRB process, and that didn’t stop the lead author from checking off the box.

  4. FOOZLER8 2:03 pm 07/3/2014

    If there is no CLEAR informed consent, it is unethical.

    w. f wallace, ph. d.

  5. seanacoy 6:39 pm 07/3/2014

    My recollection is that IRBs tend to review paper submissions and described procedures mainly for completeness as to form and compliance with formal requirements, and aren’t necessarily required to examine the substance or wisdom of the proposed study, or to investigate further the potential consequences to subjects or to public perceptions, before signing off. I hope this is no longer true (years ago this seemed to be the case in studies in medical situations).

    That there is a great deal of puzzlement, annoyance, and anger in response to this study is not surprising. But I disagree with BillSkaggs that “… what makes this bad is not that FB manipulated the user experience and published the results. If they had done so by tweaking elements of the user interface, it would all be totally innocuous.” I think that points to the deeper issue involving FB lurking behind the rôle of social scientists and Science in the human subject study façade.

    FB is largely unaccountable to anyone, whether it “tweak[s] elements of the user interface” or broadly manipulates the user experience, the information it collects, and its objectives. It uses a “contract of adhesion” – a “TOS or EULA … clearly designed to protect the company, not the user.” In the “free market of ideas” it has increasingly become a monopoly in what it does. Many users have been on FB for years and accumulated personal and professional presences which are valuable to them. Yet they have no say, absent wide-scale complaints picked up by the media which “embarrass” the company.

    FB’s FB page includes this mission statement: “Founded in 2004, Facebook’s mission is to give people the power to share and make the world more open and connected. People use Facebook to stay connected with friends and family, to discover what’s going on in the world, and to share and express what matters to them.” There is nothing in the mission statement about FB not being manipulative, doing no evil, or not causing harm. Indeed, it is far from obvious from FB’s labyrinthine help and support pages where any relevant terms, agreements, or language can be found. And FB apparently hasn’t had anything to say at all this year on https://www.facebook.com/fbsitegovernance, its site governance feedback page!

    I am not sure how we, the users, can improve FB’s behavior. But I really hope that FB takes this as a “learning moment” to think about how it can change its manipulative and user-disregarding image, even for people like me who don’t pay for hyped services.

  6. Diogenes11 6:13 am 07/8/2014

    This only applies if one’s ethical system is of Judaeo-Christian origin, religious-based and valuing individual life/rights. Facebook (BP, Monsanto, Union Carbide….) takes the Mugabe-ist ethical position (North Korean Kim Dynasty, Chinese Government, Stalinist….) that “We have the power, we are untouchable, if you don’t like it you can go hang yourself”.

    There is no ‘scientific’ reason to prefer one ethical system over another (we are not on the ‘Humanist American’ or ‘Religious American’ website).

    I don’t see angry crowds of peasants bearing torches, storming the walls of Facebook’s castle (or a co-ordinated denial of service assault on their share price), so the objective evidence is that their user base is not perturbed by their ethical framework. (I was amused when the SciAm website offered me the option to log in using my social media account!)

    We’ll all remember their research’s results, along with Tuskegee, just like those of Galileo who offended the ethical gatekeepers of his day.

