Introduction
A primary task of government is to protect people from exploitation. Since scientists are sometimes in a position to take advantage of others and have occasionally done so, there is a role for government to regulate research to prevent exploitation of research participants. On the other hand, excessive regulation can stifle innovation; if scientists are not allowed to try new (and perhaps risky) experimental techniques, science will not progress, and neither will human understanding. This puts government in a difficult position: since research topics, scientific methodology, and public attitudes are continuously changing, it would be impossible to write a single law or set of laws defining which research topics and methods are acceptable and which are not. As soon as such a law were written, it would be out of date or incomplete.
Institutional Review Boards
The United States Congress has decided to deal with this issue of research ethics by letting local communities determine which research with human participants is and is not appropriate according to contemporary local standards. Today, each institution conducting research must have a committee called an institutional review board (IRB) consisting of a minimum of five members, all of whom belong to the local community. To ensure that the committee is kept up to date on current human research methodologies, the IRB membership must include at least one scientist. At least one member must represent the general public and have no official or unofficial relationship with the institution where the research is taking place. A single person may fill multiple roles, and IRBs are also required to ensure that the board includes both men and women and members from a variety of professions.
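Stated as a checklist, these composition rules can be captured in a few lines of code. The sketch below is illustrative only: the Member record, its field names, and the binary encoding of sex are assumptions made for the example, not anything prescribed by the regulations.

    from dataclasses import dataclass

    @dataclass
    class Member:
        name: str
        is_scientist: bool
        affiliated_with_institution: bool
        sex: str          # "F" or "M" (illustrative encoding)
        profession: str

    def composition_ok(board: list[Member]) -> bool:
        """Check the membership rules described above: at least five members,
        at least one scientist, at least one unaffiliated member representing
        the public, both men and women, and more than one profession."""
        return (len(board) >= 5
                and any(m.is_scientist for m in board)
                and any(not m.affiliated_with_institution for m in board)
                and {m.sex for m in board} >= {"F", "M"}
                and len({m.profession for m in board}) > 1)

    # Invented example board; the final member is the unaffiliated public representative.
    board = [
        Member("A. Chen", True, True, "F", "physician"),
        Member("B. Diaz", False, True, "M", "lawyer"),
        Member("C. Evans", False, True, "F", "clergy"),
        Member("D. Foster", True, True, "M", "psychologist"),
        Member("E. Gray", False, False, "F", "retired teacher"),
    ]
    print(composition_ok(board))  # True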
Each IRB is required to review written proposals for all local research on human participants before that research can begin. At most large institutions, the IRB has enough members to break into subcommittees that review proposals from different research areas. It is the job of the IRB to ensure that unethical research is screened out before it starts. Government agencies that fund research projects will not consider a proposal until it has been approved by the local IRB, and if research is conducted at an institution without IRB approval, the government can withhold all funds to that institution, even funds unrelated to the research.
Informed Consent
To evaluate all aspects of a proposed research project, the IRB must have sufficient information about the recruitment of participants, the methods of the study, the procedures that will be followed, and the qualifications of the researchers. The IRB also requires that proposals include a copy of the informed consent contract that each potential participant will receive. This contract allows potential participants to see, in writing, a list of all possible physical or psychological risks that might occur as a result of participation in the project. People cannot be coerced or threatened into signing the form, and the form must also tell participants that even if they agree to begin the research study, they may quit at any time for any reason. Informed consent contracts must be written in nontechnical prose that can be understood by any potential participant; it is generally recommended that contracts use vocabulary consistent with an eighth-grade education.
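The eighth-grade guideline can be checked mechanically with a standard readability index. Below is a minimal Python sketch using the Flesch-Kincaid grade-level formula; the syllable counter is a rough heuristic, and the sample consent_text is an invented illustration, not language from any actual consent form.

    import re

    def count_syllables(word: str) -> int:
        """Rough estimate: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text: str) -> float:
        """Flesch-Kincaid grade level:
        0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
        """
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words)
                - 15.59)

    # Invented sample; a real check would run on the full consent document.
    consent_text = (
        "You may stop at any time, for any reason. "
        "Your answers will be stored under a code number, not your name."
    )
    print(f"Estimated grade level: {fk_grade(consent_text):.1f}")  # aim for about 8 or lower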
Except for the file holding the signed contracts between the researcher and the participants, names of participants generally do not appear anywhere in the database or in the final written documents describing the study results. Data are coded without using names, and in the informed consent contract, participants are assured of the complete anonymity of their responses or test results unless there are special circumstances that require otherwise. If researchers intend to use information in any way that may threaten participants’ privacy, this issue needs to be presented clearly in the informed consent contract before the study begins.
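The coding of data without names might look like the following minimal sketch, in which names are replaced with sequential codes and the name-to-code key is kept apart from the analysis data, just as the signed contracts are kept apart from the database. The record fields and sample participants are illustrative assumptions.

    def pseudonymize(records):
        """Replace names with sequential codes; return the coded data plus the
        name-to-code key, which is stored separately from the dataset."""
        key = {}      # name -> code, kept under lock with the consent forms
        coded = []    # what actually enters the analysis database
        for i, rec in enumerate(records, start=1):
            code = f"P{i:03d}"
            key[rec["name"]] = code
            coded.append({"id": code, "score": rec["score"]})
        return coded, key

    # Illustrative records; real data would come from the study files.
    participants = [
        {"name": "Alice Moore", "score": 42},
        {"name": "Ben Ortiz", "score": 37},
    ]
    coded_data, link_key = pseudonymize(participants)
    print(coded_data)  # [{'id': 'P001', 'score': 42}, {'id': 'P002', 'score': 37}]
    print(link_key)    # stored separately, never published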
Deception
Occasionally, researchers in psychology use a form of deception, telling participants that the study is about one thing when it is really about something else. Although it usually is considered unethical to lie to participants, deception is sometimes necessary because participants may behave differently when they know what aspect of their behavior is being watched. (This is called a demand characteristic of the experimental setting.) More people will probably act helpful, for example, when they know that a study is about helpfulness. A researcher studying helpfulness thus might tell participants that they are going to be involved in a study of, say, reading. Participants are then asked to wait in a room until they are each called into the test room. When the first name is called, a person may get up and trip on his or her way out of the room. The person who was called is really the experimenter’s assistant (although none of the participants knows that), and the real point of the research is to see how many of the participants get up to help the person who fell down. In situations such as this, where demand characteristics would be likely, IRBs will allow deception to be used as long as the deception is not severe and the researchers debrief participants at the end by explaining what was really occurring. After deception is used, experimenters must be careful to make sure that participants do not leave the study feeling angry at having been “tricked”; ideally, they should leave feeling satisfaction for having contributed to science.
Even when participants have not been deceived, researchers are required to give an oral or written debriefing at the end of the study. Researchers are also obliged to ensure that participants can get help if they do experience any negative effects from their participation in the research. Ultimately, if a participant feels that he or she was somehow harmed or abused by the researcher or the research project, a civil suit can be filed in an attempt to claim compensation. Since participants are explicitly told that they can drop out of a study at any time for any reason, however, such long-term negative feelings should be extremely rare.
Special Issues in Clinical Trials
Clinical psychology is perhaps the most difficult area in which to make ethical research decisions. One potential problem in clinical research that is usually not relevant in other research settings is that of getting truly informed consent from the participants. The participants in clinical research are selected specifically because they meet the criteria for some mental disorder. By making sure that participants meet the relevant criteria, researchers ensure that their study results will be relevant to the population that suffers from the disorder; on the other hand, depending on the disorder being studied, the participants may not be capable of giving informed consent. A person who suffers from disordered thinking (as in schizophrenia), dementia (as in Alzheimer’s disease), or another cognitive impairment cannot be truly “informed.” In the case of individuals who have been declared incompetent by the courts, a designated guardian can give informed consent for participation in a research study. There are also cases, however, of participants who are legally competent yet not capable of truly understanding the consequences of what they read. Authority figures, including doctors and psychologists, can have a dramatic power over people; that power is likely to be even stronger for someone who is not in full control of his or her life, who has specifically sought help from others, and who is trusting that others have his or her best interests in mind.
Another concern about clinical research is the susceptibility of participants to potential psychological damage. The typical response of research participants is positive: they feel they are getting special attention and respond with healthy increases in self-esteem and well-being. A few, however, may end up feeling worse; for example, if they feel no immediate gain from the treatment, they may label themselves as “incurable” and give up, leading to a self-fulfilling prophecy.
A third concern in clinical research regards the use of control or placebo treatments. Good research designs include both a treatment group and a control group. When there is no control group, changes in the treatment group may be attributed to the treatment when in fact they were caused by the passage of time or by the fact that participants were getting special attention while in the study. Although control groups are necessary to ensure that research results are interpreted correctly, the dilemma that arises in clinical research is that it may be unethical to assign people to a control group if they need some kind of intervention. One way of dealing with this dilemma is to give all participants some form of treatment and to compare the different treatment outcomes to one another rather than to a no-treatment group. This works well when there is already a known treatment with positive effects: no participant is denied treatment, and the new treatment can be tested to see whether it is better than the old one, not merely whether it is better than nothing. Sometimes, if there is no standard treatment for comparison, participants assigned to the control group are put on a “waiting list” for the treatment; their progress without treatment is then compared with that of participants who are getting treatment right away. To some extent, this mimics what happens in nonresearch settings, as people sometimes must wait for therapy, drug abuse counseling, and so on. On the other hand, in nonresearch settings, those who are assigned to waiting lists are likely to be those in less critical need, whereas in research, assignment to treatment and nontreatment groups must be random. Assigning the most critical cases to the treatment group would bias the study’s outcome, yet assigning participants randomly may be perceived as putting research needs ahead of clients’ needs.
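Random assignment itself is mechanically trivial, which is part of the point: once the procedure is fixed, no clinical judgment can influence who receives the treatment. A minimal sketch, with an invented participant list; the fixed seed is an illustrative choice that would let an auditor reproduce the assignment.

    import random

    def randomize(participants, seed=None):
        """Shuffle the pool and split it in half: first half to treatment,
        second half to control (or the waiting list)."""
        rng = random.Random(seed)
        pool = list(participants)
        rng.shuffle(pool)
        half = len(pool) // 2
        return pool[:half], pool[half:]

    # Illustrative IDs; in practice these would be the coded participant IDs.
    treatment, control = randomize([f"P{i:03d}" for i in range(1, 9)], seed=7)
    print("treatment:", treatment)
    print("control:  ", control)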
The Milgram Studies
Concern about potential abuse of research participants arose in the 1960s, in response to publicity following a series of studies by Stanley Milgram at Yale University. Milgram was interested in finding out how physicians who had devoted their lives to helping people were so easily able to hurt and even kill others (in the name of science) in experiments in Nazi concentration camps.
In Milgram’s now-famous experiment, each participant was paired with one of Milgram’s assistants but was told that this partner was another volunteer. Each pair member, real participant and assistant alike, then drew a slip of paper assigning him or her to the role of either “teacher” or “learner.” Actually, both slips always said “teacher,” but the assistants pretended that theirs said “learner”; this way, the real participants were always assigned the role of teachers. Milgram then showed participants an apparatus that supposedly delivered shocks; teachers, seated on one side of a partition, were instructed to deliver a shock to the learner on the other side whenever the learner made a mistake on a word-pairing task. The apparatus did not actually deliver shocks, but the learners pretended that it did; as the experiment continued and the teachers were instructed to give larger and larger shocks, the learners gave more and more extreme responses. At a certain point, the learners started pounding on the partition, demanding to be released; eventually, they feigned a heart attack.
When Milgram designed this study, he asked psychiatrists and psychologists what percentage of people they thought would continue as teachers in this experiment; the typical response was about 0.1 percent. What Milgram found, however, was that two-thirds of the participants continued to deliver shocks to the learner even after the learner had apparently collapsed. The participants were clearly upset; they repeatedly expressed concern that someone should check on the learner. Milgram would simply reply that although the shocks were painful, they would not cause permanent damage, and the teacher should continue. In spite of their concern and distress, most participants obeyed.
Milgram’s results revealed much about the power of authority; participants obeyed the authority figure (Milgram) even against their own moral judgment. These results help explain the abominable behavior of Nazi physicians, as well as other acts of violence committed by ordinary people who were simply doing what they were told. Ironically, although Milgram’s study proved enormously valuable, he was accused of abusing his own participants by “forcing” them to continue the experiment even when they were clearly upset. Critics also claimed that Milgram’s study might have permanently damaged his participants’ self-esteem. Although interviews with the participants showed that this was not true—they generally reported learning much about themselves and about human nature—media discussions and reenactments of the study led the public to believe that many of Milgram’s participants had been permanently harmed. Thus began the discussion of experimental ethics that ultimately led to the system of regulation in force today.
Bibliography
American Psychological Association. “Ethical Principles of Psychologists and Code of Conduct.” http://www.apa.org/ethics/code2002.html. Web.
Boyce, Nell. “Knowing Their Own Minds.” New Scientist 20 June 1998: 20–21. Print.
Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks: Sage, 2014. Print.
Garner, Mark, Claire Wagner, and Barbara Kawulich, eds. Teaching Research Methods in the Social Sciences. Burlington: Ashgate, 2012. Digital file.
Penslar, Robin L. Research Ethics: Cases and Materials. Bloomington: Indiana UP, 1995. Print.
Perry, Gina. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. New York: New, 2013. Print.
Rothman, K. J., and K. B. Michels. “The Continuing Unethical Use of Placebo Controls.” New England Journal of Medicine 331.6 (1994): 394–98. Print.
Sales, Bruce D., and Susan Folkman, eds. Ethics in Research with Human Participants. Washington: American Psychological Association, 2005. Print.
Sieber, Joan E. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Newbury Park: Sage, 1995. Print.
Slife, Brent, ed. Taking Sides: Clashing Views on Controversial Psychological Issues. 13th ed. Guilford: Dushkin, 2004. Print.