Studying the human mind is a tricky business. There’s still so much we don’t know, and
so many questions scientists are looking to answer. But when researchers are working with human
subjects, they have to balance getting answers with protecting their subjects. In the past, they haven’t always been good
about taking care of the fellow human beings they’re studying. A lot of historical psychology experiments
would be considered unethical by today’s standards. And the foundation of the ethical standards we
use today comes from the 1970s, when scientists came up with a list of rules to protect the rights and wellbeing of human volunteers. It’s known as the Belmont Report: basically,
three key ethical principles to guide all human research. The first point is called respect for persons,
and it means that subjects have to give informed consent. Anyone who participates in human research
— including psychological research — needs to know the risks and benefits of the experiment
before signing up. The second ethical principle is called beneficence,
and it means that researchers should try not to have any negative impact on the
wellbeing of the people who participate in their studies. Basically, “do no harm”. The final point, justice, involves making
sure that subjects aren’t exploited. Researchers should also make sure that the
burdens of the study and the benefits of the results are distributed fairly. In early research studies, for example, the
subjects would often be poor, while wealthier patients would benefit from the results of
the experiment, and that’s not okay. These rules apply to human research in all
fields, including psychology. But the code of conduct hasn’t always been
so clearly defined. And before it was, there were a lot of questionable
studies being done. In the year 1920, a psychologist named John
Watson wanted to show that humans can be classically conditioned — like what happened to Pavlov’s
dogs. Basically, classical conditioning means pairing
a stimulus, like food, that triggers a physical response, like drooling, with an unrelated
stimulus, like a bell. Even though a ringing bell, of course, wouldn’t
normally make dogs drool, when Pavlov paired the sound with food, he conditioned the dogs
to respond to the bell by drooling. Watson and his team decided to prove that
this could be done in humans by classically conditioning a 9-month-old baby named Albert
using animals and scary noises. First, the researchers presented Albert with
a fuzzy white rat. As he’d reach out to pet the animal, the
psychologists would strike a hammer against a metal bar behind his head, creating a loud
noise to startle him. Eventually, just the sight of the white rat
was enough to make Albert start crying and crawl away. He’d begun to associate the fuzzy white rat with the loud, scary noise. So yeah, Albert had been conditioned. But this study failed in a lot of ways. For one thing, it used a single subject and
no controls. So Watson hadn’t really proved anything. But then of course there were the ethical
issues. Watson never reconditioned Albert to not be
afraid anymore, so he was permanently affected by the experiment, and not in a good way. We also don’t know if Albert’s mother
fully consented to the research, which definitely violates the main ethical
principles of the Belmont Report. And this wasn’t the only horrifying psychology
experiment conducted on children in the early 20th century. In the late 1930s, a psychologist named Wendell
Johnson and his graduate student Mary Tudor at the University of Iowa wanted to know how
positive and negative feedback affected the way children learned language. They decided to test this directly, by giving kids positive and negative feedback about the way they spoke. That might not sound so bad, but there’s
a reason why their experiment is now known as the Monster Study. Tudor recruited 22 children from an orphanage,
told them they’d be given speech therapy, and split them into two groups. Ten of these children — five in each group
— had early signs of stutters. But both groups also included kids with normal
speech patterns. The kids in one group were told they didn’t
have a stutter. They were given positive feedback: that they’d
outgrow the speech difficulties, and that they should ignore anyone who criticized the
way they spoke. Meanwhile, those in the other group were told
that they did have a stutter, and that they should never speak unless they could do it
right. As you can probably imagine, this didn’t
go very well. The encouragement and criticism didn’t seem
to have much of an effect on the children’s stutters. But the different kinds of feedback did have
a huge impact on their self-esteem. The kids with speech issues who got positive
feedback didn’t lose their stutters, but they did become a lot more confident when
they spoke. Meanwhile, the children who were given negative
feedback became more withdrawn, self-conscious, and frustrated — whether or not they actually
had a stutter to begin with. So for that group of kids, this research was
pretty damaging. As minors, they couldn’t consent to the
research, and the people who ran the orphanage didn’t protect them from the potential harm
of the study. The children also weren’t debriefed after
the project was over, and there was no real follow-up on how they may have been affected
by the study long-term. All of these things would later be considered unethical under the principles of the Belmont Report. Of course, experiments can harm adult subjects,
too. In 1961, a researcher at Yale University named
Stanley Milgram was interested in the psychology of obedience. He decided to see how subjects would react
when a researcher pushed them to do things that went against their morals. The study he came up with is now called the
Milgram Experiment. And it had three separate roles: The Experimenter, played by a scientist in
a white lab coat, was the authority figure. The Teacher was the role assigned to the experimental
subject. The final role was the Learner, a paid actor
who the subject thought was actually another volunteer. The Learner was sent to a separate room, out of sight, while the Experimenter watched as the Teacher ran the Learner through a word-pairing task over an intercom. Every time the Learner got the word pair wrong,
the Teacher pressed a button to shock them, with the voltage increasing by 15 volts for
every wrong answer. The subject believed they were shocking the
Learner, but they were actually listening to an actor pretending to be in pain, complaining
of chest pains, shouting, pounding on the wall, and eventually going silent. The experiment only ended when the Teacher
had given the maximum 450-volt shock three times in a row, or when they refused to continue. 65% of the subjects did give out those maximum
voltage shocks — just because a scientist in a white lab coat told them to. Milgram concluded that people will obey authority
figures even in morally questionable circumstances, and the experiment has since led to many more
studies on the psychology of authority. But the subjects thought they were actually
listening to someone being electrocuted on the other end of the line, even though they
were told by the Experimenter that there would be, quote, “no permanent tissue damage”. Leaving your subjects feeling like they may
have just killed someone doesn’t protect their wellbeing. And they couldn’t have gotten informed consent,
since warning participants about the experiment would have changed how they reacted. Since then, there have been other studies
that led people to believe they might be hearing someone get seriously injured. In 1964, a woman named Kitty Genovese was
murdered. At the time, newspapers reported that there
were more than 30 witnesses to the murder, and that none of them called the police. We now know that those reports were flawed,
but for a while, it seemed like dozens of people just stood by while someone was murdered
right in front of them. So in 1968, psychologists John Darley, at New York University, and Bibb Latané, at Columbia University, came up with a way to learn more about why people
might not act in a crisis, especially if there are others around. They placed college student volunteers alone
in rooms, gave them headphones, and told them that the study was about the emotional issues
faced by students. Each subject was told that they would be communicating with a few other students over an intercom, to avoid any privacy issues that might come up
if they were face-to-face. But the other students on the line were actually
recordings — and one of those recorded students mentioned early on in the conversation that
they had occasional seizures. Later on in the experiment, that voice would
start to have trouble speaking and ask for help, saying that they were having a seizure. The researchers then measured how long it
took the subjects to go look for help. They found that it took participants longer
to respond when there were more people in the conversation. The subject was less likely to do something
if they believed there were other people who could intervene instead. It’s called the Bystander Effect. Understanding this response is important for
investigating crimes and for protecting communities by teaching people to act during a crisis
instead of assuming that someone else will do it. But, like the Milgram Experiment, there are
ethical concerns about how this research might have affected the subjects after the study
was over. These days, it would be tough to convince
a review board that the potential benefits of this kind of study outweigh the risks. Another study turned out to be so damaging
that it had to be ended early. In 1971, Philip Zimbardo, a psychology professor
at Stanford University, wanted to learn more about how being placed in different social
roles affected the way people behaved. He decided to simulate a prison and cast volunteer
subjects into the roles of guards and prisoners. 24 white male college students were recruited
into the study and separated into two groups: prisoners and prison guards. Zimbardo acted as the prison superintendent. The prisoners were searched, then given ID
numbers instead of names to dehumanize them. Meanwhile, prison guards were given uniforms
and clubs and told to do whatever they had to do to maintain order, giving them power
over the prisoners — and a sense of superiority. The study was supposed to last for 2 weeks
but was actually called off after just 6 days because the conditions in the prison went downhill
so quickly. One prisoner had to be released from the study
even earlier because the conditions in the jail left him panicked and disoriented. Other prisoners started a revolt because the
guards had treated them so badly. After that, the guards became more and more
abusive, giving the prisoners physical punishments when they misbehaved, like forcing them to
sleep on concrete and to strip naked. In the end, Zimbardo concluded that the subjects
had internalized their assigned roles. The prisoners became submissive, while the
guards became aggressive and abused their power over the prisoners. You could not do this study today. Because he was acting as the superintendent, it was impossible for Zimbardo to stay impartial. That’s a pretty big flaw in the study’s
design — he was invested in the outcome of the research. Zimbardo also allowed the guards to subject
the prisoners to serious abuse, and may have caused them real, permanent harm. So, again, that whole wellbeing thing was not really taken into consideration for this study. Like the rest of the studies on this list,
the Stanford Prison Experiment would not be considered ethical these days. But psychology’s sometimes-dark past has
helped scientists realize that they have a responsibility to protect the public and the
subjects of their research studies — which is why ethical standards are an important
part of modern research. We want to understand the human mind, but
in the process, we also have to protect the minds being studied. The standards laid out by the Belmont Report
help us do just that. Thanks for watching this episode of SciShow,
which was brought to you by our patrons on Patreon. If you want to help support this show, just
go to patreon.com/scishow. And don’t forget to go to youtube.com/scishow
and subscribe!
For those interested, a modified version of the Milgram experiment (#3 in the video) was conducted by Dr. Jerry Burger in 2006; the journal article can be found here.
In order for it to pass modern ethical standards, a number of changes were made to the experiment. The primary one was to stop the study right after the 150-volt mark. In the original study, the 150-volt mark is where the learner shouts "...I told you I had heart trouble. My heart’s starting to bother me now. Get me out of here, please. My heart’s starting to bother me. I refuse to go on. Let me out." In Milgram’s study, 26 of the 33 participants (79%) who went past 150 volts continued all the way to the end (450 volts). Therefore, Burger concluded that seeing what participants did at 150 volts gave a pretty good idea of whether they would go all the way had the study continued.
Other steps were also taken to ensure the study was ethical.
The results of the Burger study showed that 28 out of 40 participants (70%) wanted to continue after 150 volts. This was a smaller percentage than Milgram's study, where 33 out of 40 (82.5%) actually did continue after 150 volts, although the difference is not statistically significant.
Because 7 of those 33 stopped the experiment between 150 and 450 volts, it's reasonable to assume that a similar minority of participants would've done the same had Burger's study been allowed to continue, but we cannot say that with certainty.
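For anyone who wants to double-check that "not statistically significant" claim, here's a minimal sketch in Python. It assumes a two-sided Fisher's exact test on the counts quoted above; the published comparison may well have used a different statistic, so treat this as a sanity check rather than a reproduction of the original analysis.

```python
# Sanity check of the "not statistically significant" claim, assuming
# a two-sided Fisher's exact test on the counts quoted above.
# (An assumption on my part -- the published comparison may have used
# a different test.)
from scipy.stats import fisher_exact

#                 continued  stopped
burger_counts  = [28, 12]   # Burger (2006): 28/40 = 70% willing to continue
milgram_counts = [33,  7]   # Milgram:       33/40 = 82.5% actually continued

odds_ratio, p_value = fisher_exact([burger_counts, milgram_counts])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# p comes out well above the usual 0.05 threshold (roughly 0.3 on
# these counts), consistent with the difference not being
# statistically significant.
```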