My husband John and I are philanthropists. That word means different things
to different people. To us, it means aggressive investments
in a number of issue areas: from evidence-based policy
to criminal justice, education, health care, pension reform,
public accountability, and many more. I get asked a lot of questions
about our work: why did you choose
the issue areas that you did and how do you choose new ones? I always feel that when
responding to that question people are expecting me to tell a story
about some transformative event that happened in my life or John's life
that led us to embark on this path. Maybe they want to hear
that we invest in healthcare because we've had family members
who unfortunately have had cancer. Maybe they want to hear a story
about my life growing up in Puerto Rico and how that fueled my interest in and passion for social justice and fighting poverty. At TED, we've come to expect
that every talk will start with a story like that: something
intensely moving and personal. Maybe something that starts with:
"When I was a kid." But I'm not going to tell a story
like that, although I could. I think personal stories are so important;
they are what fuels our passion. For many of us, they're the reason
why we do the work we do. But I believe that they're precisely
the wrong place to look for insight. In fact, I believe that we - as policymakers, government officials,
philanthropists, even as individuals - spend entirely too much time on anecdote
and not enough time on evidence. I'm here today to argue
that we need to change that. That we're routinely making
all kinds of decisions based on incomplete, inconsistent,
flawed, or even nonexistent data, based on anecdote. That this is harming all of us in ways
that we're not appreciating. And that we can and should do better. Now, I've just made a bunch
of provocative statements, so let me get to work convincing you. I'll start with one example
from the TED Global stage in 2012, the second-most viewed
TED talk of all time. Amy Cuddy - psychologist,
Harvard Business School professor - famous in large part for her TED talk about her work on power posing. So, she argued, and continues to argue, that standing in a power pose, like Wonder Woman, like this or like this, whatever makes you feel super powerful, changes the hormone levels in your body and might even be conducive
to greater success. It's amazing, right?
I mean, that's so cool. I should just stand like this
the entire time. The TED community ate this up: over 38 million views
over the last four and a half years. The talk has been subtitled
into 48 languages and counting. Here's how the TED website today
describes Professor Cuddy's work: It says that Professor Cuddy's work
reveals the benefit of power posing, and it makes a reference to this hormone level issue
I just talked about. National news outlets
gushed about the work, and many of us rushed to buy
her best-selling book. Now, I hate to be a downer, but I think we were all duped. Power posing doesn't work -
or at least not as advertised. Look at the myriad studies that have been conducted on power posing ever since Professor Cuddy
released her findings. They point to glaring errors
in her methodology; they question the entire theory
of power posing. Some researchers tried to reproduce
her work and couldn't; others tried to reproduce her work
and got exactly the opposite result: that power posing reduces
feelings of power. And it gets worse. Even Professor Cuddy's co-author,
who herself is a prominent academic, has completely disavowed the study,
and she did so in no uncertain terms. Here's what she said: "I do not believe
that power pose effects are real." (Laughter) And she goes farther than that. She confesses to glaring errors in the way
that they conducted their study: tiny sample sizes, flimsy data,
selectively reported findings. Even Professor Cuddy herself, who continues to stand by
the power pose theory, has sort of amended her story and now says that she's agnostic about power poses' effects
on hormone levels. Agnostic? That was the whole reason she went
on the TED stage and did this and did this and made us all do this and feel powerful. You might think this is a trite example, and it's by no means meant to be
a personal attack. Who cares if I stand
in front of you like this, if that makes me feel good or not? And what does that have to do
with solving the country's problems and the world's problems,
which is why we're all here? Well, a lot. Because this example is emblematic of something that we see
throughout academic research and throughout research in general. Virtually everywhere you look,
you'll find researchers, many of them prominent
and most of them well-intentioned, actively misleading us into believing
that bad research is proven fact. We've seen this time
and again in our work. When John and I
started the foundation, we set out to solve the country's problems
by attacking root causes. We were pretty new at this, so we figured the best place to start was to try to get our heads around
what we did and didn't know: as a community, collectively,
as a country. What do we know about what does
and doesn't work? After all, philanthropists
have been trying to save the country and make the country better
for many, many years and have spent billions
of dollars doing it. Governments -
state governments, local governments - have spent vastly more
on health care programs, social programs, anti-poverty programs,
job training, you name it. So, we assumed that there had to be
a massive body of evidence that could help guide
our investment decisions. What we found was really alarming. We saw bad research everywhere;
it didn't matter where we looked. For example, we turned to nutrition as potentially an avenue to address
our health epidemics. We wanted to get smart about
what factors and foods might contribute to chronic diseases:
obesity, heart disease, diabetes. And we found some very good research, but we also found an abundance
of studies like these: "Chocolate makes you skinny." Oh, "beer helps you work out,"
in case you didn't know. "Coffee prevents Alzheimer's." Super catchy conclusions. A researcher notices that people
in her study who are eating chocolate are also skinny and summarily comes
to the conclusion, or at least tells us, that they're skinny
because they eat chocolate. Same thing with the Alzheimer's study. Shoddy research, bad methodologies,
small sample sizes, correlations touted as causation, selectively reported findings -
just like the Amy Cuddy study. By the way, there are a lot of these: one might say
a lot of "alternative facts." (Laughter) In 2012, a group of researchers
randomly chose 50 ingredients from a cookbook: normal stuff like milk, eggs. And they took a look
at the body of research relating to these ingredients
to see what they could learn about whether these ingredients
did or did not cause cancer. Seems like a worthwhile exercise;
here's what they found. For every ingredient
you see listed in this slide - wine, tomatoes, milk,
eggs, corn, coffee, butter - there was at least one research study that argued that the ingredient
caused cancer, and at least one research study that argued that the ingredient
prevented cancer. What are we supposed to do
with this information? We turned to health care: same thing. We saw that the authors of the vast majority of clinical trials
reported in top medical journals silently changed the outcomes
that they were reporting. So, they said they were going to study
one thing, but they reported on another. Now, why would they do that? Look, I don't claim to be an expert. Scientific research
is enormously complicated; certainly clinical research is, as well. But one theory might be that the original studies didn't pan out
the way they wanted. So, they cherry-picked positive findings
from those same studies on secondary outcomes
and reported on those instead, so they could get published. Now, you don't have to have a Ph.D.
to start wondering whether maybe there could be
something fishy going on here. And these shenanigans happen everywhere. Take a recent project that we did:
The Reproducibility Project. We asked researchers to reproduce
100 psychology experiments that had been published
in top psychology journals in 2008. So, go do them again. If you do them again,
will you find the same results? That's what we wanted to know. You know how often they could find
the same results? One third to one half of the time. Now, I'm not claiming
that scientists and researchers are actively and intentionally
committing fraud. I'm saying there's something broken in our system where scientists are feeling the need
to report only positive findings at the expense of the whole story. How would they do that? Well, lots of ways. Let's say that a researcher wants to prove that there's a relationship between
eating blue M&Ms and prostate cancer, because that would be newsworthy. So, he designs and conducts an experiment, but he doesn't find a relationship. Okay, so he does it again:
same experiment. He does it again: no relationship. He does it 17 more times: no relationship. But on the 20th time,
he does find a relationship. Boom - he can publish: "Blue M&Ms cause prostate cancer,
a new study shows." He doesn't tell us
about the 19 other times he conducted that same study and failed. He just tells us about the one time
that he succeeded. That's called the "file drawer effect." The researcher could also tweak the statistical analysis until he gets the results that he wants. That's called "p-hacking." Or he could report results for a very narrowly defined group when in fact the original research study was meant to address a much, much larger group.
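To see how easily a finding like that can appear by chance, here is a minimal sketch in Python (not from the talk): it assumes blue M&Ms have no effect at all, runs the hypothetical twenty studies at the conventional 0.05 significance threshold, and counts how often at least one of them comes out "significant" anyway. Everything beyond the talk's twenty-study hypothetical is illustrative.

```python
# Minimal sketch of the "file drawer effect" (illustrative assumptions only):
# there is NO real relationship, yet running the study 20 times and reporting
# only the best result still produces a publishable "finding" most of the time.

import random

ALPHA = 0.05        # conventional significance threshold
STUDIES = 20        # the researcher quietly repeats the study 20 times
SIMULATIONS = 10_000

def one_null_study() -> bool:
    """With no true effect, a false positive occurs with probability ALPHA,
    by definition of the significance level."""
    return random.random() < ALPHA

published = 0
for _ in range(SIMULATIONS):
    # "Publish" if ANY of the 20 attempts crosses the threshold;
    # the other 19 go in the file drawer.
    if any(one_null_study() for _ in range(STUDIES)):
        published += 1

print(f"Chance of at least one 'significant' result: {published / SIMULATIONS:.0%}")
# Analytically: 1 - (1 - 0.05)**20 ≈ 64%, even though nothing is going on.
```

Roughly two times out of three, the drawer fills with nineteen null results and one publishable fluke; p-hacking and after-the-fact subgroup slicing stack further chances on top of that.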
Why does this happen? Well, there could be lots of reasons, but I would argue that a reason is that the incentive system
in science and in research is broken. In an ideal world, scientists and researchers
would be motivated by one thing: the pursuit of truth. But in the real world, scientists and researchers are equally motivated
by the desire to publish because that's the vehicle
for achieving tenure, that's the vehicle for getting funding
and for achieving recognition. Scientific journals are clamoring
for articles that report flashy results, so that those articles can get cited. They even have a term for this:
it's called the "impact factor." Then of course, there's the media and we, as consumers, we all want to hear those flashy results, and so researchers deliver, even if it's at the expense
of full transparency and rigor. And we're all the worse off for it. At our foundation, we try to break
this cycle and reform the system by funding organizations that promote transparency, good practices, collaboration, and data-sharing: the Center for Open Science, the Center for Evidence-Based Medicine at Oxford, the Metrics Center at Stanford, and health news organizations that hold the media's feet to the fire for how they report on research studies. And they're all doing terrific work,
but we need to do so much more. We didn't become philanthropists to hang out with academics and roam
the hallowed halls of universities. We became philanthropists
to change the world. But how can we even think
about what to change if we don't know what works? And how are we supposed to know
what works if we can't trust research? So, this is our problem;
this is our issue. This goes to the core
of who we are and what we do. We can't function if we don't have
a healthy research ecosystem. This problem of bad research
and bad science isn't limited to academia. This is just as bad in public policy. Anyone remember
the "Scared Straight" program? You know, at-risk youth would go
to maximum security prisons or meet prisoners, and the prisoners would yell at them
and tell them about their bad choices and how they had to live their lives
on the straight and narrow? We spent millions of dollars
on these programs, and they made intuitive sense
except that they didn't work. Research showed that these programs
actually increased the likelihood that these kids
would commit criminal acts. We should have known this much earlier. We would have saved millions of dollars and, more importantly, maybe we would have saved
some of these kids. Three federal departments - the Department of Labor, HHS,
and the Department of Education - have funded randomized, controlled trials on social programs administered either by the government or by the private sector. Now, randomized, controlled trials are the gold standard in research.
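Because the argument leans on randomized, controlled trials as the gold standard, here is a minimal sketch (not drawn from the government studies themselves) of why random assignment matters: a simulated job-training program with no true effect, where coin-flip assignment spreads motivated and unmotivated people evenly across both groups, so a simple difference in means stays honest.

```python
# Minimal sketch of a randomized, controlled trial (illustrative assumptions only):
# random assignment breaks the link between who enrolls and how they would have
# done anyway, so the treatment-control gap estimates the program's true effect.

import random

random.seed(0)

N = 2_000                 # hypothetical participants
TRUE_EFFECT = 0.0         # assume the program actually does nothing

# Each person has a baseline level of "motivation" that drives outcomes.
motivation = [random.gauss(0, 1) for _ in range(N)]

# Randomized assignment: a coin flip, independent of motivation.
treated = [random.random() < 0.5 for _ in range(N)]

# Outcome (say, an earnings score) = baseline + program effect + noise.
outcome = [
    m + (TRUE_EFFECT if t else 0.0) + random.gauss(0, 1)
    for m, t in zip(motivation, treated)
]

def mean(xs):
    return sum(xs) / len(xs)

treatment_mean = mean([y for y, t in zip(outcome, treated) if t])
control_mean = mean([y for y, t in zip(outcome, treated) if not t])

print(f"Estimated program effect: {treatment_mean - control_mean:+.3f}")
# With random assignment this stays near the true effect (here, zero).
# If the most motivated people self-selected into the program instead,
# a naive comparison would credit the program for their motivation.
```

That is the sense in which these trials are the gold standard: they answer "did the program work?" rather than "did motivated people do well?"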
Here's what they found: 70% of the employment and job training programs that the Department of Labor looked at had weak or no positive effects. Of the 28 teen pregnancy prevention
programs that HHS looked at, only three were worthwhile. And 80-90% of the education programs
that the Department of Education looked at had weak or no positive effects relative
to what schools were doing already. So we know so little about what works
in education, and in job training, and in policy in general,
even when we think we do. Take the federal government's
"What Works" clearinghouse, which was established
as a resource for practitioners to determine what works in education. It purports to rely on rigorous research. Let's look at this research: There is a study that had
only a few dozen participants. There's another study that was conducted
over the course of around 12 weeks. Then there's a bunch of positive labels on secondary outcomes
that ultimately don't matter. For example, the website labels
as positive a reading preparation program because research showed
that after going through the program, kids were able to recognize
letters of the alphabet. Well, you might say that's a stepping stone to reading,
so that sounds pretty reasonable to me. And it does, except
that the website doesn't tell us that those same researchers
studied that same program and found that the program had no effects
on kids' ultimate ability to read. Well, isn't that what we care about? So we're spending millions
and millions of dollars on programs that at worst don't work and at best we simply don't know, because we don't have sufficient or reliable data. So this needs to change; we need to stop. As a philanthropic community and as a policymaking community, we need to stop funding these programs and relying on this ecosystem
that isn't giving us the results that we need. And I've got some ideas on how to do that,
how to reform the system. First, more randomized, controlled trials. Those of us who work with governments, for governments, or in collaboration with governments need to demand
more randomized, controlled trials so that we can get evidence and understand
what does and doesn't work. Second, we all need
to follow the evidence; this is on all of us. Governments and philanthropists need to stop funding what doesn't work
and start funding what does work. We need to hold ourselves accountable
for these results and for this problem. This isn't something
that was imposed upon us; this is something that we've created
and that's on us to fix. And third, we need better data. We can't do research if we don't have
healthy data systems that are speaking to each other,
that are harmonized, that researchers can access
to give us the answers that we need. Now, here, too, there are excellent organizations
both within and outside the government that are working on these issues. Within government, the Commission on Evidence-Based Policymaking and the Social and Behavioral Sciences Team are conducting randomized, controlled trials and pushing an evidence-based policymaking agenda, and they're doing terrific work. In collaboration with state
and local governments and the nonprofit sector, the Rhode Island Policy Innovation Lab,
J-Pal, and Results for America are all contributing to reforming this ecosystem, either by conducting randomized, controlled trials or by working to promote the use of evidence. And thanks to the work
of these organizations, and many others, we now have some answers. We now know which programs do move the needle
on things that we care about, like child welfare, education, job training, and recidivism. Armed with that information,
aren't we in a better spot - as people who care,
as concerned citizens, as philanthropists, as policymakers -
to make good decisions? We need to follow the data. We need to focus more on research, and we need to demand that
of governments and of ourselves. Because policy ideas
that truly are worth spreading are the ones that work. Thank you. (Applause)