(upbeat music) - We're delighted to have Dan Carpenter from the Harvard Department of Government. Dan is the Allie Freed
Professor of Government at Harvard University, and he's developed some
of what I personally think is the most interesting work
that we've had on bureaucracies and in the discipline
of political science, and this is related to that kind of work. Dan has done a book, which
I probably should have had here to hold up, but I
bought it on a Kindle, so that kind of shameless
promotion is not possible anymore. - I'll do that. - All right, excellent, excellent. He wrote a book on the FDA a couple of years back, and he's here to share some
insights from that work and some projects going forward. So Dan, welcome to USC. - Thank you. So thanks for having me. I was invited to talk today about the FDA and the pharmaceutical industry
as kind of a general theme. So this talk will have basically two components and two purposes. One is to kind of give you a
general overview of the stuff that I've done in this area and to sort of pass along
some general lessons, including those in the book that Tony just mentioned. And the second is to
present some new results that we're working on with
my research team at Harvard. The first part is largely published, and so you should cite that. It's really the second part
where I'm asking you to be perhaps a little careful
with the citation patterns of what I'm conveying today. So why is pharmaceutical
regulation of interest to students of public policy, to
students of political science, economics, things like that? Well, by comparison with a
wide range of other industries, there's actually much heavier
governmental involvement in this industry, both in the
United States and worldwide. So states, by which
I mean nation states, although sometimes,
in some cases, as you've seen in California and Texas
with some of the bond issues and referendum-passed measures on funding basic things
like stem cell research, states fund some of the basic research that goes into this industry. Most of the applied research
and most of the money that is spent on
pharmaceutical and biotech R&D is in fact private money,
but still it's fair to say that there is a kind of a complementarity and a kind of a mutual
dependence between that work, which is often built off
of some of the basic things that are funded in part by
pharmaceutical companies and private foundations, but
also in part by government agencies like the National
Science Foundation or the National Institutes of Health. The state regulates much
of applied research, what I'm going to refer to as the conceptual power of
the regulatory state. Basically the ability of the government to define the vocabularies, methods and concepts that people use in research. And so much of the history of
the pharmaceutical industry in the 20th century is
actually the history not so much of science developing exogenously from government regulation, but science often developing endogenously within government regulation. I'm gonna show you one
example of that, okay. And so just, for instance, if you want in the United
States to test a drug and to basically transport that drug across state boundaries, you have to get an exemption from the FDA, which is called an
Investigational New Drug exemption or IND, all right. And that is basically a
prerequisite in order to engage in medical research with clinical subjects, which is to say human
subjects in the United States. By virtue of the diffusion
of those rules worldwide, that is now basically a global
order of regulation, right. The state is also a veto player over R&D, so if you're a car company and
you design a new automobile, for the most part the
government is not sitting there at the end of the line
with the ability to veto whether that project enters
the marketplace or not. Right, I mean, you can
talk about different ways in which the state enters
a wide variety of markets: licensing, land use permitting,
environmental permitting, things like that, but again
here it's quite strong and it's product specific,
it's not licensing firms. So Pfizer, say, or Merck
are not licensed by the FDA, although they're kind of certified in the way they produce drugs, but each and every drug
that they would wish to introduce to the market and on which they would
seek to generate profits has to be approved by the FDA, all right. And finally less so in the United States, but increasingly worldwide, once those products are on the market, the state also regulates
their post-market life, the prices they can charge, all right, which we see sort of in
Europe, say for instance, in the United Kingdom through
the National Health Service or for that matter in Canada, but also the way they are distributed. So there's been a couple
of recent developments at the FDA with the distribution
of opioid related drugs and the FDA basically trying to induce pharmaceutical companies to
less tamper resistant drugs. So that hydrocodone and
oxycodone based medications can't be kind of mixed into a soup that is more addictive, right. Now the general theory
that I've been working on, which kind of functions
as a background to this is what I call the theory
of approval regulation. And the idea is that the state is, again, in this capacity of
being a veto player over R&D, and that there is kind of a
simultaneity between firms, which seek to kind of develop
profitable investments, but the value of those investments is known only with a great degree of uncertainty, and that the state is also
regulating those investments, but again knows perhaps even less than the firm does about them. And so it's a world where
basically companies are trying to bring these products to market, they are not sure that
their product is profitable, all right, they need to
in part test the product through the R&D process
to be sure of its quality or at least to gain more
information about it, but again, there's this
veto player, right. Now, so the regulator is a
decision maker under uncertainty, which I described as kind
of stochastic, all right. Part of what we do is that
we then model and estimate, statistically, a set of
political constraints on the regulator, and when
I refer to the regulator, just think FDA in its
approval behavior, all right. So some of the work that we've done, and this is work that
other people have done too, is to describe how, even in the absence of political capture or political protection, large firms are often gonna
do better under this process. Why? In part because they're more
familiar to the FDA, all right. People who enter certain market niches earlier often do better, not again because there's
some capture dynamic, but because the regulator
can approve drugs for, say, a new cancer therapy as a way of kind of throwing a bone to patient advocacy groups
and things like that. So this is summarized in
some work that I've done under the title "Protection without Capture," where one gets protection
for larger producers, older firms, and earlier or first
entrants to a marketplace without there being any degree of kind of political purchasing or bribing
of the regulatory process. And then finally, in some other modeling that I've done with Mike Ting, who I referenced on the first slide, we've looked at the
endogeneity of R&D decisions and regulatory approval. And there's been some
subsequent development where we've looked at what happens in this world
to consumer confidence. So basically people
coming into a marketplace, in which there's a certain
degree of screening. So in theory bad products
might be screened out, and for the products that do
enter the marketplace, there's a lot of data produced about them, such as randomized clinical trials, and summaries of those data
make it into the label. And so the question is,
what happens to consumption? And then there is a more general model that's been applied more to antitrust by two economists, Ottaviani and Wickelgren, all right. But one problem here is that we have a lot of data and
theory about this problem, which is basically how
the regulators decide. We have a lot of data and
theory about how firms develop their products in R&D. We have very little data, although now again just
some emerging theory on basically how R&D and
regulatory approval strategies respond to one another. Question, yeah. - So my impression was
that FDA approval is a fairly objective
process: your clinical trial shows your drug is
statistically significant, and there is not much
subjectiveness or discretion. But these stories suggest otherwise. Are you suggesting
that for trials for certain diseases,
where there's more advocacy or something, even if the trials don't show it, there's a different
standard for approval? - Yeah, so if it were
really that objective, I think you'd find less
disagreement within the FDA and less disagreement within
the advisory committees that offer their counsel
to the FDA than you see. So I guess I would kind of
disagree. Basically, yes, the
process is very scientific. Yes, there's a lot of
data that informs it, but science, number one, doesn't
eliminate the uncertainty. And sometimes the science
generates more controversy than it reduces. So I think the process is
shot through with science and, in fact, rigor. I mean, what we know about these
products coming onto the market is probably greater than what we know in just about any other form
of industrial organization. That said, sometimes that information can generate controversy and subjectivity. For instance, we'll know
a lot about these products because they've been tested
in randomized clinical trials with thousands of patients, right, but from those trials we
might get a safety signal that suggests that, well, wait a minute, sometimes after 18 months, there's some hepatotoxicity, right, that's developing in the liver, right. How do you interpret that? Do you interpret that as something, which is so important that we
should thereby reject the drug, or something for which we should instead attach a
warning to the label, right. That's a controversy which
actually is generated by the scientific process, and which is actually not
so much reduced by it. Does that kind of...? Yeah, okay. So the book that I've done, if Tony had had the hard copy here, he would have been able to hold it up for you. It was published in 2010,
and it tries to unify historically, theoretically
in a conceptual manner, and empirically a large
number of these observations, all right. And so before I get to the sort of newer work, let me just give you two kind
of lessons from this book that I can sort of talk about. The first is that
it's commonly thought that basically the way
that the FDA evolved was in sort of three kind
of crucial enactments. Number one, the 1906
Pure Food and Drugs Act, which gave to the FDA,
which was actually then a bureau in the US Department of Agriculture, power in interstate commerce
to govern food and drugs. In 1938, it got this
pre-market approval power, but only for the question of safety, not whether drugs actually
worked, all right. And then along comes the
thalidomide tragedy in 1962, which essentially didn't
occur in the United States because this woman, Frances
Kelsey, held up this drug which was Contergan, thalidomide, which made its way into Germany. There were thousands of birth
defects, things like that, but the usual story is
that only in 1962, after that tragedy in Europe, did the FDA begin to regulate efficacy. And in fact, people have
used that sort of before and after comparison in
a wide variety of studies in economics and political science to try to essentially
estimate what the effect of efficacy regulation is
versus safety regulation. Well, basically one
historical lesson of this book with a lot of time spent
in the FDA archives as well as pharmaceutical company archives is in fact that the FDA
was regulating efficacy more continuously in kind of an upslope from the late 1940s all
the way up until 1962. So there's no sort of tight
boundary pre and post, right? So here's just an example. Erwin Nelson, who is the
head of the drugs division in the FDA in 1949 gives a speech to pharmaceutical company representatives, in which he says, look,
we want proof of safety. That's what the law says, but we also want proof of efficacy. This is one of those cases where simply by communicating
things in a speech, a federal agency manager
or bureaucrat or regulator can often go beyond what the law says, not in a way that's illegal, right, but in a way that's kind
of non-statutory, right. And so nowhere did the FDA's rules say that you have to
prove efficacy to get your drug approved, but increasingly in
speeches they're trying to give this message. And if you look at sort of the trade journals
during this period, industry trade journals
where they're talking about what the best new investments are
in the world of chemicals, pharmaceutical drugs, foods. They're basically complaining
that the FDA is making a lot of these kinds of statements. Like, all right, now we don't
know what the criteria are because we thought it was
safety 10 years ago in 1938, but increasingly it seems to be efficacy. And if you want lots and
lots and lots of those quotes with lots and lots of citations, consult chapter three of my book, which is about 100 pages long, all right. Too long, but it's got all
that data available for you. As evidence also of this,
basically the FDA began to use refuse-to-file, or RTF, judgments, which is to say we're
not even going to review your drug application unless it meets certain minimum criteria, and I've sort of listed those here. And this was a draft federal
register document in '54, the new drug application
form was finalized in 1956. That's five to six years
before thalidomide hits and, again, before the drug efficacy
amendments were passed. And it says, an application, I'm just gonna read this for you, may be incomplete or may be refused unless it includes full
reports of adequate tests by all methods reasonably applicable to show whether or not
the drug is safe for use. That was a way that they
enabled efficacy regulation by saying not just safety
in terms of toxicity, like do you explode
when you take the pill, but safe as used, right? And that was a way of getting into how the drug was gonna be
used and for what purposes, and with what effects? The reports ordinarily
should include detailed data derived from appropriate animal or other biological experiments and reports of all clinical
testing by experts. Those experts must be qualified by scientific training and experience. That was code for you better have a PhD in clinical pharmacology on your team, otherwise we're not even gonna
look at your application, all right. And it should include detailed information pertaining to each individual treated, including all these variables, results of clinical and
laboratory examinations made. So if you took a blood
sample from this person at the beginning and/or in the
middle of a clinical trial, the results of that had
to be available on paper, not just in sort of a summary statistic, and a full statement
of any adverse effects in therapeutic results
observed, all right. So you have to tell us
the therapeutic results in order for us to even look at your drug application, right. This again was six years
before thalidomide occurred. One of the things that the FDA
was doing about three decades before it occurred in
Europe was literally getting the raw data from all these
new drug applications. What happened in Europe
was companies would send statistical summaries from
their clinical trials, often highly observational,
not randomized. In the US and this predates thalidomide, you'd not only have the raw data in the sense of the numerical dataset, you would have all the paper data from which the numbers were coded, and they would literally
go back and recode and examine the
sensitivity of assumptions. They were literally decades ahead of where
Europe was at that time, at least in terms of
statistical methodology and replicability. If you actually look at the
approval time distribution, how long did it take from the time that a drug was sent in, for
those drugs which were approved, we're only looking at drugs
which were approved here. How long did it take them to get approved? Okay. You see that in the early 1950s, and these are quantiles of the
approval time distribution. So this is the time by
which down here 25%, the first 25% of the drugs are approved, the first 50% of the drugs are approved, the first 75% of the drugs are approved, and here 90% of the drugs
are approved, right. So this tells you something
about, if you will, the tail or the outer tail of
that distribution, right. If you look in the early
1950s, it's very quick. And in fact the statutory standard is, they're supposed to be
approved within six months, all right, or reviewed within six months. So if approved, then
approved within six months, but you can see here a sharp uptick, not only in the median,
but also the tails, right, whereby by 1960 before anybody
knows what thalidomide is, all right, before there's any idea about officially adding efficacy, the FDA is already, at the
median, going past the congressional standard,
which was not binding, all right, but at least
was a recommendation. Now is this proof of efficacy regulation? No, right, but is it
consistent with the story that the FDA was getting more stringent during this time period? So it's worth keeping in mind that
if you just estimate this in a sort of regression model,
asking basically how long it takes the
FDA to approve these drugs, and you control for the amount of staff that the FDA had at this time, right, the effects get stronger, not weaker. Why? Because actually the FDA staff was tripling during this period, right. - [Man] (indistinct) the volume
of application (indistinct) - Yeah, yeah, so if you
control for all those things, it's pretty clear
that this is something other than backlog
and/or resources, right. And, again, I'm just showing
you some statistics here, this is not just from an
estimation here, right. And again, this is not efficacy per se, it could be a whole bunch of things, but basically it's
consistent with the story, the procedural story that you
can tell elsewhere, all right. Second lesson, so that is, if you will, kind of gatekeeping power, all right. When we talk about the veto
power, the gatekeeping power that the FDA has over the marketplace, this is one form of it, and they get to define
what standards are used separating the wheat
from the chaff, right. And in so doing they're
actually able to define other kinds of standards. So if you read the financial pages, say of The Wall Street
Journal or The New York Times, and you refer to a biotech stock, right, something you're kind of interested in, you'll often hear this,
okay, Verolta Pharmaceuticals had a promising candidate for
non-small cell lung cancer that failed in Phase 2 trials. You might ask, so where does this Phase 1, Phase 2, Phase 3 stuff come from, right? Well, again, this is a
general lesson of the book, consult chapters four and
five, if you want more, but basically this is a
creation of the regulatory state imposed upon medical research
and scientific research, not the other way around, right, and there's a long history that goes into literally when these phases
began to get drawn up, all right. The key rules were
written in 1963, right. There were a few phased trials
before that, which were actually sanctioned by the
National Cancer Institute. In fact, most of them were run by
the National Cancer Institute. So the story of the development
of phased experimentation, the idea that one not only
runs a test for a drug, but you run one set of tests, successful passage through which
becomes the hurdle to go to the next set of tests, successful passage through which in turn becomes the hurdle for the
third set of tests, right. This idea of sequential
experiments, right. That is a regulatory imposition, not only on the pharmaceutical industry, but in fact, on the
entire medical industrial university complex in the United States, and in fact worldwide. Every human clinical trial
now that involves a drug, all right, of any sort is
essentially going to be classified into one, two or three. Now there's four, and
there's technically a zero, but those are just further glosses on this basic structure, all right. - [Man] And the origin
of that was the FDA? - FDA, yeah, and you can find the original documents cited there. Again, some ideas about
this were thrown around by the National Cancer Institute
as well in the late 1950s, but the original idea of sequential experimentation
actually comes out of an animal
pharmacologist in the 1940s looking at how to test for the safety and nutritional value of
different feeds for livestock. And part of what they're interested in is what's the acute effect and
what's the chronic effect? And if you think about
Phase 1 and Phase 2, it's kind of a development from that. You're looking in Phase
1 at kind of, all right, do people explode when
they take this pill? Do they basically fall over? Phase 2 and Phase 3 are what
are those longer term effects? You're moving from acute
to chronic, all right. Well, again, this is not only or not purely endogenous to science. In fact, if anything,
it's imposed upon science. And if you follow the pharmaceutical
industry, you'll know, for instance, that if a
company is not publicly traded and it's getting its money
from venture capital, the people in that company are often paid by benchmarks, right. Have you met a certain benchmark? Then the money comes in. Well, the benchmark in
a lot of these cases, and this is by the way how a lot of the money is made in the biotech industry, is often the successful
completion of a phase. So literally the way that
pharmaceutical payment contracts are structured in the biotech sphere, for those companies that
are not publicly traded is in fact shaped by these
regulatory categories. So it's not simply
conceptual power in science, it's conceptual power
in science that shapes the structure of industry
and payment contracts. So too if you want to look at
where the big movements occur in asset prices for
pharmaceutical companies, it's often on the announcement
of Phase 1, Phase 2 or Phase 3 results,
often are also approval, advisory committees and things like that. So the major pivots for stock prices, for those companies
that are publicly traded also observe at some level
this conceptual structure. It's been very powerful,
and it's a simple idea, right. Let's just set up a set
of experiments in sequence, seriatim, but it's
affected not only science, and with it most of the things that go on at the Health Sciences Institute here, but also the
structure of business, right. Again, read the book, if
you like more on that. So now I wanna shift gears a little bit to talk about a claim that's commonly made about pharmaceutical
regulation and innovation, and sort of here is the more
speculative part of the talk and also the part that
might be more relevant for pharmaceutical public
policy communities. So there have been numerous claims made
of regulation on innovation. What do we mean by innovation? The number of new drugs, particularly new molecular entities, molecules never before
marketed, never before used in widespread treatment
in any other capacity. And the claim has often been made that this regulation has
reduced that innovation. Not necessarily, by the way,
in a way that's negative in net cost-benefit terms, because you could say, well, look, we're getting rid
of all these safety problems, we're getting rid of the crap, could be that we're better off. But the argument has been nonetheless an observational argument,
an empirical argument that in fact, after the
imposition of this regulation things went down, I'll
get to that in a minute. So claims have been made
comparing things before and after major laws, including some of the work that I've done. Claims have been made internationally, so there was this old literature
on "the drug lag" in the 1970s about how many of these drugs were
reaching England in particular and some other countries in Europe before they were reaching
the United States, particularly with things
like beta blockers, cardiovascular treatments, right. The claims again are usually
about reduced innovation, although there are arguments
that go the other way and say, actually innovation or the
larger sort of properties of the health system are improved, that sort of go off the
lemons argument in Akerlof. The argument, loosely stated, is, well, once you start getting
rid of quack cancer treatments or once you yank
tranquilizers off the market as the FDA did in the 1970s, you start to improve the
market for cancer treatments because the bad stuff doesn't
crowd out the good stuff, all right, but again, these
are just a set of claims. The problem with a lot of
these claims is twofold, and I'm gonna separate
what we usually refer to as endogeneity into two senses here. Strict endogeneity in
the sense that basically regulation often responds to
patterns of economic activity, which themselves respond
to regulation, right. That's the endogeneity of
the kind that we can model and that I have modeled
with Mike Ting, all right. So in approval regulation, all these things coming to market, right, only the FDA can't regulate, or at least can't sort of
make a decision on something that hasn't been submitted to it, right, but firms develop and submit according to their expectations
of regulatory behavior. And those expectations
are probably correlated for what it's worth with
a lot of other things that change around the time of regulation. So if you're looking at the
late 1930s, early 1960s, a wide range of scientific
changes going on in terms of pharmacology,
applied chemistry and things like that, all right. The other problem is
non-random assignment, which is the usual thing we care about in these kinds of questions, right. I'm separating that from endogeneity because, again, endogeneity is something at least partially we can model. Non-random assignment,
I don't know everything that might be correlated with
the application of regulation in the New Deal, in the
early 1960s, in the early 1990s, but suffice it to say
if our research design is premised upon a before
and after comparison, well, lots of things might be
correlated with that, right. So here's an example from one
of the most famous studies of Sam Peltzman on the 1962 amendments. And so what he did is he looked at 1962, which was when these
efficacy amendments passed. And he said, well, look,
the actual number of NCEs, which is this series
right here, went down. Now, by his production
function, the way he sets it up, it shouldn't have gone down that much, it should have stayed higher. And so he has a counterfactual, which is the higher one here, and the split between these
two functions occurs in 1962. And he wants to argue
that difference after 1962 can be attributed to regulation and he finds or claims in other work that the cost of this is not made up by better therapeutics, right. Now, this is a pretty influential article, and to give him his due, this
was published in the 1970s, but one might worry about
essentially basing policy on a 14-point time series followed by a 10-point time series, right, and estimating two different
production functions there. But the second problem is that this isn't really
kind of a treatment or an intervention in any way that we can plausibly call
experimental, all right. And again, this is where I think an historical perspective
actually helps. For one, as you notice, the
sort of new chemical entities are falling from a peak in
the late 50s, early 1960s, before 1962 happens. And perhaps my chapter and some of my work on the application of efficacy regulation in the 1950s might explain that, but at the very least we don't
have a clean before-control, after-treatment kind of world here, right. If in fact, as the numbers I
was showing you earlier suggest, the FDA is beginning to regulate efficacy here, then we really can't trust a
lot of the kind of judgments that we're making by
comparing things before and after a given date. Again, to be fair, he was writing something
three decades ago. - Does this in some sense coincide with your previous figure, which showed that there
was a structural break a few years before, probably
sometime in 1959 or 1960? And this kind of shows that, yes, there is also a
structural break in the- - Yeah, right, so I think mine
could explain that, right. In part, there are two other problems here. One is that neither he nor I control for industry concentration, and there's some emerging
evidence from the literature that actually suggests that one reason we've seen a little bit less
innovation in recent years is precisely because of merger
and acquisitions activity. I can reference that separately, and that was occurring heavily
in this period as well. Now you could say, well,
that's endogenous to regulation because people are
facing a tough regulator, they wanna develop regulatory
affairs departments, get big to basically be
able to handle all this. That's quite possible, it's tough to kind of
disentangle and sort that out. I agree actually that if we're
looking for the reason why we come from this rough
mean down to this rough mean, that smoother regulatory function is probably a plausible candidate, right, but the point still remains that a before and after comparison
using 1962 is not valid. - Yeah.
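The before-and-after design being criticized here can be sketched in a few lines of code. This is purely illustrative: the annual NCE counts below are synthetic, not Peltzman's data; the design simply fits two short linear trends and attributes the gap at the 1962 break to regulation.

```python
# Illustration of a before/after trend comparison: fit separate linear
# trends to a short annual series of new-chemical-entity (NCE) counts,
# split at 1962, and attribute the gap at the break to regulation.
# The counts are synthetic, for illustration only -- not Peltzman's data.
import random

random.seed(0)

years = list(range(1948, 1972))  # 14 points before 1962, 10 after
counts = [(45.0 if y < 1962 else 20.0) + random.gauss(0, 5) for y in years]

def ols(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

pre = [(x, y) for x, y in zip(years, counts) if x < 1962]
post = [(x, y) for x, y in zip(years, counts) if x >= 1962]
a1, b1 = ols([x for x, _ in pre], [y for _, y in pre])
a2, b2 = ols([x for x, _ in post], [y for _, y in post])

# The "effect" this design attributes to 1962 is just the gap between
# the two fitted levels evaluated at the break point.
gap = (a1 + b1 * 1962) - (a2 + b2 * 1962)
print(f"estimated drop at 1962: {gap:.1f} NCEs per year")
```

With only 14 points before the break and 10 after, the fitted levels at 1962 move substantially with the noise draw, which is one version of the small-sample worry just raised, quite apart from the non-random-assignment problem.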
- Right, yeah, so, okay. So what to do? Well, here's where we have an idea, and this is a story
that's actually taken in part from the first chapter, or
the introduction, of my book, "Reputation and Power,"
but I'm repeating it here and actually talking about some features that I don't talk about in the book. So you may know of Genentech, it's kind of a darling of the
California biotech industry. It's now a quite big and profitable firm, goes up and down, but it used
to be a tiny little firm, and it had a very small drug called tissue plasminogen activator or Activase, and it submitted it to the FDA and was quite confident in fact that it was going to
be approved, all right, but a Food and Drug
Administration panel in June 1987 basically said no, voted against approval of the drug, right. And basically it wasn't approved. And it's important to keep in mind, when the FDA says no to a drug, it never says we will never
accept this molecule ever. All right, they wouldn't
even do that for cyanide. I mean, legally they can't. What they say is, and it's kind of like if you're an academic and you
submit papers to journals, it's like getting an endless R&R, again and again and again and again without the certainty of ever
getting an approval, right? So sometimes when the journal
editor comes back to you and says, look, next time,
I'm gonna give you an up or down decision, the FDA never says that. And that's actually a
huge source of complaint among pharmaceutical companies, like give us an if then statement, so that if we provide you this evidence, we're gonna do that. Now with some work I'm
doing with a game theorist and another work I'm
doing with an historian, we're actually trying to
tease out why the FDA follows this kind of strategy of ambiguity. And the difficulty is
that it's very reluctant to kind of commit to a certain
model of saying, all right, if you do this, then we'll do this, because then they feel that the firms or other firms can, number one, game that and just basically come up
with a weak satisfaction of the if part of the hypothesis; and second, that they're setting, and this is I think the real reason, they're setting implicit and sometimes explicit
precedents for other firms. And that's the other reason they do it. I'm not saying by the
way that's good policy, I'm just saying, that's the rationale that we think was going on, but this was bad news
for Genentech, all right. This happened on a Friday, and if you follow government agencies, particularly in Washington they
often announce these things after the market closes,
this was one such example, but when the market reopened
for trading on Monday, right, Genentech stock dropped by about a quarter and about a billion dollars
vanished, just like that, right. And so, this is kind of
interesting for two reasons. One, there were kind of
surprises to this, right. A lot of people did not see this coming, including a lot of people
who had bet a lot of money on Genentech, not just
people at the company itself, but Genentech was publicly traded, right. So, and you can insert if you
want your snarky reference to the Romney victory
party in Boston here, but they actually had planned a company executive victory bash, right, which wilted, and I couldn't have written this better myself, into a combination wake and strategy session. Try that sometime after your next professional difficulty, okay. And then the other thing is, there's kind of, if you will, a peer or alter effect: a lot of other firms are
looking at this and saying, oh, crap, Genentech just got shot down, now what are we gonna do? Right. And so here's one of these
people quoted anonymously. It's like, well, wait a minute, now the FDA has kind of
changed the ball game here. Something that we thought was a sure thing they've kind of raised the bar or we're not sure where the bar is. So you see what we're getting into. So here's the idea, all right, it doesn't solve every problem
that I just talked about, but it gets at how to assess
the effects of regulation or regulatory decisions on innovation. We're going to use events like this, they come with a certain
degree of surprise. We can measure that surprise in a general equilibrium
financial market, all right. We're then gonna use those
surprises as weights. So every time the FDA makes
one of these decisions, it's going to be weighted
only to the degree that it moves the market. We're going to filter that price to try to get rid of other
contaminants, all right. And then we're gonna use
that essentially to see how it affects
not what Genentech does after it gets its drug shot down, but what other firms do with that? Okay, that's the strategy. And by the way, I think
this is at some level consistent with the larger story
that the book tries to tell because gatekeeping power, and
for those of you who are in political science who study vetoes, right, the power of the veto is not
simply the power to say no to something that comes your way, it's to induce everybody
else who would send something your way to begin thinking twice about whether they want to
send it in the first place. All right, so gatekeeping power is not simply the power of decision, it's the power of induced anticipation. Question? - Doesn't this confound the kind of regulation or regulatory change? It's not just that the FDA changed its stance; instead, Genentech might have disappeared, and that influences the
behavior of competitors. So competitors are responding
both to FDA getting stricter about antibody agents, but
they're also responding to the fact that-
- Right. - Genentech might no longer
be in the market, right. - So there's a set of
complicated effects here, and so for purposes of statistics, what I'm presenting to you is an average across all of those. It's what a statistician would call an average treatment effect of this. That is gonna combine both the
response to the FDA, right. It could be the higher bar,
it could be FDA uncertainty, and it's going to combine the fact that other people might see opportunities, which means that if anything,
I'm probably underestimating these effects upon innovation, right, because what I'm gonna
show you is an average, it's a composite of all those things, but one of those components is probably, I can't say for sure because
we'd have to net this out, and we're in the process of doing that, but one of the building
blocks of that composite is probably positive, which is to say other firms might see an opportunity here and might actually continue
with their development projects, not pull them back. I do tend to think actually that the way that most firms respond to these things is that the regulatory effect washes out any like market opening. You see that quite commonly
because the bottom line is all these other companies, right, who would wish to get
into the market, who say, ah, Genentech might no longer be there, but if they're gonna
be where Genentech was, take up that niche, they're gonna have to pass
through the regulator too, right. So, again, so what you're saying is very interesting and useful, and basically it's gonna depend on defining the set of
competitors quite exactly. What's the therapeutic
marketplace or niche? What's the mechanism of action? And we're doing that in a
further extension to this, but right now what I'm
giving you is essentially an average across all those. - (indistinct) when its
decision comes through, other firms in the industry,
if they're in Phase 1 or 2 or 3, they're not pulling their drug at that point, are they? - Oh yeah.
- Yeah. - Voluntarily?
- Oh yeah. - They're not going through that phase and seeing how the results.
- No. So I'm in Phase 2, I'm plucking my- - People drop midstream all the time. - They simply drop, not based on the
results of that current phase. - The external factor by the way doesn't have to be regulatory. It could be we had a bad budget shock, we had a new sort of Chief
Financial Officer come in, looked at our portfolio of
active projects and said, we don't like this. And if you're going to
make that decision to kill, why wait until something is done? If you think you have
enough evidence already and you're just gonna, you're
gonna make a business decision to say, all right, stop
this clinical trial. Now there are issues about
human subjects protection and things like that that
might extend the clinical trial a little bit further
in today's environment, but again, this does happen midstream. - [Man] But there's plenty of
evidence of a drug doing well in Phase 1 or Phase 2
and still being pulled. - Oh yeah, absolutely, absolutely, yep. Now that's anecdotal, I
mean, it's kind of hard to sort of quantify a drug doing
well in Phase 1 and Phase 2, we've got some ideas about how to do that, but plenty of examples
where that's occurred. - So when you look at these events, are you looking at events where the FDA decision was a surprise? Because in this Genentech
case, it seems like they actually showed that their drug reduced this particular enzyme
or whatever thing it was. And they just (indistinct)
that means improved survival. And FDA didn't buy the data (indistinct) versus a clinical trial
where it just failed because there was- - We're not looking at those because those would have
happened anyway, right. So we're looking at cases where it's the regulator
associated with an event and we're using the stock market shift as an indicator of the surprise, right. And the idea here is if
we're trying to sort of be kosher with our statistical estimation, we want something that's
one, non-anticipable, which is another way
of defining randomness, and two, conditionally not correlated with all the other things
that we're worried about that might be correlated with that, right? So I don't have a
background model here today, but basically here's the kind of approach that we're talking about. So imagine that a firm is
choosing dynamically every moment in time, dt, if you will, between a certain drug
that it's developing, and this is by the way not Genentech, this is Genentech's competitor, right. Between a drug and a safer investment, which gives you a known return, which we're just gonna call
a put option, all right. And it values this, the value of its investment is stochastic, and it basically is a
function of an initial state followed by an exponentiated X, all right. So this is basically always positive; think of this as kind of analogous to a stock price, right. And this X is gonna be what we call a Levy process, all right. And that means it can have
these more continuous things like a Brownian motion or Wiener process. It can also have jumps,
which are these kind of very discontinuous up and
down movements, all right. Now, and I'm just gonna wave my hands
at the French mathematician, Paul Levy, if I assume
the following things: independence of the
increments from one another. So given any given history, the
next movement is independent of what came in the past, all right. Stationarity, all right,
so basically the idea that the expectation of
these movements at any time is itself moving in a stationary way. And continuity, by which I mean continuity in probability of the increments: obviously there's discontinuity in the jumps themselves, but the probability function describing them is continuous. There's something called the Levy decomposition theorem and a set of other results that say, basically, anytime you make just these three assumptions, you always get a Levy process. The Levy process in turn is
essentially described by, and I'm just gonna wave my hands rather than give the
kind of full equation here, but it's a linear trend, all right, which could be zero, right. Brownian motion, which is
this kind of little thing, butterfly popping around. And then, again, I'm just
doing this to be kosher because there's a cutoff at one, and in the full theory you can't just integrate over the jumps, so all this stuff here
is just discontinuous jumps, all right. So every Levy process is
a sum of Brownian motion, a trend and jumps, and each component, the trend, the jumps and the Brownian motion, are
independent of one another, all right. So the idea here is this, again, what we wanna do is focus on these jumps, again, just I'm gonna wave
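The decomposition just described, an independent linear trend plus Brownian motion plus discontinuous jumps, can be sketched as a discrete-time simulation. This is an illustrative sketch, not part of the talk's actual analysis; all parameter values are invented.

```python
import random

def simulate_levy_path(n_steps, trend, sigma, jump_prob, jump_scale, seed=42):
    """Discretized sketch of the Levy decomposition described above:
    each increment is a linear trend term, plus a Brownian (Gaussian)
    term, plus an occasional discontinuous jump, with the three
    components drawn independently of one another."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        increment = trend + rng.gauss(0.0, sigma)  # trend + Brownian part
        if rng.random() < jump_prob:               # rare jump component
            increment += rng.gauss(0.0, jump_scale)
        x += increment
        path.append(x)
    return path

# 250 "trading days" with a 2% chance of a jump on any given day:
path = simulate_levy_path(250, trend=0.0005, sigma=0.01,
                          jump_prob=0.02, jump_scale=0.10)
```

Exponentiating the path, as in the talk's setup where value is an initial state times an exponentiated X, keeps the simulated value strictly positive, mirroring the stock-price analogy.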
my hands at all this kind of lovely math and say, that's jumps. What's left over is something
that at least in a reasonably functioning general
equilibrium financial market is already priced in, right. And then noise, right, which means actually there's, every time we observe one of these jumps, a little bit of it is due to this, right. So we actually have a little
bit of measurement error, but we can plausibly claim
that measurement error is itself random or not anticipable, okay. So that's what's happening
for a given firm, but maybe the firm, and this is, again, one of Genentech's competitors, okay. So let's call it genome therapeutics or something, all right. Maybe its decisions
depend on its observations of another firm like Genentech, right. So that the value, alpha, is a function
another product, not its own, whose success or failure, and that includes success or failure in the regulatory domain tells that firm something useful about
its own product, right. Now we don't see that other
product as analysts, right, as somebody crunching the numbers, I don't see what's going
on with that other product, but I do see a stock price that's based in part upon that product, right. And what I'm gonna focus on here is the negative jumps. And I'm gonna do the same Levy
decomposition I did earlier. Right. If again, it has these properties, I can reduce it to linear
trend, noise and jumps. I'm sorry, yeah, noise
and jumps, all right. So those jumps in theory, and we can actually test
some of these things, should be not anticipable, you can't tell they're
coming ahead of time. One sufficient but not
necessary way of getting there is just to assume a perfect market: if you could know, you'd make a lot of money, and therefore all that information is
already priced in, all right. But again, it's also, if not anticipable, uncorrelated in expectation, given the information up to that point, with other bases of firm information, all right. So I'm gonna make the claim
this is plausibly random, it's not an experiment, but
as you know, plausibly random. So here's the idea, the research design is, we're
gonna use Wall Street Journal stories on FDA rejection,
request for more data, for drugs under NDA submission,
but not yet approved. Right, so we're gonna take these stories, we're gonna compute either the
day those stories come out, the day the FDA makes the announcement or sometimes the company
does or the day after, if that's the trading date that's relevant like the Genentech case,
just the one day shift in the asset price for that
sponsor, the stock price, right. You could say, we should do more, and we've done a little bit of that and we're looking at other filters, but the idea is we want to
capture only the effect that event had and not some other event that might happen, like somebody got fired
or somebody came in, there was some new sales figure that came in; we wanna capture only that event, right. We apply that as a predictor to whether all other firms' development projects, which is to say all the thousands of drugs they're developing, happened to get dumped in the months following
or continued, okay. So we observe from the
early 70s to December 2003, and this is actually for
the most part 1987 to 2003 or 1985, most of our analysis
is focused in those 18 years, about 187 of these, right. And if we analyze basically what's the correlation
of those shocks, right, that is, the shock in the stock movement, with a set of things that we can measure, we tend to find not much correlation. So do the shocks get bigger over time? No, they don't get bigger or smaller. Are they correlated with
the beginning price? Because one of the ways
we're measuring these things is as the percentage change, so you might be concerned
about a denominator effect. Again, 0.05 correlation, not
statistically significant. Are they partially correlated
with the size of firms that are developing
drugs at the same time? Again, they're not. Are they correlated with
the general movement in the stock market that day? Well, not surprisingly, yes, because on the same day
something could have happened. The Labor Department could
have come out with a report that said unemployment
is going up or down, it could have been some
major market shift. It is correlated, although not a ton, but one thing we can do, and which we do, and I can describe this
as we essentially purge our estimates of this general movement. So what we're looking at is
essentially the specific firm's movement purged of the general
market movement, right. And we're working on tests of whether these satisfy the Levy properties. So some threats to inference might occur. Let me just sort of
give you a little bit of the soft underbelly of the
research design here, okay. What finance specialists will call volatility clustering is a possibility. And that's the idea that,
well, you can't predict whether the stock is going
up or down on a given day, but if the stock is moving
around a lot one week, it's been shown that it's more likely to move around a lot the next week. So there's first-moment independence, but there's not second-moment independence and stationarity in many cases, right. And we are, again, still
working on a purge. If this were a problem, it would not so much change the validity as change the interpretation of our estimates, from one of the FDA changing its bar, raising or lowering it, to one of the FDA becoming more uncertain, but that's a significant
enough change in interpretation that we want to track that down. The second issue is, the FDA does not report on all of its negative decisions, so you actually have to go to news services, all right, including The Wall
Street Journal or others to track when the FDA hands
out a negative decision. And the reason is, it's
a complicated exception to the Freedom of Information Act. If you ask the FDA, is a drug from Pfizer currently under review at your agency? The FDA cannot answer yes or no. That is considered
proprietary trade information. You cannot request information
about that application under the Freedom of Information Act. Again, because it's
proprietary trade information, and whether that's
a good policy or bad, it sticks, right. So we actually have to look in the news for reports of this sort, and it could be that only
surprises of a certain magnitude are likely to get reported. That does not change the
fact that the day before they're reported, they're
not anticipable, right, but it might change something about the distribution
of what we're observing. And then finally there is
someone who actually knows that these decisions are coming, right, and that's the regulator or the regulators themselves, right. So you might know of Martha Stewart and the time she spent
as a guest of the state. I hope she doesn't watch the YouTube here. She was actually brought up
on charges of insider trading, but actually got convicted
on charges of perjury in that investigation. Sam Waksal was also, I believe indicted, I don't
know whether he went to, I don't know the exact story, but he was also part of that case. Here's a case where an
insider, a chemist at the FDA, all right, knew that drugs
were going to be turned down or delayed, all right, often focused on small biotechs, right, and bet on shares falling
after negative decisions and sold shares to avoid losses. So exactly the kind of
thing that was occurring. If this occurred a lot, all right, like this was an everyday occurrence, and people like this didn't get caught, that would be a big problem
for the research design I'm presenting you because
essentially it would mean that a certain part of that
surprise is essentially priced out or priced into
the market before it occurs because of all this kind
of trading, all right. Now I'm presenting it
because it is a concern, but the reason I don't think it violates the sort of validity of this
research design is twofold. First off, these people do get caught. Mr. Liang is now serving five
years in a federal prison, all right. Second, the extent to which
they can make money off of this, right, is limited by the degree
that if they traded so much as to cause me as an analyst problems, they would be all the
more likely to get caught. So they can make a lot of
money for an individual, right, they can't make so much money that they begin to really
change the stock price. If they do, they're far more
likely to get caught, right. If a couple of days before this you see like a 2, 3, 4% swing in a stock price
even the SEC, I'm sorry, but the SEC has been getting
a lot of criticism lately, a seven-year-old with a
spreadsheet would probably be able to pick up that kind of activity and detect the insider trading, okay. So here's what these asset
price shifts look like. This was a fraction change, so if you're looking for percentages, just multiply by 100, all right. So the mean is about a 10,
20% drop in stock price after one of these things occurred. Sometimes there's just
not much of an event, so these are the kinds that get essentially weighted to zero,
it's as if they don't occur, those rejections don't occur. Some of them are companies
losing 75% of their value. Now one of the things you
might be concerned about, again, is that some companies
might be more likely, conditioned on this happening, to lose more of their value than others.
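The two versions of the treatment discussed here, the raw one-day surprise purged of the general market movement and the binarized cutoff version, can be sketched as follows. The beta-equals-one market adjustment and the minus-10% cutoff are assumptions made for this example, not the values used in the actual analysis.

```python
def one_day_surprise(price_before, price_after, market_return=0.0, beta=1.0):
    """Fractional one-day price change around the announcement, with the
    general market movement netted out. Treating the firm's market beta
    as 1.0 is a placeholder assumption; the talk says only that the
    estimates are purged of the general market movement."""
    raw = (price_after - price_before) / price_before
    return raw - beta * market_return

def binarize_shock(fractional_change, cutoff=-0.10):
    """1 if the stock fell past the cutoff and counts as a shock, else 0.
    The -10% cutoff is an illustrative choice."""
    return 1 if fractional_change <= cutoff else 0

# Genentech-style event: the stock loses a quarter of its value
# overnight on a day the broad market is flat.
w = one_day_surprise(48.0, 36.0)
print(w)                      # -0.25
print(binarize_shock(w))      # 1: a real shock
print(binarize_shock(-0.02))  # 0: essentially a non-event, weighted to zero
```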
So one of the things we do, in addition to using the raw value purged of the general movement is also to binarize the treatment, which is to say, let's have a cutoff say
right here, all right. Did the stock price fall
more than this amount as opposed to that amount? For what it's worth actually,
that does reduce the error in the models that we estimate
quite a bit, all right. So that might suggest
that there's a lot of extreme bouncing around this
distribution, all right, but we do both, all right. And the other thing we do is essentially we observe a list of
thousands of drug projects that are undergoing development
at a given point in time. And essentially if you've
used Cox models before, we essentially use a
Cox model of duration, how long does it last before
it's abandoned, all right, but it's a little different, in that the analysis is
conducted not only across drugs, but within drugs. And the idea if you're sort
of into kind of epidemiology is this is kind of within
subject treatment, all right. So we're controlling for all the features of the drugs themselves
that are under development. The non-Genentechs, if
you will, all right, but we're looking sort of what
happens within those drugs. As a supplement, one of
the things I'm gonna do is use a linear probability
model, all right, which is basically zero
when the drug is continuing, one when it gets abandoned, all right. Just gonna run a simple generalized least squares regression on that and include a fixed effect
for each and every drug, of which there are about 15,000, so it's kind of a highly saturated model. And, again, that's gonna turn this into a differences in
differences estimation. And that's also gonna be a
within the subject treatment, all right. So here's what it looks like,
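Why a fixed effect for every drug turns this into a within-drug comparison can be seen with a toy demeaning step; subtracting each drug's own means is exactly what drug fixed effects do in a linear model. The panel layout and numbers below are invented for illustration.

```python
def demean_within(panel):
    """Subtract each drug's own mean shock and mean outcome from its
    monthly observations. With a fixed effect per drug, only this
    within-drug variation identifies the effect, so any time-invariant
    feature of the drug itself drops out.
    panel: {drug_id: [(shock, abandoned), ...]} followed monthly."""
    out = {}
    for drug, obs in panel.items():
        mean_x = sum(x for x, _ in obs) / len(obs)
        mean_y = sum(y for _, y in obs) / len(obs)
        out[drug] = [(x - mean_x, y - mean_y) for x, y in obs]
    return out

panel = {
    "drug_a": [(0.0, 0), (0.10, 0), (0.10, 1)],  # exposed to a 10% shock
    "drug_b": [(0.0, 0), (0.00, 0), (0.00, 0)],  # never exposed, never abandoned
}
within = demean_within(panel)
# A drug with no variation in shock or outcome contributes nothing:
print(within["drug_b"])  # [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
```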
I'm sorry, here's the data. So if you will, the dependent variable is, we wanna find out whether
companies are moving on with their projects toward further testing or submission to the FDA or
whether they're ditching them saying enough of this, right. We have about 14,000
projects under development between the mid-to-late
70s and December 2003, and these are followed monthly. So we've got about half a million observations in our database. The coverage is better after 1987 because this is a proprietary database, Pharmaprojects, right. This is a private company that's been following the
market for a long time that aggregates a lot
of these market reports. The coverage, again, gets better, and so one of the things we
wanna do is say, all right, let's only look at the data
after a certain amount of time and then change that, just to see whether our results still hold up. One limitation, and I'm sort of trying to get a grant to address this: this is all before the Vioxx tragedy, which by some estimates contributed to 20, 30,
40,000 excess deaths, things like that. There's an argument that the FDA got more procedurally conservative
after the Vioxx tragedy that I think needs to be tested, but we're not gonna see
that in these data, right. We have two different measures
of abandonment, right. One is when the company just
says, we're done with this, and they come out with
an announcement, right. Often companies don't
wanna say those things, in part because they wanna
sort of keep their options open and things like that, so
we have an implicit one, which is where this database reports no development reported, all right. Once that happens for two years, we go back and code it from
the time it originally started being coded as such and
say the drug was abandoned. We use each of these alternatively
and then we combine them. All right, so that we're not
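The implicit abandonment rule described here, two years of "no development reported" back-coded to the month the silence began, amounts to a simple scan over each drug's monthly records. A minimal sketch, with an invented list-of-booleans layout:

```python
def implicit_abandonment_month(monthly_active, window=24):
    """Return the index of the first month of a run of at least `window`
    consecutive months with no development reported, back-coding the
    abandonment to when the silence began; None if no such run exists.
    The 24-month window follows the two-year rule described in the talk;
    the list-of-booleans layout is an invented illustration."""
    run_start, run_len = None, 0
    for i, active in enumerate(monthly_active):
        if not active:                 # "no development reported" this month
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= window:
                return run_start
        else:
            run_len = 0
    return None

# Development reported for 10 months, then two years of silence:
status = [True] * 10 + [False] * 24
print(implicit_abandonment_month(status))  # 10
```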
dependent on any one given measure. We allow the effects then of
these shocks to be generic, which is to say applying to every firm, or applying to a firm
which is a rough competitor or an entrant into the therapeutic niche, say cancer drugs, central
nervous system drugs, cardiovascular drugs,
in which the bad events or the negative news for
one company happened, right. We're defining this class very broadly, this gets to your question
about the competitive effects. So one of the ways we're
gonna do that here, and we could do it much
more narrowly with kind of refined data on the mechanism of action. Right now I'm just gonna use the division structure of CDER. CDER, by the way, is the Center for Drug
Evaluation and Research. It is the FDA bureau that makes these decisions on the drugs. And so, one reason we might
wanna do that is because, to the extent that these
folks are making inferences about the FDA and saying, oh my goodness, the FDA is getting much
tighter, they're not just making a judgment about the FDA generally, but about the particular rule, the particular decision makers
in the oncology division or in the cardiovascular drugs division who may have changed
their standards and said, oh no, no, no, P less than 0.10 is no longer statistically significant, we're gonna say that's P less than 0.05. Or we're gonna demand
another different kind of clinical trial with another
different kind of treatment arm or control arm before we send
something onto the next stage or approve it, right. They might be making in other
words decisions or inferences, not about the bureau or
the regulator writ large, but about sub-regulators
within that bureau, right, which is one way of
actually thinking about possibly a way of kind of
quantifying agency reputations and sort of de-compartmentalizing
or compartmentalizing, decomposing the agency writ large. Go ahead. - I'm still not sure that you could interpret this
solely as changes at the FDA, this could just be scientific surprises. So we're doing a clinical
trial for a certain drug and you were hoping it will
work, but it didn't work, and that changed science and
the stock price plummeted for this company because
everyone thought it would work, but it didn't work, and
it's got nothing to do with how FDA validated it or in
some sense it's a mixture of, (indistinct) exactly, I don't know whether I
would interpret this solely as changes within the FDA. - Well, so it's always true, I mean, so here's the problem, right, is that every regulatory
decision is a decision about the merits of a given drug, right. Now if it's a decision about
the merits of a given drug, right, then we should clearly
see a within firm effect, which is to say Genentech got
this bad news about its drug, they should drop it there. It's not clear that that logic
extends to everybody else including outside of the therapeutic area. - [Man] Like same
mechanism of (indistinct) - Right, so that's, that's
exactly why we're doing this. If you're right, we should observe a lot
of class specific effects. - Or it could also be
like a financial shock. I think you're a VC and
Genentech stock plunges, you're like, I'm out of all biotech, I'm investing in cars instead. - [Dan] Yeah, you're out
of all biotech precisely because the FDA ruled
against your (indistinct) - No, but not because
the FDA rules against you because the science was bad, then there's a lot of hope
that biotech is gonna produce great medicine and Genentech trial fails, I changed my expectations
about biotech more generally. So this is bad science, I need to invest in nano
technology or something else. - Yeah, first off I don't think in sort of a general equilibrium
market, that's gonna happen. I mean, basically especially
with a publicly traded company, right, there's enough other people to say, look, there's a possibility here, and it's possible there's
gonna be an overreaction and things like that. To the extent that it's about purely, it's picking up purely like
a scientific development, first off, that's not
inconsistent with my story, right. Basically this is, the
science is being produced, but the science is being produced and judged by the regulator, by the regulator's advisory committee. So you can view this as
a scientific revelation in many cases, right, but again, this revelation
would not happen in the absence of approval regulation because we've already had the announcement of Phase 1, Phase 2 and Phase 3 trials. This is all after all of that, right. So it can't just be, it could be a further scientific signal, but it's a scientific signal
from the regulator, right, and I think that's the key. The other thing, again,
is, to the extent
mechanism of action, I'm not worried about like the whole world abandoning biotech. I'd be much more concerned about saying, look, in this market like
the FDA is being too tough or we've had this failure, we should see basically a high
degree of class specific action and not non-specific action. It turns out we're gonna see both. - And I think since you're basing this on The Wall Street Journal stories, maybe if you have someone
read through those stories and try to say how many of
these stories were about people complaining that the
FDA made the wrong decision or made a very strict decision. - [Dan] We do that actually. - Okay, I think that-
- Sure. So some evidence for (indistinct) these are very large estimates for when the FDA
has an advisory committee and the advisory committee
votes it down surprisingly, right. And that is consistent with the idea that it's not simply the FDA, but also the scientific advisors giving a negative judgment
on the drug, right, but again, that's not the only place we observe a lot of these. So if the FDA says, no,
look, we want another test or, no, we want a set of other things. And, again, remember, keep in mind, all three phases of clinical
trials have been completed for almost all of these,
at least two have, right. So it can't be just that a
clinical trial previously when... You're right that there
may be some revelation of scientific information still left, but again, that's only coming because we have this regulatory process. So here are the effects of
one of these shocks, right, and I'm just gonna generalize this to say, all right, let's just
imagine one of these shocks is a 10% drop in the
(indistinct) stock price. What happens to the hazard rate of abandonment for all other firms? That is to say month by month by month, what's the increased rate at which companies abandon their
drugs given that 10% shock? Now one thing I do here is, T plus zero is the month of the shock. So one of the things we do is actually we include some leads here, and that's a test of two hypotheses. One, it's kind of what you
might call a placebo test. The idea that these shocks should not be predicting something that
they really can't predict, which is abandonment ahead of time. And it's comforting in
this respect to know that, and by the way, these reds are the parameter estimates, these are 95% confidence intervals, both individually and
jointly these are zero, okay. The second is, this is
a test of anticipability. If these things could in fact be hedged ahead of time, you should see other companies adjusting their development
strategies in the months before. And again, this is
statistically zero, all right. Where one sees the effects is essentially beginning in the second month, and continuing roughly if you
want to sort of judge that as on the margin of
statistical significance, until about the sixth month. It takes time, in other words,
for these to filter their way through firms and their decision processes to make judgments about that. This is by the way generic, this is both therapeutic-specific effects and non-therapeutic-specific
effects combined, all right. Once you get out here,
there's just enough noise that there's really
just not much going on. If I run that linear probability model I talked about earlier, okay, this is a little less interpretable. Basically, if you will, this is, what's the change in the
probability of abandonment? Again, we have to adjust the things. It's, for lack of a better term, essentially the same results, although with a little bit less statistical significance we get these two, T plus two and T plus four. If you actually compare these two, they have essentially the same shape, even though basically
nothing going on early, right around T plus two to
T plus four arise, oops, and then down to where
there's just a lot of noise, all right, which is
comforting in the sense that basically the linear
probability model relies heavily upon these fixed effects to generate a within-subject treatment. So, for the linear probability model, it can't be any feature of the drug that's currently under development, right. It has to be only the shock that's generating this
response, all right. And, again, notice that the
lead values are all zero. So there's not anticipability here. If I, again, just get rid of all the leads and everything past six months, things bounce around a
fair bit more, all right, but the average of this is quite positive. If you will, each 10% shock, if I integrate over these distributions, each 10% shock leads to about
four to six drugs abandoned in the six months following, okay. We can't say that those drugs would have eventually become approved, we can't say that they would
become useful treatments, so that they would have
been marketed well, all we can say is they have
an increased probability of the firms themselves pulling
the plugs in response to that. Okay. So now if we look within
therapeutic category, we look at this division chart, these are the therapeutic
categories we're going to use. Essentially there are 14 and not 15 because this one is OTC, over-the-counter drug products, and we're not looking at those. So it could be skin and dental, it could be antiviral, it could be anesthetics, it could be pulmonary, things like that. Some of these names may be recognizable. Robert Temple is one of
the most influential people in the history of 20th
century pharmaceuticals. Again, he's got a, now a kind of a top level deputy commissioner post, but at this point he was the head of one of these drug reviewing divisions. This guy is often very controversial, is often taken to task in The
Wall Street Journal editorial pages as being sort of a
drag on cancer treatments. And so some of these names
are kind of very well-known. If we look at the effect of the 10% shock within the therapy targeted, we get stuff that's very similar to what we had. It bounces around a bit, but very much similar to
what we observed before. The second thing we can do is say, well, what happens when
we kind of break these events down by what was happening? So let's just examine five categories. And for those of you who do work in statistical text analysis or coding or content analysis, this would be a great application
of those kinds of methods. Basically look at what
kind of decision this was and try to classify it. It could be a case where a company abandoned the drug on its own and cited FDA regulation as a reason for doing so, so we code that separately; it could be an FDA request for more data; it could be an advisory
committee voting and saying no; it could be the FDA saying, we're not ready to make
a decision on this yet. Okay. The one category that doesn't seem to have an effect is the FDA saying, we're just rejecting this, all right. Now remember, one reason it might not have an effect is because this is probably the easiest one to anticipate, where the FDA on the deadline says, we've made a decision up or down, and the firm has been kind of communicating, oh, we're not getting great signals from the FDA, so it's not surprising essentially that that's zero. Technically it might be statistically significantly negative, but I don't put much in it, all right. The biggest effects are from when an advisory committee suggests no. There's a bunch of reasons for that, I would bet or hedge. Number one, that's the first
read on the FDA's thinking, from an outside committee which is going to advise the FDA after these phase trials, all right. Sometimes today there's a public report released by the FDA reviewer or the FDA review team in advance, but in the period we're dealing with, that report was often released at this meeting, right. So there's a whole bunch of
things that are folded in here. Second, this is a sort of a judgment, not simply about what the FDA thinks, but what a panel of sort of
independent cardiologists who advise the FDA thinks. So this gets in part
to your question about to what extent is this
a signal from science? Well, again, it's both, but here again it's where we're
letting the sort of advisors speak a little bit independently
of the FDA as well, right. It turns out that a fair amount happens just from the cases where the FDA says, we're not ready to make a decision on this yet. And it's tough to figure out the reasons for that: it could be that we'd like more data, we think it looks good but we'd like more proof, a bigger sample size, a smaller confidence interval, or we're just not ready to make a decision yet. So it could be, the mail room isn't working, we need a plumbing repair on floor three, something like that, but that also generates a higher degree of company abandonment. And other companies abandoning and citing the FDA as a reason, or citing regulatory factors as a reason, also leads to about a 4%
increase in the hazard rate. These, by the way, are summed across seven months: the month of the shock plus the following six months. And here's what we do if we
binarize the treatment, right. So this is where we've
taken that stock price shock and we purge it, all right. And then we say, all right, we're gonna assign a one if it drops by more than 3%, and a zero if it doesn't drop by more than 3%, and then we're gonna sum across 12 lags. And essentially most of this effect is occurring within
therapeutic categories, right. And that's a very large hazard ratio because that is being
multiplied month by month across firms many, many times over. So now if we take these
as kind of our evidence, we're talking about 30, 40,
50 drugs getting dropped after one of these events
and not just a few, but keep in mind that some
of this is also occurring generically, which is to say
outside of therapeutic class. So you can't ignore the fact that some people are making inferences, not just about what the FDA
oncology division is thinking, but about the FDA writ large, right. This is specifically coded so as to say, all right, an oncology drug goes down; what is the reaction of people developing cardiology drugs or infectious disease drugs? All right. And this is a case where
we actually control for a few other things, right. So what do plausibly abandoned drug projects look like? Well, essentially we take those with a shock and look at what happens two periods afterwards, all right, which was one of the statistically significant parameter estimates that we have. So we can't know whether these in fact were caused; we can just say it's consistent with the causal story: these would be predicted to have a higher level of regulator-induced abandonment, okay. So it turns out that over
95% of those abandoned are in Phase 3, which from an efficiency
standpoint is bad news. If you wanted these to be abandoned, you'd like them to be
abandoned early before all that capital is sunk in, right. Now I can't say whether over 95% of drugs that are abandoned are in Phase 3 because we don't have great
data on where these things are. And the further they go in the process, the more likely they are
to be reported at all. So all I can say is for those drugs for which we have phase data,
Phase 1, Phase 2, Phase 3, 95% of these are in Phase 3, but that's highly, highly selected because if you get to
Phase 3 in this database, it's much more likely that
the people who put this database together are able to
report that you're in Phase 3. What you can say I think though is that a fair number
of these are in Phase 3 and are dumped, right. We can't say that 4,000 drugs were dumped because of regulatory factors, right. We can just say that among those that occur right after these events, two to four months after, a high number of those for which we know the phase seem to be Phase 3, all right, and we have to dig some more into where that is. Most of these are, again, implicit abandonments rather than explicit ones, but if you look at the paper, which I can send you, or even at the previous slide, you get very similar results whether you focus on explicit abandonments or implicit abandonments, all right. So choosing one or another of those measures actually doesn't seem to affect much the results that you get from these estimations, which is somewhat comforting. So to conclude on this part, well, I think this is still speculative. I mean, one thing I'd
like to be able to say is give you a harder estimate of, well, when one of these things happens, the following number
of drugs are abandoned, and they're abandoned in this
phase and things like that. There are some limits on the data, which I think will prevent
us from ever being able to do that in a fully satisfactory manner, but one can try. It's also important to say
that this is not an evaluation of what happens in response
to regulation generally like the issuance of a new rule, but the issuance of a
regulatory decision, all right. And that points I think to
the difficulty of measuring the overall effects of a policy because regulations usually
come in bundles, right, and regulatory decisions
usually come in bundles. So you say, well, let's evaluate the effect of this regulation on Y. Well, what part of the
regulation are you picking up? Because the regulation is
probably a statute, right, or a rule with seven different components. And is it component two or component five? So there's a lot of debate right
now about what's the effect of the Dodd-Frank Act
on the financial realm? Well, the Dodd-Frank Act
is prudential regulation, which is to say regulation of large banks, it's regulation of credit rating agencies, it's regulation of the
home mortgage market, it's the new Consumer Financial
Protection Bureau, right, it's 20 different things. In fact, really it's more
like 20,000 different things going on in that bill, right. And so, assessing the effect of a piece, of regulation writ large or
a piece of regulatory statute is very hard because these
things come in bundles, and it's very difficult to disentangle one part of that component from the other. And so the more you focus up, the more you basically give
up in terms of granularity. The more you go in terms of granularity, the less you're able to focus
on regulation writ large. I don't think this problem
is fully escapable, right. I don't think it's possible to just say, well, there's a strategy out there that will allow us to speak
about regulation writ large and also to have this
kind of granular approach. This is what I think at some
level political scientists can teach to those who
wish to evaluate policy, policies come in bundles,
and it's hard to disentangle one part of the bundle
from another, right. It's difficult also to
draw policy conclusions, again, all I can say is
that firms are more likely to pull the plug on these projects. I cannot say, right, that these
projects were of high value. We might be able to follow later on in some of these
therapeutic areas and say, were there cost beneficial
new products introduced? What happened to morbidity, mortality, some public health measures in these areas where there were more surprise rejections? We might be able to follow that, but I haven't done it today. And, again, the more you
start to sort of take into account some of these
therapeutic area specific measures, the more you're beginning to
sort of introduce other areas, which can contaminate, right. There's no way of knowing essentially what the health effects would have been, in other words had these
things gone to market or what the economic
profitability would have been had these things gone to market. That said, this method does open the black box a little bit, all right. We know that it's not simply
the FDA rejecting a drug that might lead to less innovation, which is to say the FDA making a decision, no on something that's sent to it, but the effect that that is
having on firms' own decisions not to continue their own
product development processes and not to seek approval for
those projects later, right. It's also potentially generalizable. In theory, if you can find regulatory enforcement decisions in other domains that focus on firms, you can compute what happens to those firms, whether they're going up and down, right. And then I think this is the key, can you get a large database
with high granularity on what other firms in
that domain are developing? Energy development projects, right, consumer financial or
systemic financial innovation. It's really that dependent-variable kind of data that one needs to be able to evaluate; the stock market data, and in some cases the regulatory enforcement or decision data, is always there. What you really want is a
high granularity database at the level of firm decision-making to be able to evaluate
what happens with R&D. So I'll conclude there and open it up to questions as you like. - Have you done similar research with devices, medical devices? - I have a graduate
student who is doing that, and I may end up joining
her on that project or not, but that's exactly one of the
things where that's occurring. Yeah. - How much control do the directors of the different (indistinct) have over, like, what thresholds they're using? I was just wondering if you have data on who is in charge and whether they have a reputation of being (indistinct) or something (indistinct) - Yeah, that's a great question. So actually the woman
who just asked a question has a copy of my book there. Thank you. And one of in the historical period, in historical work that I do, I describe this process of sub-delegation. So in theory this power of
veto is given to the secretary namely Kathleen Sebelius,
but in the 1960s and 1970s, it kept on getting sub-delegated, to the point that you've got career bureaucrats making these decisions now in a way that's almost never overturned by higher levels. The only case recently
and we talked about this at dinner last night where there's been an overturning was the Plan B decision, when Obama and Sebelius basically turned down the approval of Plan B for over-the-counter status. But that's the exception in some ways, although I worry about the precedent it might set for that kind of overturning. It is possible, and I
started on it a long time ago and kind of gave up: if you can get approval time data, you can net out the effect of different reviewers, basically by computing a fixed effect for each reviewer and then just examining the fixed effects. And I just never went very far with it, but I've got all my data
from this book online, not all of it, but a lot of it. And if you went to it, I could probably give you some others. We basically coded the entire
CDER employee directory from the eighties through the early 2000s, so we have like 5,000
employees in this database. And you can see in many cases which one of them did the review, what the review team composition was. And you can net out the
effect of a division director and things like that. That assumes, of course,
that you're controlling for everything else that
might be correlated with that. So in theory that's possible, but I never went so far with
that as to do it in part because there's a lot of missing data on who made the decision in this
case. It'd be easier to do in more recent years because the FDA is actually
pretty good on the whole, given the limits of the
Freedom of Information Act about putting a lot of this data online. - So, Dan, (indistinct) story
about these approvals being kind of an endless process, but we don't get the signal that, (indistinct) point specifically. Is there anything other
than insider trading that could signal that
(indistinct) to the market, right? Because now you're talking
about the financial markets, is there any bleeding
through congressional committees or anything like that? I'm just kind of (indistinct) - So actually, I mean, two things. I mean, there is at some level
kind of a continuous flow of information at least about the way these things happen. I'm not worried about that in terms of internal validity because, again, that gets priced in. So I'm looking at what happens the day of, what happens the day after. That's another reason for
focusing just on that one-day shock, but I think there's a
more interesting process by which some of this gets to... So one of the other things
that you could actually do, by the way, is look at what happens to other firms' stock values right after. So I've looked at what other firms do with their development decisions; you could look at other firms' stock values. The problem is that could be responding to a lot, and in fact not least the regulatory decision itself: I'm a competitor in this market. Maybe it goes up because
now there's space, but more likely it probably goes down because they have to pass
through the same gauntlet. Now hearings, I'm not so concerned about, but there's constant communication between the firm and the agency. And so what gets released to the marketplace, things like that, I mean, what we do know is that in theory the review team's deliberations are lockbox. It's only at, or today just before, an advisory committee meeting that the review team's memos are put online. There's often a lot of
movement right there. If we had more recent data, we might be able to kind of exploit that. The clinical trials are lockbox
for a number of reasons. One is blinding, right, so you can't inspect
the data halfway through and say, does this look
like it's going well or look like it's not? Although if this idea for more Bayesian clinical trials takes off, you might see more of that,
which could actually create some interesting problems
with insider trading that I really hadn't thought about. Yeah, that's interesting, but at least again, the
more traditional model now that's lockbox, and
that's in part FDA regs, but it's also human
subjects and blinding regs. There's a lot of
communication that goes on between these review teams. And, again, the problem is, if you're a company person and you're holding stock and you're privy to some of this. The one advantage that the SEC has is, it knows who is privy to
that information, right. So it knows who has access
to the database at the FDA, and it knows all the
people at the company. And you will see people at these companies getting hauled into court
and sometimes put in jail for having heard bad information and then going and selling the stock, or having heard good information ahead of time and going and buying the stock, trading one way or the other. But, I mean, it's a little bit more continuous than I'm stating here. There are some huge discontinuities, but it is a little more continuous. - I wonder what is the most
reasonable causal mechanism behind this. Say, one analogy I could think of is that among academics, if (indistinct) the paper is rejected by a journal, it reduces my (laughs) my urge to submit it to the same journal because it lowers my expectation, but in your case maybe the problem is there's only one journal, either you do it or you don't, right. So you have no (indistinct) journals with something too. So is that more a psychological event, or would that be more like what the gentleman referred to, kind of revealing some kind of underlying scientific promise of a certain mindset of thinking. - [Dan] Right. - So what would be your take on that, what would be the actual
causal mechanisms behind it? - Well, I actually think that not all this is purely
rational expectations, right, but for my story to work, this doesn't have to have full rationality. To the extent that people are kind of scared off by the FDA, perhaps irrationally, when they should have continued on, my story doesn't change, because it's a story about the effects of policy. And I do think actually, this mainly comes not from
the quantitative research, but years of looking at
these industry trade journals like the pink sheets and
other things like that, there's a lot of fear in this industry because they recognize
that they're sort of in front of the all-powerful regulator. And even though we like to tell stories about the pharmaceutical
industry dominating the FDA, that's number one, a more
recent development where the pharmaceutical industry
has had that kind of power. And number two, firm by firm, these companies are still
very afraid of the FDA and these drug reviewers
and things like that. So I think actually a lot of this is basically being scared off. Some of that fear may be
irrational or inflated and some of it may be rational, which is to say we think
things have happened here. It's hard to really nail the mechanism. I think part of it is
exactly what he is saying, this is a revelation of science. I actually don't think, again, that's inconsistent with the story because that revelation
wouldn't be happening but for the regulatory process, right. In other words, if you just required everybody to go through three phases, announce those phases, and then go to market, you wouldn't be seeing these effects, because the three-phase trials are already being priced in once we've seen them, right. But I think a lot of this, precisely because it's happening both within therapeutic class and outside of therapeutic class, is judgments about the FDA. What I can't say here, although
I think I probably could with a little more confidence
with some more data is whether this is the FDA raising the bar or more uncertainty
about where the bar is. And I think that's an
important policy question. My sense, again, just eyeballing the data
is that it's some of both, and I'll probably need to do some auxiliary tests to nail that down, but both of those are important questions. You can make an argument that
from a policy standpoint, you might wanna have
higher bars or lower bars at certain points, but it's arguably always better to know where the bar is, to have less uncertainty for the industry and for
science and things like that. Although there's an argument that ambiguity can also serve purposes, because it doesn't allow the firms to game the system as much. It keeps them kind of
on their toes as well, but I do think it's being scared off whether it's by uncertainty or by the bar changing, that is probably the mechanism here. - So this is (indistinct)
question that this is in the context of (indistinct) but is there evidence (indistinct) they're related to other markets (indistinct) structured for? - So here is, again, the
problem this gets to Yan's nice point about there being
one journal editor, right. One reason that the FDA is so powerful is because it controls access to the most profitable pharmaceutical
market in the world. So, yes, if you want, you
can go introduce your drug to the European market, but it's gonna be price controlled. It's gonna be in a country
that's not as rich, and it's actually not aging
as fast as ours, right? So we have high
pharmaceutical consumption, basically zero price controls
at the margins with a couple and maybe with Medicare Part
D, in the future we will, but what makes the FDA
so powerful in this world is precisely the fact that
it's a stringent regulator in a world where price
regulation is not stringent. Gatekeeping power, in other words, is directly proportional to the price of the market that you're keeping aspirants from, right. And the FDA doesn't control the fact that there aren't pricing
regulations in the US and it doesn't control
the political economy of the United States, but its gatekeeping power
benefits at some level from these other factors. Last question (indistinct) - If there's one more question we might have time for, if not... - Yeah, go ahead. - So I think coming back to
this point about mechanisms, one thing I was struggling with was, you've convinced me that these changes were exogenous, that they were like a random shock and were a big change. - [Dan] Mm-hmm. - But it's (indistinct) to know whether they were just transient or more permanent, permanent in the sense, was this just one reviewer on the review committee rejecting a trial (laughs) or was this a change in the FDA's stance about their threshold? - So I can say that,
yep, but we could, right, because I could say, all right, was this followed by later decisions, right. - And I don't know how to, those later decisions are more complicated because, as you said, because of endogeneity, now you know there's a higher threshold, you don't take drugs to the FDA, which then are gonna- - Right. - [Man] (indistinct) think all that- - But once you begin to analyze
the process in this way, you can at least open the door to answering some of these questions. - Yeah. - But I agree I haven't done it yet. I mean, essentially what we wanna do is trace not only the decisions
as weighted by shocks, but a series of decisions
themselves and say, is there a pattern here? And at some level that's kind
of descriptive in its bones, but I think you kind of need to do it, step away from the internal validity church for a minute, and then focus more on descriptive features in order to get there. And that, again, allows
us to go a little more from regulatory decisions
to regulation writ large. - [Anthony] That's great. Right, well, let's thank Dan. - Thank you.
(audience applauds) Thank you.