- Okay, well Jason set me up nicely and Miles and Joyce, thank
you, because you guys also set me up nicely. - [Woman] You paid us well. - Right. Right, right. So the payoff worked, right? So what I'm going to talk
about is abductive reasoning. How many people here know
what abductive reasoning is? Really? Okay. Because when I first heard about this, I started working with AMD
right from the start, before anyone really had any idea what AMD was supposed to do, and that includes the board of governors that approved it. Andy Van de Ven threw out this idea of abduction, and I had the hardest time understanding the connection between science and kidnapping. (audience laughs) So it's nice to see that people
have an understanding of it. I'm no philosopher. I don't know the first thing
about Aristotelian philosophy, which is where this comes from, actually, this notion of abductive reasoning. So that should be up front, right? I'm not an epistemologist. I'm not a philosopher, that
stuff puts me to sleep. I had to learn it real
fast when I was appointed editor of AMD, just so I
could speak intelligently about what we were doing and
try to get an understanding of what I want to do with this journal. But I have some experience with problems in science, not publishing, problems in science and management, from my experience as an AE at the Academy of Management Journal. And the experience that really shaped my orientation towards science and what I want to do with AMD was the paper by Dave Hekman. And I'm sorry if any of you have ever heard this story before,
because I tell it frequently. Because it was an important experience in my life as an academic. So you've seen this paper,
Dave Hekman wrote this paper with some other people. It has three studies in it, all using different
methods, looking at bias, gender and racial bias
in customer assessments. Customer service, you know, the calls you get after you finish a call with a call center? And really robust, powerful findings about how customers are
biased in evaluating employees and what the implications
of that could be, obviously. And across these three
different approaches, three different levels of analyses, consistent findings,
really strong findings. Sent it out to reviewers. Really nice consistent
feedback from the reviewers. Reject, reject, reject. And I was stuck because
this was important stuff. Where was he gonna publish this? And this needed to be in AMJ. So I did exactly the
opposite of what Jason was talking about as an AE. And I started thinking about
where can I get a theory for these guys, and where can I get it fast? (audience laughs) And my decision letter, and I did this a couple of times as an AE with AMJ, my decision letter
was, here's a theory and here's how I actually
think you can retrofit your paper around a theory and actually meet the criteria so that
we can get it through past these reviewers at AMJ. So they listened to me, I don't know why, because this is not my
area of research per se. They listened to me and
they kind of took my ideas as a basis and expanded beautifully on it. And we managed over the
course of several revisions to get this by the reviewers. And it eventually got published. And get this. It won best paper in AMJ for that year. So, start with theory. There's no question, I
completely agree with Jason. But there's also this
question of what risks come with starting with theory? Especially in a context
where, and I agree with Miles here, the incentive structures are such that after you invest so much time in starting with theory and collecting data, sometimes years, not to mention thousands of dollars, sometimes millions of dollars, it just doesn't work out. Then what do you do? And that's where we get into
the problems that we're seeing. And that's what this
whole conference is about. My argument is that with all the various solutions, methodological solutions, some of which I understand, most of which I don't, that we can come up with to help us solve our problems, these are just new solutions that could be gamed. And we're not necessarily dealing with the roots of the problems in our science, which may actually provide the real solution to the problem that we're talking about. So it's very consistent,
Miles, with what you're talking about, how can we solve the root problem rather than just solving
some of the symptoms? And my argument is that when
we think about the reasoning behind the science, we may be able to find a solution, a partial solution, perhaps to some of the problems that we're seeing. Essentially, when we do
theory-grounded research, we're building on this hypothetico-deductive model, which is how we've all been trained. And the focus of that model is rationality and certainty and objectivity. And we kind of build on
these two modes of reasoning, which are induction and deduction. Now, our notion of induction
is a little bit off. I'm gonna be talking
about classic induction, from an epistemological perspective. From a philosophical approach, what induction really is, as opposed to what we call induction in management. But these are the two
approaches that we use. This is all gonna be
familiar to all of us, but this is kind of putting what we do in philosophical terms. It's kind of cutting off on
the bottom there, isn't it? Oh well. So normally, what we do is
we take a general principle that we come up with on the basis of logic or prior findings or a
combination of the two, and we argue that this general principle is going to be applicable
to a population, right? That the principle should be
applicable to a population. So if we draw a sample
from that population, then we should be able
to find some prediction to hold true in that sample. And this is, I would suspect, about 80% of what we publish in management. It's this kind of model. Trying to predict what
we're going to be finding in a population by looking
at a sample that's randomly drawn from that population. This is what we do. Or what we think we're doing. But ultimately, the goal of this is to say with certainty that we can confirm that whatever this general principle is, it holds. So it's a dichotomous outcome. Either it holds, or we've just basically shown that no, it doesn't.
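[Editor's note: a minimal sketch, not from the talk, of this hypothetico-deductive template in Python. The populations, effect size, sample size, and significance threshold are all invented for illustration.]

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical general principle: group A scores higher than group B
    # in the population. The effect size here is made up.
    population_a = rng.normal(0.3, 1.0, size=100_000)
    population_b = rng.normal(0.0, 1.0, size=100_000)

    # The principle predicts the difference should appear in a random sample.
    sample_a = rng.choice(population_a, size=200, replace=False)
    sample_b = rng.choice(population_b, size=200, replace=False)

    # Dichotomous outcome: the prediction either holds in the sample or it doesn't.
    result = stats.ttest_ind(sample_a, sample_b)
    print("principle supported" if result.pvalue < 0.05 else "no support in this sample")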
So this is mostly what we do. These general principles have to come from someplace, and often it's, like I said, logic, prior findings, or a combination of the two. We also like to talk about
going the opposite direction and this is useful in coming up with these general principles. We can look at a sample and
we can go from the sample to the population, rather
than what we just talked about in terms of deduction. So going from the sample to the population with induction, we're actually
gonna be drawing inferences from the sample, which we then may generalize as a general principle to the population. Make sense? Now, when you look at it from a philosophical perspective, most of this inference, in classical induction, not what we call induction in management, is still framed on the basis
of some over-arching theory. So if you look at most
of the work in management or the social sciences using induction, the investigator is still
going into that sample with some sort of over-arching
perspective or theory. They kind of have an idea
of what they're looking for. They want to get a better
sense of how this works. What are the mechanisms? What are the boundary conditions? But they're still going in
with some sort of a framework. So Jeff and I were talking
about this a little bit this morning and Jeff was saying, "Well, I've got control theory, "and it's this nice broad theory, "and I can take it in a
million and one directions." And this is what we would
do with inductive reasoning. We will take this broad
theory and we'll say okay, in certain situations, how does this work? What's the probability, and
this is what I'm gonna try to understand from the
sample to generalize back to the population. What's the probability
that it works this way in this particular type of context? So this is very much,
Steve, in terms of what you were talking about earlier, the context notion, how
do we actually get a sense of how contextual factors affect the general principles
that we're talking about? This helps us go from broad theories down to these specific models. Great way to do it. But think of what we're doing. We're still starting with
an over-arching approach. Now, these two approaches,
I would argue in most cases allow us to do what we want to do in management. But they kind of limit us when it comes to new things that may
be emerging around us. New phenomena, new
patterns of relationships. And these things are constantly emerging, whether it's because of changes in society or politics or technology, you name it. And if we're approaching
these new phenomena with these set ideas, we have a limit in terms of the types of
discoveries that we can make. And science is all about
these discoveries, right? This is the big stuff. So this has to take a
slightly different approach, I would argue. I hope it doesn't feed back, because it's been left on talk for a second, but okay. We'll see. So this new approach, which
is not a new approach at all, was proposed by a guy
named Charles Peirce. Charles Peirce, I was
actually reading about him this morning, because I
kind of got interested in who this guy was as opposed
to just reading his stuff. He's an American pragmatist. He's actually the father
of pragmatist philosophy. Grew up the son of a
Harvard mathematician, became a Harvard mathematician and philosopher himself, then he went broke. He literally went bankrupt. Really interesting character. But, looking at Aristotelian philosophy, he proposed that neither deduction nor induction can actually lead us to new insights. We need a different approach
which is empirically driven. Driven on the basis of
empirical observation. And this mode of reasoning gives us, first, what he called first suggestions. It's only through the data,
through these observations that we can get first suggestions. We're not confirming anything. We're just getting ideas. And based on what we observe,
we can provide explanation. And that explanation may show a link between existing theory and it often does and what we're actually
observing for the first time. So there may be no reason
to start thinking about new down the road theorizing. Or it can suggest that maybe we have a pattern of relations here
that's very, very different from what we thought about in the past in terms of existing theory. That we need to expand upon further. And what we see manifest in the data is giving us some clues as
to how it may perhaps work. So these are two different approaches of how we can go from
the data to the theory. It's not induction,
because remember induction is starting with a pretty
strong pre-conception of where we want to go or what
framework we're working in. This is a first suggestion. How does this work? Well, we do this all the
time in other fields. So in other areas of science,
whether it's physics, or medical science as
I'm about to show you now from a premier medical
scientist named Dr. House. (audience laughs) This is how these fields work. We see this in AI. This is the basis of AI. And there are real risks in AI, because Google is always coming up with these new weird relationships, of how drinking more coffee, as you're sipping your coffee, is related to, I don't know, all sorts of psychosis. Don't worry about that. (audience laughs) And we have to check that out, but there's firm evidence
in these big data that that's the case. But this is sort of the model
that I'm gonna be proposing to you. How we can look at phenomena
and use abductive reasoning, sort of this contrastive approach, comparative approach, testing
with sort of mini hypotheses what I call deduction in
the service of abduction to try to get closer to
what an explanation may be and to create new knowledge. So here's how it works in medical science. Okay. There we go. - Fever's 106, she's
in full rejection mode. - Is that supposed to surprise me? - Her white count is normal. - Normal is not normal. She's been on steroids, transplant team gave her a cocktail of immunosuppressants. She hasn't slept in over a week. Her white count should be in the tank. - Looks like the problem
is some sort of infection. Probably caused hypotension,
shocked the liver. - [Peter] Mini hypothesis. - We should start broad
spectrum antibiotics. - Yeah, you might want
to add some chicken soup. It'd be just as useless,
but it's got chicken. - [Peter] We better go
on to something else. - We need to know exactly
what kind of infection we're dealing with. What infection causes
sleep disturbance, bleeding, movement disorder, organ failure, and an abnormally normal white count? - What about tularemia? - Chest was clear. Tularemia doesn't cause
movement disorders. - It would if she developed meningitis. - There were no ulcerations on the skin. With the bleeding, it looks
more like leptospirosis. - Without conjunctivitis
and elevated creatinine? - [Peter] Alternative explanation. - What about typhoid or some
kind of relapsing fever? - Makes sense, if we were in the Sudan. - We sure she hasn't
been out of the country? - She hasn't even been out of
the state in at least a year. Neither has Max. - Maybe she lied. You talk to her friends, neighbors? - You don't know? Come on, if you don't stay
up to date on my notes, where's your next article
going to come from? - You talk to the dog? - We're not as up on foreign
languages as you are. - Has the dog been traveling? - It came from a breeder. - Where? - I don't know. A place called Blue Barrel Kennels. They only had the thing for like two days. - Blue barrel is a kind of cactus. You see many cacti in Jersey? - [Peter] He already has some sort of idea from the data after ruling out a bunch of alternative explanations
where this may be going. Now comes some testing. - Want to see a magic trick? Oh, no. Where'd it go? Where'd it go? Is it here? No. What about here? There it is. Oh, that doesn't look
anything like a nose. - [Nurse] That wasn't there this morning. - Get that to the lab and call CDC. - And tell them what? - That we have a patient with the plague. - The black plague? - [House] Looks that way. - Plague is carried by rodents, not dogs. - Where there's dogs, there's fleas. And if they hail from the southwest, then those fleas can't tell the difference between prairie dogs and puppy dogs. A small percentage of plague cases present with sleep disturbance. - [Peter] It's all coming together. - Imagine an idyllic river of bacteria. Okay, it's not idyllic for
her, but it serves my purposes. The steroids and immunosuppressants acted like a big honking dam
across the river, physics 101. Put a dam up in front of a
raging river, the river rises. By stopping the immunosuppressants,
we blew up the dam and a 100-foot wall of bacteria
flooded her lymph nodes. - We better find out
where that dog is now. - After we start the immunosuppressants and fill her up to the eyeballs with streptomycin sulfate,
gentamicin, and tetracycline. Use a garden hose if you've got one. Get yourselves some
prophylactic treatments as well. - I've got the plague? - Don't worry, it's treatable. Being a bitch, though. Nothing we can do about that. (Peter laughs) - So that's House, okay? But you can see the sort of process that we actually see written
up in medical journals. My wife is a physician
and medical researcher, and there have been several papers that she's published, in very prestigious journals, that are case reports, where she actually describes this process. We thought it was this,
but this wasn't consistent with these observations, which suggested that it could be something else. We ran these tests, they were negative. But that suggested another
direction, which we took. And it's a process of,
again, these mini hypotheses which are tested and ruled out. There's no theory from the start, but in the end, they come up with some sort of an explanation.
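[Editor's note: a minimal sketch, in Python, of this "deduction in the service of abduction" loop. The candidate explanations and observations are loose stand-ins inspired by the House scene, not real diagnostic criteria.]

    # Candidate explanations, each with the observations it predicts.
    candidates = {
        "tularemia":     {"movement_disorder": False, "normal_white_count": False},
        "leptospirosis": {"movement_disorder": True,  "normal_white_count": False},
        "plague":        {"movement_disorder": True,  "normal_white_count": True},
    }

    # What was actually observed in the case.
    observed = {"movement_disorder": True, "normal_white_count": True}

    # Each mini hypothesis is tested against the data; explanations whose
    # predictions fail are ruled out, narrowing the plausible set.
    plausible = [
        name for name, predicted in candidates.items()
        if all(predicted[symptom] == seen for symptom, seen in observed.items())
    ]
    print(plausible)  # whatever survives is plausible, not certain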
So now the idea is, how does this apply to what we do in management? Because this is what we see in other sciences as well. We hardly do this in our field at all, which is unfortunate. Or we actually do do
it, but it's disguised. We only publish the end
part of this process, none of the stuff leading
up to it is ever discussed. So if we think about these three different logics or modes of reasoning that we use, implicitly or explicitly in management, there are three of them. The one that I think is underrated but actually serves as the
basis for most of what we do is that third one, abductive reasoning, which gives us the
weakest knowledge claims. But I think that's largely what we do. So even though we're saying
we're confirming with certainty, from our findings, some general principle that we see evidence of in a sample, largely what we're doing is saying, "Okay, I saw some evidence of this and I think this is plausible. This is one possible explanation, and we need further research to see if this is really the case and where it fits and where it doesn't fit." The idea behind this is
that there are differences in terms of how we use theory. Certainly, when we talk about deduction, we're talking about theory
as this general principle that we're using as a basis for
disconfirmation of the null. The certainty element. When we talk about
induction, we're talking about a focus more on
probability than on certainty. And again, with abduction, we're talking very much in terms of, we're
in the realm of plausibility. And giving us insights for
down the road theories. What are the patterns that we observe that may enhance our understanding and help us resolve
all sorts of anomalies, all sorts of discrepant findings that we may see.
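[Editor's note: this three-way contrast is usually glossed with Peirce's bean-bag syllogisms; the schema below is the standard textbook rendering, not something quoted in the talk.]

    Deduction (certainty):    Rule + Case   => Result
        All beans from this bag are white; these beans are from this bag;
        therefore these beans are white.
    Induction (probability):  Case + Result => Rule
        These beans are from this bag; these beans are white;
        therefore all beans from this bag are probably white.
    Abduction (plausibility): Rule + Result => Case
        All beans from this bag are white; these beans are white;
        therefore these beans are plausibly from this bag.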
So if we're using this approach, a lot of the concerns that we've talked about suddenly become less concerning, whether it's harking or some
of the many other things that we've discussed. So, now that we've established that AMD is really the journal for empirical exploration, and for research that's grounded on abductive reasoning as well as induction, but mostly abductive reasoning, I want to talk a little bit more about the different forms. I want to talk about some of the ways that we actually see this
being used in management. From my limited experience with it, I've already identified two main forms and a couple of different approaches. So the two main forms, and this is going beyond Charles Peirce,
this is now Peter talking. The two forms that I see,
and I put labels on it that might appeal to some
of the strategy people here. The two forms that I
see are, the first one is exploitive abduction, which is looking at phenomena that are
new and trying to find some theoretical explanation for them in the context of existing theory, which I think is preferable considering that
we have so many theories, most of which have never been tested. It's likely that if we pull Jason's handbook of theories off the shelf, we will find
some theory that we can use to explain whatever new
phenomenon we're seeing. But again, that's not all that different from what House was doing. Because bubonic plague has
been around for a while, and there's some sort of theoretical framework for that. We know what the mechanisms and the observables are and how that disease emerges. There's nothing very new there. So it's a matter of
looking at this phenomenon that we can't explain, on the surface, and trying to find a link
between that, perhaps, and some existing theory. And I'll give you, I'll
show you an example of a paper that does this beautifully that we actually published
about a year ago in AMD. The other approach is this
explorative abduction. I'm still getting used to the terms. This is more of the classic kind of Peircean approach to abduction, where we're exposed to
some surprising findings. We didn't expect to see this. And this was a really nice thing to be able to do, when you
spent a million dollars in a study and nothing pans out. All of that a priori theorizing
that went into your study, that Jason was talking
about, doesn't work. And you got this big data set. And you've got to justify the time and the effort and the money. And you start looking at the data set and wow, this is kind of weird. There's something weird
here and interesting that I can't explain. Nor can any real theory explain. And you start putting
in one control variable after another control variable. You start playing with the subsets of the data; no matter what you do, it's still there. Something is going on. No matter what you do to get rid of it, it's still there.
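[Editor's note: a minimal sketch, with made-up variable names and simulated data, of what that kind of playing around looks like in Python: re-estimate the surprising x-to-y association with the control in and out, and within subsets, to see whether it persists.]

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the big data set from the original study.
    rng = np.random.default_rng(0)
    n = 1_000
    df = pd.DataFrame({
        "x": rng.normal(size=n),
        "control": rng.normal(size=n),
        "group": rng.choice(["a", "b"], size=n),
    })
    df["y"] = 0.4 * df["x"] + 0.2 * df["control"] + rng.normal(size=n)

    # The surprising association, with and without the control variable.
    for formula in ("y ~ x", "y ~ x + control"):
        print(formula, round(smf.ols(formula, data=df).fit().params["x"], 3))

    # And within subsets of the data: is it still there?
    for name, subset in df.groupby("group"):
        fit = smf.ols("y ~ x + control", data=subset).fit()
        print("group", name, round(fit.params["x"], 3))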
Now comes the interesting part: how can you possibly explain this phenomenon? First of all, how can you be sure that it's not just an
artifact of your data? And assuming it's not just
an artifact of your data, what can you do to better
understand what this is? This is a process of
explorative abduction. You're using a very
very different approach to coming up with an explanation, but it's not an
explanation with certainty. Yeah, you're gonna be coming
up with working hypotheses. But the result of your
testing of those hypotheses is going to be a knowledge statement, a plausibility, and not
a knowledge statement based on certainty. Make sense? So there are a couple of people that have actually done some
of these types of studies. You may recognize some
of these folks up there. I think there are several
types, or several approaches, to these two different forms
that I've talked about. I'm gonna let these
guys talk for themselves and explain sort of how they do this. In AMD, we kind of build on this notion of being an electronic journal, so one of the things we have with every article is these really cute animations, which you'll see in a second, as well as an author's statement talking about how they
actually did the study, which I think is really,
really interesting. See how they came to do the study and how the study was actually executed and the twists and turns they took as they were doing the study. These are the elements of transparency that we were talking
about earlier as well. We see some of this stuff. So this deliberate abduction can take either of these two forms; it could be either exploitive or explorative. The deliberate form is: there's
a phenomenon out there, it's a new phenomenon. In the case of Mike Pratt, in Pratt and Rockmann, it was hoteling arrangements. And in how identity is generated in the context of hoteling, our common theories of identity don't hold. And you'll hear him talking about this. How can people develop a sense of organizational identity when they're not really working in a physical organization? So that's one example. Another approach is
opportunistic abduction, where we go in looking to study one thing, but the data are pointing
us in another direction. You go in to study A
and B is being observed and it's much more interesting. You weren't planning on studying B, because you had no idea
that B even existed. What do you do? Do you ignore B and keep
focusing on testing that theory? Or do you start exploring B? So Steve Barley will tell us what happens when you go in that direction. Let's listen to both. This is always a challenge. (Peter laughs) - [Mike] This was a coauthored piece, so my answer differs a bit from my coauthor Kevin Rockmann's. For Kevin, his interest stemmed from watching his dad work at KPMG. And this is around 1999, 2000, when they were using hoteling arrangements to assign offices. In a hoteling arrangement, you actually don't get an assigned office permanently. You actually have to go
to a front reception area and essentially check out a desk. And what Kevin found was that his dad was not particularly happy
with that arrangement. And so that was kind of
his personal motivation of looking at distributed work. For me, I think it was
combination of things. I had originally become interested in distributed work, given my interest in organizational identification. If you look at the literature for why people attach themselves to collectives such as organizations, things
like physical proximity were important, the presence
of organizational symbols and artifacts. Your creation of strong
interpersonal bonds, these have all been really central to figuring out how individuals connect to their organizations. But these things become problematic when you don't have people
working in the same location. So I was curious about how individuals remain connected with their organizations without actually working at
a central business place. And this was a problem
that actually came up with Amway distributors, because
they didn't have a common business location either. Yet somehow they had to deal with: how do we manage these people? How do we work? And how do we connect
them to the organization? And so what Amway did, is they actually, I mean, cheated in some ways. They just had a lot of events where people got together face-to-face. So there really wasn't any substitute for not seeing each other. And I was thinking, well,
what do organizations that have all this technology where people work remotely, what do they do? How do they manage this process? So that was kind of one
of the motivations for me. And so the question we entered the study with was: how do people who
spend varying degrees of time working outside
of a business location experience their connection
to the organization? So that's where we started. - Okay. So listen to what he said at the very end. That's where we started. So typically, in a paper based on the hypothetico-deductive model, where do we start? With a hypothesis. And what you're hearing here
with abductive reasoning, we start with a question. We start with a research question for which we don't really think we have a very clear answer, or there may be competing answers based on inconsistent
findings in literature. There could be discrepant theories which say, "The answer goes this way." Which means we have to ask the question, how do we explain these
discrepant findings? Let's listen to Steve Barley, who takes a different approach. - [Steve] If I was to do the study again, what would I do differently? Well, I think what I
would do differently now than I did then was if I had been able to process the field notes earlier and discover that the real story wasn't about Toyota and Chevrolet, but was really about internet
sales versus floor sales, we could have begun to concentrate on the comparison of the internet sales and the floor sales much more quickly. But what surprised me, if I
was to do the study again, what would I do differently? - Sorry. Okay, so you got the idea, right? He goes in to study the differences between these companies
and the way they sell cars. But what he observes is that
what's really interesting is how we're moving
towards this new technology of selling cars over the internet. And we don't really
have a good explanation of the interface between customer and salesperson, with the internet coming sort of in the middle. And how does that work? We have no understanding of that, of the nature of those
types of relationships. And he shifts midway after seeing that these patterns
hold, it doesn't matter if you look at Chevrolet or Ford, or wherever else he was looking, they're consistent patterns which seem to be more technology-based
than company-based. At that point he gives up on the original inductive approach, remember, coming in with some sort of a theoretical framework, and just starts from zero: here's an interesting phenomenon, and I want to start building an answer to my question. All right. We can use this abductive approach to do all sorts of things. I mentioned earlier this exploitive model or exploitive form. How can you take some sort of phenomenon that really begs explanation, and use an abductive logic
to provide the explanation? So let me show you this clip. I'm not gonna show you all of them. But Gail Whiteman, who is an
associate editor at AMD now; hers is sort of the Steve Barley kind of example. She goes to Guyana in South America to look at sustainability issues. I forget what exactly
she went there to study. But what she quickly came across as she was wandering around
the jungles of Guyana was that there's this
concept, what she calls corporate social irresponsibility. We all know what corporate
social responsibility is, but there's this interesting phenomenon that she's picking up on,
that she puts this label on, corporate social irresponsibility. And how the hell can we explain that? And she starts collecting data and analyzing this back
in Europe, where she sits. And none of the theories
are really making sense. And she keeps on going through. Finally, she starts to see some patterns that may be emerging
with decoupling theory. And in the end, she actually
shows in the article how she goes through various theories and sees whether or not these theories might be consistent with the
data that she's collected. Ultimately, she comes to the conclusion that most likely what's going
on in this particular case (if I can get it... nope) is decoupling. - [Narrator] When we shop for goods, we often seek assurances that the products we purchase are created
in sustainable ways. This desire for sustainability has created demand for
formal certifications to ensure that companies are
doing business responsibly. But what if socially
responsible certification actually masked socially
irresponsible behavior? By examining an international
logging company's operations in South America, researchers found that the company, the government of Guyana, and the international
forestry certifying body all enabled socially
irresponsible practices. While damaging consequences
may or may not be intentional, firms may be tempted to
use decoupling tactics when certification is at stake. In Guyana, these tactics
enable the logging company to gain legitimacy, but
often at the expense of local communities. In some instances, roads built for logging were used for illegal miners
who then exploited the land. In more serious cases, both illegal miners and employees of the logging company were accused of abusing
indigenous women and girls. Despite these issues, the logging company received certifications that enabled it to expand its markets. Certifications that may
have confused consumers looking for truly sustainable goods. If you think the lines between corporate social
responsibility and irresponsibility are clear-cut, then pull up a stump. Because new research suggests
they may be as tangled as the vines in a Guyanese jungle. (upbeat music) - So induction doesn't
really work in this kind of case; classic induction doesn't really work here. Because you have to go in
with the right mindset, but you can't go in with the right mindset to understand a phenomenon that hasn't ever been labeled before. So she had to see this phenomenon, make sense of this phenomenon, before she could start coming up with some sort of explanation. And she found an existing theory that actually did the job for her. So far, I've been showing you
a lot of qualitative research. I want to show you one more, abduction in the context of experimental research. So Jeff Pfeffer was
looking at this question of reciprocity, and looked at the literature and the discrepant findings about how organizations may promote reciprocity on the one hand, or, on the other hand, actually deter reciprocity, trying to understand what factor could really explain when this happens, or in particular, whether organizations do one versus the other. And there was no really solid theory to explain how organizations could actually detract from reciprocity. So they designed a series of five experiments to narrow down a plausible explanation. Again, there's no confirmation here. You can see it's experiments, right? They have a hypothesis. They either confirm or disconfirm. But it's a series of experiments that are ruling out some explanations and leaving others open as a basis for down the road theorizing. Let me see if I can get it, here we go. - [Narrator] Odds are,
if one of your friends were to take you out to
dinner for your birthday and pay for your entire meal, you'd probably feel pretty special. But more importantly, when
the friend's own birthday came around a few months later, you'd likely feel obligated
to return the favor. But say an acquaintance at work offered to take you out to lunch? - [Coworker] Hey, man. - [Narrator] Since organizations are all about relationships, you'd feel just as obligated to treat that person to a meal in kind sometime
down the road, wouldn't you? Not according to Stanford researchers Peter Belmi and Jeffrey Pfeffer. In five experimental studies,
they found that reciprocity, or the moral imperative to
cooperate with individuals who do something for us first tends to be much weaker in
organizational environments. Unlike personal or familial relationships, at work people are less
inclined to believe that those who do favors for them have done so for altruistic reasons. They're also more likely to calculate what added value they can
receive from the relationship before they return a favor. - [Peter] These are all explanations. - [Narrator] If you
thought that friendship in the office was business
as usual, then think again. (upbeat music) - So each experiment that they conducted was to narrow it down. Could be this, no evidence of that, very similar approach to Dr. House. Dr. House really sets the
tone for what we do in AMD. You could ask, isn't this harking? We're looking at a body of data, we're looking at our findings, and then coming up with some theoretical, some direction for theory,
for down the road theorizing. And yes, this is hypothesizing
based on what we observe, but this is the basis of abduction. The difference between
harking and what we do is we're proposing what may be plausible. We're not making any claims
of certainty whatsoever. Completely upfront and transparent. Now, you could also argue that the real concerns here are about reproducibility or p-hacking, because a lot of this could just be artifacts. So what we tend to do much more at AMD than in other journals is we require some degree of replication. Sometimes, we can't
get that replication. So in those cases, we may ask people to start playing around with the data. Things that we would never ask them to do. To put in these control
variables and see what happens. Take out control variables
and see what happens. And show us all of this stuff. Look at the specific
subsets in their sample and see if it holds in those subsets. Just to see, I guess you could call these the robustness checks that
Jason would talking about, to give us a little bit
of additional information. But I would say probably
in 60% of the papers that are submitted to AMD, we ask for some sort of replication. And it could be as simple as going out and doing an MTurk study to replicate one element of what they're doing. None of this stuff is very new. I'm almost out of time, so I'm gonna skip. But if you look back in time
to how the Hawthorne studies were done or what was
the basis of Hawthorne, they were absolutely driven
by abductive reasoning. They went in to study one thing, they found a surprising result, and they had to try to explain it. And that's what they were doing. So there's a history of that, I talk about Festinger also usually when I give this presentation,
but I won't do that here. As for differences, I've kind of hinted at how the way we do abductive research differs from the traditional hypothetico-deductive model. In quantitative studies, we would never sample on the dependent variable. With abductive reasoning, we sample on the dependent variable. We're not trying to confirm. We're trying to gain understanding. We're trying to move towards
plausible explanations. We try to expose, we try to identify these interesting and systematic patterns wherever we can see them. But primarily what we're
interested in doing is ruling out the usual suspects. We're trying to consistently narrow the range of plausible explanation. There's a limit to how far
we can go in one article. Five studies, like what
Belmi and Pfeffer did, is probably about as far as we can go without boring the reader to hell. But in five studies, we
can probably get ourselves 70% or 80% of the way towards some sort of an explanation, and certainly provide the foundation for,
let's say, an AMR paper which may lay out the general
principles of a model. Or even bigger, some new broad theory. We change our methods throughout. And this is part of the review process. We ask people to go back
and collect the data using different approaches. We ask people to take
different analytic approaches with their data that
they've already shown us. So the review process
tends to be rather long. When Andy Van de Ven started the journal, the principle was: one
review, we make our decision. I switched that around. If anything, we're moving
in much more the opposite direction because with
abductive reasoning, it's not a matter of letting the author necessarily tell us what the findings are and then seeing whether that works or not. It's an exploration. And others have to give some insight to where that exploration may go and what else needs to be done, in order to give us a better basis for down the road theorizing. So we typically see requests
for additional data, and we can go two or three rounds now before we're conditionally
accepting papers. We'd like to conditionally accept after one review, but that's no longer the rule. We also do a lot of stuff
that, like I hinted at before, that you would never see
in a lot of other journals. We ask you to play with
the control variables. We ask you to put in controls, and not just that; we ask you to take them out as well. We ask you to do a lot
of playing with the data in front of our eyes. So we can together see what
those patterns look like, because those patterns are indicative of some interesting avenues
for future exploration. We focus a lot on the comparisons that I talked about
before, those contrasts. That's why looking at
how the patterns display, looking at subsets within the data, are really, really important. That's a hint that there's
these contextual elements that may influence where
the relationships are going. And we put a heavy emphasis,
like I said before, on replication, to make
sure that we're not looking at artifacts. I'm gonna finish up with this slide. What do we think, what kinds of papers are ideal for this type of approach? The first one is to
surface new phenomena. This is straight construct
validity research in a lot of cases, at
least from a psychological perspective, for those of us that do more
psychological research. And this isn't just how to come
up with a better mousetrap. Let's come up with a better
job satisfaction scale. That stuff we won't publish. That's more of a measurement issue and not necessarily a new
phenomenon, a surfacing issue. What we're interested in is identifying new constructs that are meaningful in terms of how we can
actually capture them. And I have actually one more slide, but I guess I won't show that. Jeff Edwards talks a
lot about how important construct validity research
is and how this is really deeply theoretical research,
but you won't see this in most of the mainstream journals. You may see construct validity research that's part of a broader paper, but a straight construct validity study? Hard to find them now. We think this is a critical
aspect of abductive reasoning, a critical aspect of what we
should be doing in science, surfacing these new types of phenomena, and publishing those as they are. The second one is looking at these
surprising relationships. Again, the emphasis is on surprising. It doesn't have to be so much novel as it is something new and different, I guess that is novel. And we don't necessarily require anyone to go out and explain
the nature of the relationship. Because in a lot of cases,
you find these relationships in studies that were never intended to study those relationships,
so you may not have the data to actually understand the mechanisms. So what we get as a
result is something called a stylized fact. And in economics, stylized facts get published in top tier journals. In management, stylized facts
get thrown in the garbage until you can actually
show the mechanisms. And I can't tell you how many
rejection letters I've gotten from top
tier journals saying, "Unless you show us the
mechanisms, we're not interested." This is throwing out the
baby with the bath water. These stylized facts are critically important, and we publish them as what
we call discoveries in brief. Short papers, research notes,
don't need to have mechanisms; if you can give us clues to what they can be, that's great. If you do have the mechanisms, even better. Or at least plausible mechanisms. Not at least; tending towards plausible mechanisms. The last one is looking at
these discrepant findings, trying to find some clue
as to how we can understand when the relationship will go
one direction or the other. This is sort of our response, or actually, I think it was the board of
governors and the academy's response initially. I'm not sure they fully
understood what they wanted to do when they established AMD. I think they had an idea. I think Hollenbeck had a really good idea of where he wanted to go with this journal and what it was supposed to do, but it took five years of
sort of our own exploration to try to figure out how this journal fits in to what we do and
how critically important it is to reshaping the science
that we're all engaged in. And I think there are other journals, or science, I think is
going in this direction as well a little bit. Right? But I guess you'll talk- [Man] Strategy science. - Oh, you're strategy science, right. Sorry about that. But org science is going in
that direction a little bit. And there are other journals that I think are opening up to this approach as well. We like to see ourselves
as leading the way. Thanks. (audience claps) - We have about five minutes. - [Woman] Oh. - [Man] So, great presentation, thank you. - Thanks. - [Man] Made me think about connections with the previous presentation,
and my feeling is that- - We worked this out ahead of time. - [Man] Yeah? (audience laughs) So it seems to me that harking
is the outcome of abduction. That abduction is a very common thing that's happening when people write traditional papers. - Right. - [Man] Yeah. So is there anything wrong with harking? So if we have- - Except for the fact that
you won't really be able to get published very well or very easily if it's clear that
that's what you're doing. - [Man] So essentially, people
in the back of their mind, they have a number of theories
and they have some data. They see this data fits this theory, and then they write hypotheses to say this is the theory that explains this data. That's abduction. So over time as a field,
we will have more support for some theories
explaining some phenomenon. - I think you're going
a little bit further than where we typically go
with abductive reasoning. We tend, if you look at AMD papers, I don't think it's by chance
that in the discussion, you'll see implications for
down the road theorizing. You will not see hypotheses emerging. They'll talk about the
implications for theorizing, they'll talk about criteria
that theories need to meet in order to explain this phenomenon, criteria that are empirically based. But I don't think that we've ever published, I know we've never published, a paper that, based on the data, has specified hypotheses. Or even broad propositions. - [Man] But it seems that the difference then is one of presentation, if you- - No, it's one of how far you go. It's one of how far you go. So with harking, you're
going one step further and you're looking at
your findings and saying, "Okay. "I now predict what I found." And what we're saying
is that it's plausible that this relationship
is explained in this way or that various factors
have to be considered and cannot be excluded from consideration when building a theory to explain the relationship between X and Y. That's as far as we go. Jason. - Yeah, so just, that was good. Thank you. (audience laughs) I didn't mean to sound surprised, but. (Peter laughs) (audience laughs) - But yeah, that was really good. Just two quick questions. So asking for replication, what happens if they fail to replicate and
it suggests it's an artifact? And two, where does AMD
stand on a direct replication attempt, like if that is the goal, the primary purpose of this paper? Do you have to do this reasoning
for it to fit the mission? - We don't look, we don't
demand direct replication. It's great if we get it. We are happy to publish
direct replication. There aren't a lot of journals that say they publish direct replication. We are interested in
publishing direct replication. I think we've gotten one submission so far that was not accepted. (audience laughs) What happens if you go
out and try to replicate what your initial findings were? So far, we've had pretty good luck where we ask for it, we
get reasonable replication. In other words, again,
it's not exact replication, but we're looking for
evidence that it's not just some sort of artifact of the setting or of the measure or of the methodology. We need a little bit
more, we have to make sure that it's not just p-hacking. So at this point the bar is fairly low to be able to convince us
that it's not p-hacking. We're a new journal, so
we need to publish, right? (Peter laughs) - [Jason] Ready? - So I want to make a point about presentation. So you showed us a couple of videos here. And these videos are presumably created to convey a mass message,
maybe to managers, to the broader audience. - [Peter] Yeah, and to students,
part of the idea is that- - To students, to somebody. But the message of the papers,
maybe the research process, the abduction process, those are all twisted in the videos. You get to the core of the idea, the social responsibility versus just the final certification. So you're going to
summarize the final message in a one minute video. An alternative way to think about it in a written format is to summarize the message of a paper
as its tidiest finding, in a clear one-sentence summary. And I think the debate that
some of us are having right now is whether that one sentence summary should be labeled a priori theorization and be framed that way at
the beginning of the paper or whether this should be a
finding at the end of the paper, regardless of how the research was done. - Again, the answer is that if you come in at the
beginning of the paper and say, "This is our idea going into
this study and we found it." There's an ethical issue there. Because in fact, that wasn't the case. It emerged from the data. And you're not demonstrating
anything with certainty, because your study wasn't designed to test anything with certainty. I think it's much more straightforward, and it contributes much more to science, if there is a higher element of doubt, consistent with the process by which you generated that finding, and you talk about it in terms of plausibility at the outcome. So the emphasis is on a heavy-duty theoretical discussion at the back end. Our papers in AMD, with
what the question is, the necessity for examining this question. You've got to establish that prior theory doesn't really provide us
with a coherent explanation, a parsimonious explanation. Otherwise we'd have to take, as Jason said, from three or four different theories, as our students often do, to be able to justify each hypothesis,
and that's a mess. So if you can convince us of that, then you have a basis
for asking the question and looking to the data for the answer. And then the theory
comes in the discussion. (woman speaks faintly) Okay, so thanks. (audience claps)