- Thanks again to everyone for coming. It's my pleasure and honor to introduce today's keynote speaker, my law school classmate,
Professor Dana Remus of the University of North
Carolina School of Law. Most recently she served as senior counsel and special assistant to the President in the Office of White House Counsel where she led the White House
Ethics and Compliance team and advised White House
staff on all aspects of government ethics and compliance. She earned her J.D. from Yale, clerked for U.S. Supreme Court Associate
Justice Samuel Alito. Her research focuses on
legal and judicial ethics and the regulation of the legal profession and she has particular
expertise in the intersection of emerging technologies
and the practice of law. Her publications include
Can Robots Be Lawyers? and The Uncertain Promise of Predictive Coding, where she provides
nuanced, detailed accounts of the technological
displacement of lawyers and of the ethical implications that arise when lawyers use machine
learning technologies. She's eminently versed
in the promise and perils that new technologies hold for the law. And I'm so pleased that
she can join us today. Thank you very much. (audience applauding)
- Thank you. I feel the need to
clarify that I served in White House Counsel's Office
in the Obama administration. (laughing)
(audience laughing) Thank you so much for the introduction. Thank you also Simon and Leah,
who I don't think is here, for putting this together,
both for envisioning it and getting us all here. It's a really fantastic day so far. And I'm so grateful to be here. I will say that this is
quite an intimidating crowd to deliver a keynote address to. So I will do my best to add
something to the discussion, but I can't make any promises
given how interesting and insightful the morning has been. I think it's very exciting
and a positive development that legal technologies and
the impact of technology on the legal profession is getting so much attention these days. But given that various technologies from the internet to email to Westlaw to Lexis to fax machines
have been impacting the practice of law for years now, it leads one to wonder why
there's bigger hype now, or if there is bigger hype now at all. To the extent that there is, I think it's for three principal reasons. Albert introduced us to
one of them this morning, forces unrelated to technology that are increasing pressures on
lawyers to reduce fees. Historically, both lawyers and law firms were resistant, and fairly successfully so, to adopting new technology. So long as clients were willing to pay on the basis of the billable hour, there was no imperative to use technology to increase efficiency. But in recent years, even before 2008 but particularly following 2008, client pressures on lawyers
to reduce both hourly fees and total hours spent have been intense, and technology offers a potential and promising solution, a tool to reduce costs. Second, at the bottom of the market, the access to justice
problem is greater than ever. And again technologies offer
a seemingly promising solution and one that a number of
internet entrepreneurs are enthusiastically
embracing and developing. And third, artificial intelligence and machine learning applications
appear able to perform a far greater scope of legal tasks than previous technologies. Previous technologies principally replaced clerical and support staff, but artificial intelligence applications threaten to displace lawyers themselves, something that lawyers are not necessarily happy with. So, there is some reason, I think, for greater hype and excitement about legal technologies now than before. But I think the danger
of such hype is assuming either that technology is a silver bullet that's gonna solve everything or thinking that its
trajectory is predetermined and we just need to follow it and it's gonna lead us to somewhere good. So, I wanna spend my time today raising some questions that I have and things that I think
we should be thinking about. Many of our speakers this morning have touched on some of them, and I will reiterate those; some are a bit different. I'll do so by talking about
three principal topics. The first two are primarily in dialogue with our first panel this morning and the third is in
dialogue with the second. First, what legal technologies can and can't do, or my understanding of what they can and can't do. Second, how that's actually impacting the demand for lawyer labor. And then finally, how the
profession's regulatory structures are interacting with
technology's trajectory. So first, what computers can and can't do in the legal field. With apologies to those in
the room that know much more about computer science than I do, I will start with a foundational
notion of computer science which is that computers execute rules. To automate a lawyering task it therefore needs to be possible to articulate that task in a
set of rules or instructions. Computer programs have
long been automating tasks that we can describe as a set
of step by step instructions or what I would call deductive
rules for the computer. Sherry's MyLawBC example of the automated will template is an example of this type of programming. To boil it down to a simple example, in part that program is designed to ask the user, "Do you have children?" at one stage in the structured dialogue. If the user says yes, the will will include a provision for appointing guardians, at least I'm assuming so. If no, it won't. This is a very clearly articulated set of step by step rules. This is the same type of
programming that's used for a computer program that searches an online legal database of cases, for example, for cases coming out of a particular court that cite a particular statute. The computer program looks at each case and asks: is this case from the court in question? If no, pass it over. If yes, does it cite the statute in question? If no, pass it over. If yes, list it in the results. So that's the first type of computer programming. It's been around for a while, though there are certainly constant developments and improvements. Where the structure of
information processing is not that apparent,
where we can't articulate step by step how you get
from point A to point B, we may still be able to
model a task for a computer using data driven rules. And I think Ben and Albert's
program of Blue J that we saw this morning is based on
this type of programming. So, if you are predicting
how a court might rule on a particular cause of action and you're a lawyer, you're gonna know the facts of the case, the elements of the cause of action, you might do some research
about how previous cases have come out, how the
court that you are before has ruled in previous cases. You probably can't articulate
as step by step instructions how you get from all of that
information to your prediction. So we can't program a computer based on that first type of programming. However, as they've shown and
as other programs have shown, if we give the computer enough
data on that type of case and how courts have ruled
in that particular case, the computer might be able to develop or spot a pattern based on
which it develops an algorithm which allows it to project
forward a prediction. Predictive coding
software which has gotten a lot of attention,
automated document review, also proceeds on this type of programming. The computer is given
a seed set of documents that a lawyer has coded
as relevant or not. Based on that seed set, the
computer develops an algorithm for relevancy which once refined
can be projected forward. Now one of the really interesting things about these and other
data driven programs, and surely one of the reasons that they're getting so much attention, is that they show that
many lawyering tasks may be more routine than we ever thought. The lawyer doesn't experience
her thought processes in predicting how a court
is gonna rule in a case or determining the relevancy
of a particular document as routine because so much of it is based on tacit knowledge and opaque
information processing. But sometimes a computer can
make that processing explicit and in doing so shows
that some of these tasks have a much more structured
and routine nature than we previously thought. That said, there are certainly
a number of limitations to the ability to automate legal work. And so here I wanna
introduce a note of caution. Most importantly the task at hand has to have underlying,
even if hidden, structure. If the judge makes
different decisions in cases with the same case characteristics, if there's no structure
to the decision making, there's no computer model
that's gonna be able to account for all of that data and make reliable
predictions going forward. If the judge encounters a new situation that's not accounted for in the data on which the computer is trained, again, the prediction may or may not be reliable. So as a simplistic example,
if all the past cases had male plaintiffs, who knows if the computer is gonna be accurate in predicting the result in a case with a female plaintiff. One would hope it would, one would hope that wouldn't matter, but we don't really know. This is a key point to which I'll return: computers are very bad
at dealing with contingencies that lie outside of the data
on which they were trained. One category that remains
exceedingly challenging to automate, precisely
because it is so unstructured, is human interaction. Natural language processing
has certainly come a long way in recent years,
but it has a long way to go and it's still based on linguistic
features and not meaning. Meanwhile, affective computing
or dealing with emotions has made impressive
advances in things like measuring physiological responses or looking at facial responses, but this leads to conclusions like user is frustrated or user is happy. Which again is really impressive, but it doesn't come close to navigating the diverse array and infinite
number of emotional states that we certainly can't
articulate ourselves and yet we navigate all the time. And I raise this because I think that unstructured human interaction is a key part of lawyering. Maybe it will not be a key part forever, but it certainly is for the time being. So with that as background,
I'll turn now to how advances in both types of computer models, those based on deductive instructions and those based on data driven rules are impacting the market for lawyer labor. In some areas the displacement of lawyers has been or soon will be significant supporting some of the headlines. But in others, I think
it's highly unlikely that lawyers will be
replaced any time soon precisely because the underlying work is insufficiently structured
to be automated at this time. Interestingly I think that the fault lines aren't necessarily obvious or intuitive. Tasks that on the surface
look fairly similar to us actually pose drastically
different challenges for automation. So consider document drafting
as opposed to legal writing. I define document drafting as producing standard legal forms like
contracts or wills or trusts that express as clearly and
unambiguously as possible the intent or agreement of the parties. The task has been successfully
automated for some time precisely because it is structured. Specific terms and
provisions certainly differ, but the overall organization
and content of these documents are relatively consistent
across instances. This is why lawyers have long
been using standard templates or forms in starting
the process of drafting. Yes, they need to be changed or altered but it is effective to
start with a basic form. Legal writing, which I
define as the production of written product that
either characterizes the state of the law or its application to particular factual circumstances, presents a very different situation. Certain aspects of say a legal brief are certainly consistent
across individual instances, like the introductory
or concluding material or the statement and explanation
of the standard of review. But much of the meat of the legal brief entails and requires conceptual
creativity and flexibility that is beyond the current
scope of computers. So the legal analysis section entails a complex interplay of law and fact where the facts in question
dictate the relevant law, but then the law tells us which facts are particularly relevant. The use of precedent which
becomes second nature for a lawyer is exceedingly
difficult to automate because the same case can be used to support opposing positions. Making an argument for one
as opposed to the other requires differentiating
between the binding holding and the persuasive dicta. It also requires placing that one case in a line of precedent. These are things that are very,
very difficult to automate. I think that people tend to
be a bit too optimistic about the trajectory of
automation of legal writing based on press stories about programs that automate sports writing or even ROSS's production
of simple legal memos. But I think the comparison
to sports writing is inapt because describing what
happened in a baseball game is at base a fairly structured task. The game can be reconstructed based on the play by play game feed. As for ROSS, it's impressive
what they are doing, but at this point the legal
memos are simple explanations of the state of the law in
a particular circumstance. At this point it still
requires a lawyer to review what the program has spit out. The lawyer revises, adds
and then puts it out. So it's an example of computers, no question, helping lawyers and making lawyers more efficient. But it is not automating
the legal writing process at this point. Another useful comparison, I think, is between document review
in discovery practice, which has been successfully automated, and document review in due
diligence, which has not. In discovery practice,
the goal is to identify documents that are relevant to a preset list of topics and questions. Having the topics and questions in advance makes it a structured task. We can program the computer
to look for documents with a single set of linguistic features all responsive to those
questions and topics. In due diligence, the
very goal is to look for surprising or unexpected things, which at base is an unstructured task. It is exceedingly difficult
to program a computer to look for things you are not expecting. So, a lawyer, and I am
here idealizing a lawyer, I realize there are
problems with comparisons between automated document
review and human lawyers because human lawyers tend to get bored and not pay nearly as
much attention always as they perhaps should in document review. But our ideal lawyer
would hopefully notice a particular contractual
reference that's in violation of say the Foreign Corrupt Practices Act as highly problematic and note it. Unless there is something
in the training data to tell it to look for that, the computer will miss it. So there's significant
variation in the extent of the inroads that computers are making on lawyer labor in different areas, based primarily on whether the underlying work is structured or not. One of the really interesting
and, to me, surprising things that my co-author and I learned
in a recent study we did is that contrary to conventional wisdom where computers are making inroads is not directly correlated
to who within a law firm typically performs that work. The standard story was that computers are eating lawyers'
jobs from the bottom up. So if we remember Albert's pyramid this morning of the typical law firm, they were starting at
the base and going up. And what we discovered is that the pattern is not nearly that neat. Certainly there are
some points that support that, like document review. In discovery practice in
large cases it is widely used, it is very effective and
that is primarily performed by junior associates or
now contract attorneys. But other tasks don't neatly map that. Legal writing, which as I mentioned is still very difficult to automate and is still primarily performed by human lawyers, is typically performed
in the first instance by junior associates who
are either writing memos to inform their partners
of the state of the law in a particular field or
writing the first drafts of briefs that their
partners will then edit. And I think probably most problematic to the direct correlation
between what's being automated and who within a firm performs it, is the difficulty that I already noted of automating unstructured
human interaction. At least at the present time, that permeates lawyering at every level. Giving a client sophisticated advice is typically performed by partners. Investigating a client, doing basic intake may more often be done
by a young associate. But both require human interaction and both, at this point,
are resistant to automation. Now there's an important
category of computer advance that was referenced this morning that I haven't yet discussed
and that is programs that don't just set out to aid lawyers or make lawyers more
efficient, but that reenvision the underlying tasks to
obviate the need for lawyers. Expert systems are, I think,
the ideal example of this. Expert systems as we
saw take a particular, generally fairly narrow legal task and structure it or present it as a structured dialogue with the user. Once the system is designed,
it can be scaled to many users at a cost that is far less
than if that legal advice was delivered to each
individual one by one. DoNotPay that we saw
is an example of this. Blue J's home office class
I thought was gonna be an example of it until I realized that a lawyer is using
it, but to the extent that it might some day be
directly marketed to clients, it would be an example. Another example of this general
category of reenvisioning the task itself is online
dispute resolution systems, which negotiate resolutions to disputes, generally in the ecommerce
world between two users without the involvement of a lawyer. I think that in the future, this category will be significant
and if we're intentional, it can be very significant in addressing some access to justice needs. But I don't think it's
having a significant impact on the demand for lawyer labor right now. Primarily because it's
addressing situations where the individuals would not otherwise be going to a lawyer. Those people with parking
tickets would not otherwise hire a lawyer to overturn
their parking ticket. Most of the ecommerce disputes for which online dispute resolution
is used are low stakes and would not justify an
individual hiring a lawyer. And so I come to my less
than exciting conclusion that computers are impacting
the demand for lawyer labor and making impressive
inroads on legal work, but that they're not doing
so nearly to the extent of some of the headlines
we've seen in recent years which predict the end
of the legal profession and claim that robots will have taken all lawyers' jobs within the decade. Of course, the pace and
trajectory of technologies and of their impact on the
demand for lawyer labor won't develop in a vacuum. The market for legal
services is highly regulated, and the profession's regulatory structures have had, and unless there are drastic changes, will continue to have a significant
impact on that trajectory. Critics of the profession contend that its regulatory structures and
ethical rules are all bad, self-serving tools of protectionism. I wanna start
out by acknowledging that there is no question
we can all find examples and situations in which that is true. I also want to say that I completely agree with the critics who contend
that the profession's principal mode of regulating
new technologies at this point, the unauthorized practice of
law rules, is ineffective. However, I don't agree that we should jump
from the conclusion that existing approaches are
ineffective, maybe even harmful, to the conclusion that all approaches are ineffective and harmful. To the contrary, I think
that key rationales or functions of professional regulation, namely protecting consumers, ensuring access to legal services
and quality legal services by all segments of the population, and protecting the basic
integrity of the legal system are implicated by new technologies suggesting that we should regulate better, not that we should not regulate at all. So starting with consumer protection. It is certainly the case
that in some situations computers and automated legal services beneficially eliminate human error. And may actually increase
consumer protection. But as Frank argued, I
think persuasively before, that's not always the case and in fact, it's very dangerous to assume that that's even often the case. I've already alluded to
the problems that stem from computers' inability to deal well with unanticipated contingencies, and I think that that's
a really useful lens through which to think about this question of when do computers do as good a job, maybe even better, a better
job, than human lawyers. And when are the risks actually
quite high of a mistake. LegalZoom which is
convenient and effective in so many situations has had trouble dealing with individuals who have exceedingly complex tax situations. Now that would not be a problem
if the computer's response to those individuals consisted of, "You have a complicated situation, you should see a lawyer." Instead, in both instances
that I'm aware of, the program went ahead, produced
the will or the contract without regard to those tax situations and then it only came to light far later when the individual
had incurred liability. Returning to predictive coding, which I like using as an example because I think it's really one of the
success stories of automation and even there there are cautionary notes. Many predictive coding
tools are ineffective at spotting hot documents in a case, the silver bullet that wins or loses a case. And the reason, I think, is fairly interesting: it turns out that we as human beings often change our tone of
voice or the language we use and become excessively
vague or formalistic or just resort to full on
code when we're acquiring legal liability or making a
decision we're nervous about or think might be wrong. Which means that the
language used in an email or a document that's very relevant to intent and decision making in a case, might not be spotted by the computer because it wasn't prepared
for it in the training data. So if there's an email
on a key date in a case that just says, "All set it's done," you'd hope the lawyer would say, "Huh, that might be relevant." Unless there's something about those words in the training data the
computer would not spot it. Now I should say that that was
accurate as of 18 months ago. Many predictive coding
programs have now addressed this problem by expanding kind of the baseline of information on which they're trained beyond
documents in a case. So it's a good example that
like technology in all fields, it will inevitably improve and we can guide that improvement, but we very much have to
be cognizant of the risks it's creating and very
intentional about addressing them. Now I wanna back up and say,
let's go back to assuming we haven't addressed any of the problems and we will just take as fact
that certain technologies do not perform as well as a human lawyer. My argument is not that the
trade off of lower quality for lower prices is never worth it. I think in some situations
it likely is worth it. I think for many individuals
who just need a simple will, a computer program that
provides that simple will is all that they need. They do not need the additional lawyering that would come from a lawyer. They don't need the lawyer
to spot greater complexity and make novel, creative arguments. So just because we wouldn't
say that the automated will program is equal to a lawyer, that doesn't necessarily mean that the automated will program
isn't a very good thing. However, the trade off is gonna be very different in different situations. If we're talking about a child
custody dispute or asylum, I think the trade off
is very, very different. There we want the lawyer. So the questions that then arise that I think are absolutely
critical to be addressing, but that are very hard, entail
who makes those decisions as to when the trade off is appropriate and where it's appropriate
and what factors should they be considering
in making the decisions. I think that particularly in
the individual services space, well, I shouldn't say that,
the corporate space too, I think overall, there's
a lot of enthusiasm for the notion that
clients should be making these decisions for themselves. And there's a lot of
persuasiveness to that argument, there's a lot to be said
for client autonomy. But I wanna introduce
some notes of caution here and note a few reasons that
I think we should think hard before just saying this is a decision for clients to make with
respect to themselves. The first and most straightforward reason is that the fundamental justification
for organizing law as a profession is the complex and esoteric nature of legal expertise. Now there's no question
that this justification is overplayed sometimes and
used to protectionist ends. So it's hard to resort to it. But it's also, no question, the case that individuals who have not been trained in the law are not in a particularly good position to determine when a legal program can protect their legal rights and when it can't. Second, as we've just been talking about, there is the access to justice issue. It's a key obligation
of the legal profession and, as I think our whole discussion just supported, we can't solve the access to justice problem by redefining it as access
to computerized services whether they're effective or not. That does not mean that
technology should not and can not be a part of the solution to
the access to justice gap. And I loved seeing Sherry's examples and it was really helpful
to me to have a visual of some of them to think through, okay, how can technology
be used to increase access without just seeing it as a cure all. I think using it, and lots of
legal services, organizations in the states do this,
and I think here too, using it to make the intake process more efficient is very smart. I also think that there
are creative solutions by some courts which are
offering hybrid legal services to pro se clients, combining
computerized intake with human assistance. I think that is interesting
and encouraging. And then there was a reference
to what law schools can do and there are a number of law schools, as I hope Dan will talk
about this afternoon, he's at the forefront of a lot of this, who are developing various
programs that get students involved in either
actually programming apps or using apps in ways
that make legal services more accessible while
also addressing the fact that there's a point at which the person needs to see a lawyer. And how do we ensure that the
technology builds that in. Okay, a final set of
issues that I'll mention are of a different sort, and they come back to your discussion this
morning that was fascinating. And they're specific to
the data driven programs. We started to talk about this this morning and I'll just expand on it a little bit because I think it's really interesting and important to think about. Like big data generally,
my understanding is that these programs give a user an outcome without a detailed explanation of-- Say that again. - [Audience Member] It
does give an explanation. - Of the combination of factors
that produced that outcome? Okay, so.
- [Audience Member] There are families of methods and they work in different ways. There are black-box methods and ones that aren't black-box methods. - Okay, so that's, I guess then this, yes, fits into my kind of
argument in this space which is we can use these
programs to great ends, like you suggested, the idea of having a race neutral predictive algorithm for sentencing suggestions that
can then be used to compare to actual sentences in cases which then can be used
to highlight human bias. But I do at the same time
worry that if we're not intentional about using them in that way, there's a danger that I just
wanna flesh out a little bit which is that discriminatory patterns that we don't recognize right now get embedded in a way we don't recognize. So what I'm thinking of is
an algorithm that discovers a weak correlation between
the race and ethnicity of litigants in a particular
court and the court's outcome. If the algorithm notices that, it's gonna factor that
into its predictions, but if it's a very weak correlation, it could factor it in in a way that doesn't immediately highlight the results to us in a way that allows for accountability
and yet it's baked in there and then those predictions impact how litigants or potential litigants act in the shadow of those predictions. So that's my danger. I was gonna come around to this notion that we can use the
technology in a positive way to counter that danger
and even to counter, to go above and beyond
countering the danger of bias in the technology and counter bias that occurs right now in human decisions, but that it's gotta be
an intentional thing and that it's gonna be an expensive thing. And I'll stop here: this just folds, in my mind, into another argument for why
we do need the profession to be engaged and we do need
regulation in this sphere because I think without it,
the market will just push these technologies in a way that pushes the reasons aside and just
pushes towards outcomes. So, I will stop there. I'm very anxious for your
thoughts and questions. (audience applauding) (audience laughing) - [Audience Member] First,
thank you very much. I think this was very illuminating and as you may have guessed, I totally agree that it's very important that the lawyers themselves
get their fingers behind what's happening inside these systems. Small methodological point,
at some point you said that if there is no structure in the data that is being researched, then the system cannot find structure. There is a wonderful example
by one of Google's AI systems, I think it's called Dream Technique, but it's something with Dream, that I think clarifies
that that is not a fact. So, maybe it's not what you meant, but I still think it's very
important to make the point. So they showed a program
animal faces, nothing else. They trained algorithms unsupervised. That means you give
the algorithms the data and actually you tell the
algorithms, go look for patterns. That means you develop a hypothesis space with mathematical functions that you feed, and then the algorithm is going to find patterns because that's its job. It will always find patterns. After you've trained these
algorithms on the animal faces, they showed it plants. And we are not surprised: the algorithms, the machines, saw animal faces. And actually there's a very nice example online, you can easily find it. Now is this surprising? No, of course not. If you train something on animal faces, it will see animal faces everywhere. What this should remind us is that when things are much more complex and not so obvious, patterns are still going to be found. And this is based on
mathematics not on reasoning. And this will have enormous implication for all sorts of outputs
that are going to be given and you can always translate them and say, well, it's these factors
that gave this outcome. But they might be absolute nonsense. It is very important that
we have people versed in software verification that
can begin this conversation. And that lawyers begin
the same conversation. - Thanks, super helpful and interesting and leads me to a few different thoughts. First, I misspoke if I
said a computer can't find, can't help if there's no
structure in the data. I meant no structure in the task. If the task is performed in
completely unpredictable ways. So I stand corrected there, I misspoke. I do think it's fascinating
and this is a good revision on kind of how I was presenting this. That if we give a computer data, it likely will find a pattern. The problem is it might not be the pattern we want it to find if
we're not intentional. It might find an animal pattern in a completely different
situation and predict a result that is
not at all what we intended. So that brings me to the next point which you kind of brought
home that I just think is incredibly important to focus on, which is that we talk about
artificial intelligence and machine learning as if computers have a life of their own. And we may get there, but at this point, we have to train the computers to do what we want them to do. Such that they are
limited by their training. I think it's a useful cautionary note to comparisons between
Jeopardy and legal services. I get this question a lot. If a computer won Jeopardy, why isn't ROSS gonna give
answers to every legal question that's out there in no time at all? Isn't it the same thing? It's answering questions. It's much harder to train a computer to give an answer to a legal situation, a legal question that applies law to facts than it is to return a factual answer to a question of the type
that Jeopardy usually gives because computers cannot
summarize passages. They can't summarize different paragraphs or different cases. They can return relevant chunks of text that are very responsive, but they can't bring it
together in an answer. So question and answering systems that are based on IBM Watson
require a human lawyer to link up inquiries with
responsive text passages. Once that's done, it's really
cool and really effective. But it takes a lot of
front end lawyer labor. And it means that economically
it's gonna be hard to get IBM Watson to a place where it can effectively respond in a
whole number of areas of law. The last point I can't
echo enough is that we need lawyers who,
oh, I shouldn't even say we need lawyers, that's my bias showing, we need people who have
both legal expertise and computer science expertise. And we need groups of people that bring that expertise together. I, one of the things that was so exciting and fruitful and helpful
to me about my last project was teaming up with someone who has deep background
in computer science. That's not as good as
if one of us had both, but I think bringing
the two fields together, and it's happening more and
more, and it's really exciting, has to be kind of the first
step in the path ahead. - [Audience Member] Thank you. Just wanna go back to your
point about predictive coding. And I'm wondering what you think we can do about mitigating dangers
of machine learning with so many closed data sets. And what I'm thinking
of is Mark Zuckerberg has written about how
he knows he's gonna have a better AI product than Google
because Facebook Messenger data is a much more natural set of human speech than search queries are. If these data sets remain closed, how can we mitigate some
of the concerns you've raised? - Okay, I think it's a
fascinating question. I don't even, I'm embarrassed to admit, but I'll be very candid, I don't understand exactly
what you're asking. So can you flesh it out
for me a little bit? - [Audience Member] Sure, certainly. So if these datasets that we require, like the corpus of knowledge
to feed these legal tools are owned by companies and are not shared, how can we mitigate these
concerns without open data? - Oh, yeah. If I knew the answer, I'd be really famous and/or really rich. I don't know the answer,
but I will say that I think it's a very important
question to be asking. And it ties into the whole question of will legal technologies
actually make legal services less expensive and if so,
exactly how that's gonna happen. I think the story is much
more complicated than the message often is. The message is often, this
is gonna bring down costs. And it's because the comparison, and you know the studies in
predictive coding follow this. It's okay, here's a set
of documents to review. If you do it with a computer,
how many hours does it take? If you do it with a human,
how many hours does it take? It certainly brings down
the costs of human labor devoted to those documents. But there's a whole number of reasons to think there might be
increased costs elsewhere. Some companies are patenting
their legal technologies, offering them free for an initial period, and then licensing fees kick in. And this was raised this morning, there can be this battle of the experts or escalating fees if both
sides have the technologies and it's just creating
higher and higher costs. Another one I've thought some about in predictive coding
is that you have to have individual lawyers who are making the decisions of which protocols to use, which are appropriate for which datasets, and who are often going into court to defend that. Then you're gonna have experts
that you're paying for. So that's a long winded kind of expansion of your point that there are costs here. Other than a lot of
professional attention to it and public attention to put pressure on datasets being closed,
I don't know the answer. But thank you for the question. - [Audience Member] Just to
go back to your example of LegalZoom in certain
situations giving poor advice. I guess for me it seemed like
the relevant comparison isn't LegalZoom versus some person who's always going to give perfect advice. It's a comparison to
the sort of advice that people are generally gonna
get on tax issues or whatever. Because a lot of lawyers
are gonna give poor advice in some situations as well. So I'm not super well
versed on the way that most legal regulatory
associations respond to lawyers who give poor advice, but I suspect it's--
- They don't. - Oh, okay.
- I mean, they aspire to. They are chronically
underfunded and under-resourced, so for purposes of your
question I think it's fair to say we shouldn't be relying on them. - [Audience Member] Okay, so my question was going to be why can't we,
why doesn't it make sense to apply a sort of similar
approach to technologies that offer legal services or legal advice so if they give bad advice
then they're sanctioned or no longer allowed to do that. Why do you need an overarching theory for what sorts of tasks
or areas of the law that technology wouldn't
be able to assist with? Why can't it be a sort of responsive case specific sort of thing? - Okay, that is an excellent question. Let me start by saying that this talk is very much a product of my mindset which is in the paper I just finished which was thinking through, okay, where is automation doing a
good job and where is it not. I think the underlying
theme of your question which I take to be, why
are we saying A or B, why aren't we thinking about, okay, how to use these
technologies effectively whether they can replace
the lawyer or not? I am 100% with you. I think that's how we should be thinking. You also raise a really important question that I get myself in trouble for a lot and have to constantly remind myself. What is the baseline? I fall into a comparison
of computerized services versus what I want lawyers to be. And that creates the
danger of the perfect interfering with the good or the better. So I think you're absolutely right to note
that lawyers are perfect. Now, my counterargument to that
is we also shouldn't assume that a not-so-great lawyer or
technology are the only options. We shouldn't think that we
can't improve the status quo in other ways, which I
think just factors back into your notion of let's
just think creatively about how to use these going forward. So as for the consumer
protection response, I've been thinking more
and more that we need regulatory bodies for
various legal technologies that involve techies and involve lawyers. So I don't think it has to be purely professional regulation, I think there needs to be some regulation. Right now, and this goes
back to the liability issue, there's just not very
much regulation at all of a bunch of the online service providers that are not ostensibly
providing legal services. One last thing that is not
entirely necessary or responsive, but I can't help noting,
the interesting thing about the two LegalZoom categories
is they were problems, they were individuals with sufficiently complex tax situations
that I think a lawyer, well, actually I won't comment on that. But they were unusual. They weren't your standard LegalZoom user. - [Audience Member] Going
back to the previous question, the regulation is one aspect. There is another aspect, which is insurance. I mean, lawyers have malpractice insurance. And we see this happen in other
areas of automation; say for example Tesla, they now offer insurance for
their self-driving cars as part of what they do. I mean, an insurance market
for any type of program can probably
be developed over time; once there is
data that is measurable, it is insurable, generally speaking. Wouldn't it then create a level playing field if, (mumbles) when somebody is not providing good quality decisions
in an automated fashion, their insurance would be very expensive, and when somebody provides good ones, it would be cheap, the same as for a human? So it would create an economic
level playing field if there were similar
insurance requirements across this and (mumbles) prices would depend on the quality? - So, yes, that could be an answer. I think politically it's
gonna be very difficult. I mean it would only work
if these service providers, if it was required that
they have insurance, right? I think politically
that's, I can't envision us getting there anytime soon. If that happened, it
would certainly respond to some categories of harm,
obviously financial harm. Without having thought about it too much, I don't think it's a complete answer because there are lots of legal harms that can't be purely
compensated in financial terms. And one broader category
of worries that I have that's not sparked just by this question, but by whenever we get into the realm of two tiered services and
address the top of the market and the bottom of the market differently, and within each realm
just make sure individuals are being protected and getting services. It raises the question of
how the two spheres interact. And most problematically
raised by your question, how the
law then evolves over time. And as I speak, I realize I'm getting a little
far afield of your question, but while I'm on this theme
I'm gonna roll with it. If at the bottom of
the market we just have computerized legal services even if the computerized legal services
have effective responses to harm in individual situations. And at the top of the market
you've got fancy lawyers with legal services able to make arguments for legal change over time, there's really gonna be
an overrepresentation of corporate interests and
an underrepresentation of poorer interests in
law reform over time. So I'm always a little
resistant to any solution that's just address the
problems in one sphere without thinking about what's
happening in the other sphere. - [Audience Member] Hi. I really like the talk and
I was sort of thinking about the typical language of
automation and job loss, it often centers around
performance-to-price ratios. And price is a rather agreed-upon concept. We kind of know how much things cost. It's the performance side that
actually is rather contested and is a bit of a moving target. And we have to ask ourselves,
well, what do we mean by legal performance, what
is good legal performance or what is average legal
performance or whatever. What is satisfactory legal performance. And what I think is
interesting is it's easy to think of that as fixed,
but it also changes. And I think you made references to this when you talked about the
structured nature of the law. So, we could imagine that the gap between say typical lawyer performance
and machine performance would be narrowed if
we simply made the law full of rules as opposed
to standards, right? And we have some choice there,
that's a legislative choice. And if it's a legislative choice it's also a political choice. And it doesn't even have to
be on the legislative end even if we think about adjudication, there's politicized choice with regard to the structure of law there too. So for example, if we want
to be naive textualists and we just declare that
the way to perform legally is to do a naive textualist approach, the technology already exists to do that rather reliably, right? But if we wanted to be
living constitutionalists, I mean, we may be talking
about waiting until pragmatics gets incorporated into natural
language processing, which might be in 2100. So you could see how, interestingly, for people on the rules side of the spectrum, who tend to be somewhat conservative, and people on the left, who tend to be more into the pragmatics of law, this could become a political debate. That could shift basically the goalposts and shift what we decide
adequate legal performance is. Do you have any thoughts on that? - I think that's fascinating. I feel like there's lots that could and should be written on that. The two buckets of
thought come to mind. First, it reminds me of
folks who fairly are saying, you lawyers have been saying your services are so special and bespoke and individualized in
different circumstances. And there really need to
be benchmarks to measure legal standards and legal success. And once you have those, we can figure out where the computer performs, figure out if the cost
differential is worth it. I think everything you're
saying is really effective. There's something to what they're saying, no question,
there's something to it. But I think you introduce
both important complexity and an important pushback
as soon as we kind of say, okay, let's standardize everything, we are fundamentally changing the nature of the legal system. The other thing that it makes me think of is even when we're ostensibly say, looking at performance measures. Like does the computer perform
to the level of a lawyer. We are looking at the
outcome, the end result, and not thinking through
the fact that the computer gets there a very different
way than the lawyer gets there. And that very different way
doesn't, in the way I'm talking about, affect this specific outcome, but if we do that more and more, it certainly affects how the
legal system is functioning. And this kind of ties into
this whole bucket of questions that we've been talking
about with legal prediction, so it seems like you are
kind of coming at that from the other side saying
if our goal is to automate, what does that mean that we
have to do to the legal system. And I like it because
it's a different angle and window onto how computers
perform differently. And it lets us look at the end results even though we're looking at them as a way to get to the automation. I think it's super interesting. - [Audience Member] Oh, thanks. I mean just a wonderful talk
and I want to make two points. One is that I do think just
to build on your last point that in many situations
legal reasoning by persons for persons is constitutive
of the legal situation, it is not merely one way of
addressing the legal situation. And that may seem circular,
tautological, et cetera. But I would also say that
if you have any reservations about killer robots
deciding whom to execute, you might want to apply
some of those reservations I think in these other scenarios. The second point I guess I would make to get more to the
pragmatic next steps is, I know that North Carolina
had mooted a rule, I don't know if it got passed,
that restricted the ability of software providers to impose these one sided terms of service
that would disclaim all liability for bad legal advice. Are there other examples
of where state bars, other regulators should go? I really liked your idea of like an FDA for legal technology. I think that's great. Any other practical steps we could take? - So I kind of can't agree
more on the whole notion of lawyers and lawyers'
interactions with clients being kind of constitutive of the state which puts lawyers in a different category than other service providers. And I think that's important
to keep in mind, so yes. On practical steps, I'm not
always so good on the practical. (laughing) But I would go back to, I
don't have specific answers. And I know this is kind of
dangerous, it's what every academic says. But I do think that whether
it's through the state bar or the state court system
or the state government thinking that the lawyers
are taking too long, I think it's critical for
states to have commissions that join, however they do
it, join legal expertise and computer expertise. And think both about how
that can address access to the courts,
access to justice generally, and think through the dangers
that can come from that. And think through regulations. So basically I just think
that as a process point, we need groups that are
really focused on this. - [Host] Great, so I think
that's a great place to finish. Thank you again so much. - Thank you. (audience applauding)