HARRY SURDEN: So
thank you, guys. I'm the person who was
over there a moment ago. I'll be moderating this. Unfortunately, or fortunately,
you'll be seeing a lot of me because I am moderating
the first and third panels. But we are absolutely delighted
to have a terrific expert panel here talking about
the use of large language models in law. Our panelists are
extremely accomplished. In the interest of
time, I will only give a very short
biography of our panelists, but I encourage you to look
up their very extensive accomplishments online. Let me start with
Pablo Arredondo. He's Co-founder and Chief
Innovation Officer at Casetext, one of the first companies in this space, with a product called CoCounsel, which is actually a version of what I was suggesting as a good idea in terms of putting a layer between the user and the direct GPT model. Pablo can talk about it. His product is built for lawyers. And on the back end, it works with GPT-4, but it makes sure that the lawyers' data is safe and secure, so they're not just uploading private information to OpenAI. Next, we have Megan Ma, who
works with me as Assistant Director of the Stanford
Center for Legal Informatics, or the CodeX Center at
Stanford University. Megan has a PhD in
law and linguistics and is one of the leading
experts in AI and law. Next to Megan is
Daniel Schwartz, who is a law professor at
the University of Minnesota and has done really interesting
work on GPT and its use in law. He also wrote a really interesting paper with several co-authors where they tested law professors, seeing whether they could determine if an exam answer was written by a student or by GPT, and whether GPT's answer was as good as the students'. Not really to trick them, but to see if they could figure it out and whether it could perform at the same level. I won't give the
spoiler on that. And last but not
least is Jason Adaska, who has been working
for years in AI and law and is the Director
of Innovation at Holland & Hart LLP. So thank you to this amazing
panel for joining us. I'm going to start out with
the first question to Pablo. So Pablo, what are
some of the abilities that this technology,
and we're talking about in particular GPT-4 and
large language technologies, can do compared to just
early last year and before? PABLO ARREDONDO:
All right, yeah. Thanks. First of all, thank you
guys so much for having us. This is a real
privilege to get to talk to you about this stuff. So we were shown
GPT-4 very early in September of last year. And my co-founder, Jake, and
I, basically, within 48 hours, had pivoted the entire
company to do nothing but focus on this. We had been working with
large language models since their inception
five years earlier. And we had been shown GPT-3,
which we thought was neat and had some cool tricks but
wasn't ready for prime time. What we saw in
GPT-4 was basically a literacy that was of a
qualitatively different nature than anything we had seen. And while you're going to hear
this called the generative AI revolution, I submit to you
that for the legal profession it's really not the ability to generate text that matters; it's that it can read text and interpret it and annotate it, and structure it
and restructure it. So for example, we have
a Fortune 50 client that was a beta client and said,
we have these little nemeses, these little expert witnesses
that just wake up every day and testify about how
our products aren't safe. That's how they
make their living. Could you take every expert report this witness has ever written and every deposition transcript they've ever given and give me questions for cross-examination, finding inconsistencies between what they've said? And I said, look,
we'll try this, but I just don't think that's-- come on. That's a bit much. It's not going to work. Well, I was wrong. I said it was not
going to work twice. It worked again
for somebody else. So that ability to go through and substantively identify inconsistencies in a way that an attorney would find useful, that is not grunge work. That is not just filling out certain forms in a certain way. To my mind, that's the thing
that we see GPT-4 doing, whereas even GPT-3.5 really just
wasn't up for it at the same level. And did you mention
the bar exam? Have we done that? HARRY SURDEN: No, go ahead. PABLO ARREDONDO: All right. So we were working with OpenAI. And I was like-- our colleagues, Dan Katz
and Michael Bommarito, had used the earlier model
and put out a paper called "GPT Takes the Bar." They should have
called it "GPT Fails the Bar" because a lot of
confusion happened from that. But the earlier model
failed miserably. It got in like the
10th percentile. Well, with GPT-4, we got
in the 90th percentile. We redid the study. And then, for good
measure, we actually included essays and the
Multistate Performance Test to write the full bar exam. So I think I'll end with this. ChatGPT is great for
raising awareness. But to my mind,
it's a little bit like, imagine a society
that had never seen cars. And then here comes
the first car, but everyone's just doing donuts
on the lawn and then doing 90 in reverse on the freeway. I'm glad you know about
cars, but using them well and responsibly is a
very different experience than using them incorrectly,
where you're typing in things and getting hallucinated
at and all these things. HARRY SURDEN: Well, that's
a really good point. One point I really liked
about what you mentioned was its ability to
read and synthesize, something I didn't emphasize, as opposed to just generating documents. And I really agree with you. That is a huge
game-changer in law. Let me throw out
the same question to the other panelists. What do you see are some
of the new abilities of this technology? And just put up a finger if
you're interested in replying. Yes, Jason? JASON ADASKA: Yeah,
so continuing along the same theme, we've
seen older models be very good at summarization, those tasks where you take some input, a lot of different text, and need to provide the highlights associated with it. But the inference, the reasoning capabilities, I think are unexpected not only to users but, to my understanding, to a lot of the researchers as well. That's the kind of delta where
it's not just raw tasks or not just sort of
document generation, but actually doing more
complicated inference and legal reasoning. It's surprising because
it is not the thing that you would expect a
language model, something that just has patterns in language, to be able to do. It's emergent just from
seeing lots of examples. So I think there are questions
that we have, which are, essentially, what
are the boundaries? How complicated of a
scenario can it get? And I think people who are
doing experiments right now are trying to understand
where that frontier is and how much it'll
change in the future. DANIEL SCHWARTZ:
The other thing I think is pretty
important to understand is, so far, we've been
talking in a sense about you ask the question,
you get the answer. And then that's
sort of the answer. And some of them are
good, some of them not. What's really pretty amazing
about it, in my mind, is that you can have this
dialogue with it, where you get it to further refine its
answers to match what you want. So a lot of times, its
first answer may not focus on what you want, or may
answer a different question, or you may actually realize
your own question was not great. And this happens a
lot in law practice. For folks who are experienced managers, you'll tell an associate,
hey, go write a memo about this issue or this. And they'll write it, and
you'll realize either they didn't quite understand what you
wanted, or maybe you didn't-- you weren't as clear
as you probably should have been about exactly
the scope of what you wanted. But because of how
quickly it works, and because it retains
a memory effectively, about that dialogue, you
can sort of in real-time get it to adjust to the
point where you want it. And I think that's something
that folks who have not actually tried in a sustained way to use this may not have realized. A lot of times,
you'll see, Oh, it produced the first
version of this, and it's not exactly
what I wanted. It's not that it's not good, or that it hallucinated. But if you sort of stick with it
and use it in a sustained way, you can get much, much
better very quickly at, say, having it draft a
contract where you can say, OK, draft that first version of the contract. Gee, now find ambiguities in what you wrote. Great. Now, please expand on that one provision and create some incentive structure. So it really actually
can, essentially, replicate what can sometimes
be, in my experience, a months-long process, where
you get a work product back, you say go back
and fix it, go back and figure out this answer. And you can't
anticipate always where the issues are going to be. You can do that in real
time in a matter of minutes. And so I think that's a
really important element of the technology that
folks need to work on, because it also uses legal skills to be able to realize where the deficiency is, where I want it to expand, where it didn't take this in quite the direction I had hoped. HARRY SURDEN: That is
a really good point. I think, at least right now, the technology takes some getting used to, to
was not very good at it, and then you learn the
things that it can do, and what it can't do,
and how to work with it. And I think you're absolutely
right to encourage people to not just look at the first
output but to experiment with nudging it down
the line, which is also one of the huge advances, as
I said earlier, that we talk about natural language,
quote, "understanding," because, again, it's not
a little human in there. But previous technology
could not reliably understand what you were asking or how you were correcting it. We've all used these
chatbots online. And you ask it, give
me a customer service representative. And it says, do you
want to order a pizza? So now GPT-4 very reliably
understands exactly what you want it to do. Megan? MEGAN MA: I also
want to maybe put out an alternative perspective
that these advances don't just come from nowhere,
as you pointed out. I think if you look into
areas of cognitive science and linguistics,
there are these traces of an ancestry where this
isn't entirely unforeseen. There's an area called
cognitive pragmatics. And within a subfield
of linguistics, pragmatics is almost this contextual understanding. And you see this area that emerged around what you can think of as a conversation game between two humans. And part of this
nudging or being able to tease out
information-- there are existing sort
of techniques that are done in linguistics to
better understand and interpret one another. And we see that, actually, with training on human feedback; that might have been one of the accelerators, as you rightfully pointed out. So while these are kind
of exciting advances, I think what's also exciting
about GPT, generative AI, and large language
models broadly is that a lot of these fields
that were in disparate silos are now coming into
a deep intersection. And I think that's
particularly what's making this especially interesting. HARRY SURDEN: Could you
say a little bit more about the connection between
instruction fine-tuning and other disciplines? Because I didn't know that. I think that's
really interesting. MEGAN MA: Yeah. So this is a paper that
predated just slightly the emergence of GPT. And there was a cognitive
scientist and programmer, his name is Evan
Pugh, who looked into kind of the way in which
human and machines communicate. And he basically asked
a general question, why is it so unintuitive
for us to speak to machines? And over time, they discovered that it's because of the way that we communicate and assign tasks between humans. We actually set out goals. And then it's a search strategy to identify the solution to that goal. And so, they started
to mirror or find ways in which they
called natural programs. And I think that these
techniques actually had helped. And it came in the form of,
say, instruction fine-tuning. But essentially, what he and his team did was build a dataset that is entirely made up of natural language instructions between people on abstraction and reasoning tasks. And I feel like that really
played a role in the way that ChatGPT and
other large language models are now coming to be. HARRY SURDEN:
That's fascinating. And it's also a plug
for interdisciplinarity and working together with-- outside of academic silos. Any other comments on
the first question? All right, the next question
for Jason, so as we said, this technology is good,
but it's not perfect. And we want to make
sure we are very clear about the limitations. And so, Jason, what do you
see as some of the limits? And what do you see as
short-term limits that will go away in the coming
years and longer-term limits for which we don't
really know what to do? JASON ADASKA: Yeah, no,
it's a great question. So I think there are several
limitations that people are thinking about, some
which I think are short term and some which are going
to be the longer term. The one that I think is on
the top of everybody's mind, and it's one that you had
mentioned at the beginning, was inaccurate or
outdated information. The way most people are
interacting with these tools now is essentially just asking
it a question out of the blue, not providing any context. And it, with the newer
models, is a lot better at being able to
not hallucinate, but it still can happen. And that's fundamental
to how these models work. They're probabilistic
pattern matching. There are a number of
techniques that can be used right now to reduce that. One of those areas you had
mentioned, which is not ask the system just to draw
from its general knowledge, all of those 175 billion
weights, what is the right case law, but actually provides
some context and some sets of documents for it to reason
about as part of the prompt. That's one technique
right now that's extremely effective for being able
to reduce hallucinations. There's also things that are
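As a rough illustration of the technique Jason describes, grounding the model in supplied documents rather than its general training can be as simple as pasting the source material into the prompt and instructing the model to answer only from it. The sketch below is a minimal, generic Python version, assuming the openai package circa 2023; the prompt wording and function name are illustrative, not any vendor's actual pipeline.

```python
import openai

def answer_from_documents(question: str, documents: list[str]) -> str:
    """Ask the model to answer only from the supplied excerpts.

    A minimal sketch of prompt grounding; production tools layer
    retrieval, chunking, and citation checking on top of this.
    """
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the documents provided. If they do "
                    "not contain the answer, say so instead of guessing."
                ),
            },
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```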
There are also things happening in the ecosystem that are extending what
these tools do natively by being able to pull
in external sources of information. So OpenAI has released
a beta version of what they call plugins. So you can ask it a question
and have it not just use what is in its trained weights, but actually provide connections
to real-time data streams. It will go out, and it'll
query and add to the prompt to get a lot better answers. So I think the verification, the
up-to-date information, that's a problem right now,
but I think just really in the next few
months, is probably going to be less of a
concern that people have. The other limitation,
and this one, this one's a little
bit amusing to me as somebody who's been
working in technology and AI for a while, is
these large language models are, in a lot of
ways, the dual of what computers traditionally
have been very good at. We think about
calculators and computers as being things that
crunch numbers, can do complicated decision
tree logic if then. And large language
models are based on these probabilistic
reasoning. So for instance, you may ask it
to do some reasoning about tax law, but you probably want to
double-check the arithmetic that it's actually using
to do calculations. Again, those are the things that
I think in maybe the next year or so are going to be resolved
not by any fundamental change to how large
language models work but by incorporating
other modules and plugins. So instead of just having
the large language model, having to answer arithmetic
questions itself, being able to use other pieces
to be able to solve those. Folks have probably
seen it play chess. It does chess relatively
well, but it's not going to play chess as
well as the Stockfish or some of these other custom
systems that are out there. The third thing, and
this is actually-- At first blush, I think it
may seem a bit pedestrian. But I think it's actually a
relatively important limit for a lot of legal applications. And that is something
that's going to sound silly, which is a buffer size
associated with these. So GPT-4 can have
as input and output something like 100 pages' worth of documents and words. There are a number
of use cases that are specifically
relevant for legal, where having the system be able to
reason about a large swath of input and make connections across it is going to be important, and at least right now, that is somewhat limited. There are engineering workarounds. But in general,
it's going to be a limit. I'll give you an example that
our group has worked with. So if you are doing-- responding to something like
an obviousness rejection for a patent application, the
examiner responds back to you and says, I'm not going
to give you a patent. I'm going to reject some
certain claims because I've seen that there's a patent A
and a patent B that's out there. And a clever person
could combine patent A and patent B to describe
what you're claiming is a new invention. An attorney who is responding
to that now has to do reasoning where you're looking across, triangulating information about one patent, another patent, and your own application, and drawing inferences across them. That's the kind
of legal reasoning that requires a big, big
working buffer of space. And right now, there
are some limitations. There are some hard limits on how many tokens or words you can put into this. There's also some
question of how well it's going to work to scale. I think the way we're used
to thinking about computers and technology is having
hard limits measured in things like how fast the
CPU goes, how much working memory do you have. I think this buffer
size is something that's going to be the next scaling parameter, one that the engineers are going to have to actually work hard to expand.
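To make the buffer-size point concrete: every model has a fixed token window shared by the prompt and the response, so a patent, the cited prior art, and your own application all have to fit in it together. A quick feasibility check, assuming the tiktoken tokenizer and a hypothetical 32,000-token window, might look like this.

```python
import tiktoken

CONTEXT_WINDOW = 32_000   # assumed window size; varies by model version
RESPONSE_BUDGET = 2_000   # tokens reserved for the model's answer

def fits_in_window(*texts: str, model: str = "gpt-4") -> bool:
    """Return True if all the texts fit in one prompt with room for a reply."""
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = sum(len(enc.encode(t)) for t in texts)
    return prompt_tokens + RESPONSE_BUDGET <= CONTEXT_WINDOW

# e.g. fits_in_window(patent_a, patent_b, our_application)
```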
The fourth thing, and this is not so much of a technical item, but I'll mention
it because it's, I think, relevant for a lot
of people who are putting the technology to use,
as Harry pointed out, law is one of those things
where invention of facts is frowned upon. So I think that there have been
two threads of conversation about using this technology
in law, one which is, Oh, my gosh, this is amazing. It can do legal reasoning. It can actually do
substantive work. What is this going to do
in terms of supercharging the practice of law? And the other thread,
which is, Oh, my gosh, it might make stuff up. And we need to stay
away from this. This is radioactive. I think the
conversation about how to use the technology in a
trustworthy way is right now-- it's not a technology problem as much as just a cultural problem of people understanding how to use it,
what are the right safeguards. I actually think that is
going to take probably a couple of years for people
to get used to, for clients to become comfortable
with, for attorneys to become comfortable with. HARRY SURDEN: Those are
really great observations. So thank you. Let me throw this out to
the rest of the panel. Anyone want to
talk about limits? Megan. MEGAN MA: So I think one
point that you made, Jason, that I thought was
really interesting is the idea of plugins
and different ways that folks out
there in this field are trying to almost mediate
for some of these limitations. And one paper that
came out recently that I thought was
really interesting was actually hugging GPT
where, essentially, they were trying to leverage
the strengths of ChatGPT, it being so great at
communicating with humans, and then using it
almost to triage to models that are built
for specific tasks. And I think that we see a future
where we don't necessarily have to have one
model do everything. Yes, we see Midjourney,
for example, as a particularly great example
at text to image generation. We're going to see, I think,
more and more of almost models becoming tour
guides and directing you to what you want to-- HARRY SURDEN: Could you
just say a little bit more about what does it mean
for GPT-4 as a model to talk to other
models for those who might not understand that
terminology or those ideas? MEGAN MA: Yes. So we're starting to see, almost
part of the emergent behaviors between models is their ability
to signal in a way various, I guess, tasks and to direct
and say, hey, we think that-- this is a particular task. And they parcel it
out and allow models that are-- so for example,
Hugging GPT, basically, they had leveraged the
fact that Hugging Face, which is this big
repository of machine learning models-- they
know that some models are better than others. And they use it as
a segue into others. And to be honest, I'm not 100%
sure of the technical elements behind that. But what I do see
is this ability to almost build layers
on top of models that are able to better
refine what are the tasks and what are the work
that is specialized for a particular field. HARRY SURDEN:
Yeah, that's great. And then maybe a way to
think about it, something Jason was saying, is that GPT is the middle person who listens to what's coming
in and then decides, Oh, this is a math problem. I'm going to send it
out to a calculator. This is an image
generation problem. I'm going to send it out
to an image generator. So it's in the middle.
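A bare-bones version of that middle-person pattern is just a routing step: ask the language model to classify the request, then hand it to the right specialist tool. The sketch below is a generic illustration in Python; the tool stubs and the classification prompt are assumptions, not how HuggingGPT or any plugin system is actually built.

```python
import openai

def ask_llm(prompt: str) -> str:
    # Generic chat-completion call (openai package circa 2023).
    resp = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp["choices"][0]["message"]["content"]

def solve_math(task: str) -> str:
    raise NotImplementedError("hand off to a calculator or symbolic engine")

def generate_image(task: str) -> str:
    raise NotImplementedError("hand off to a text-to-image model")

TOOLS = {"math": solve_math, "image": generate_image}

def dispatch(task: str) -> str:
    """Let the language model triage the request, then call a specialist."""
    label = ask_llm(
        "Classify this request as exactly one word: math, image, or text.\n"
        f"Request: {task}"
    ).strip().lower()
    # Anything it cannot route goes back to the language model itself.
    return TOOLS.get(label, ask_llm)(task)
```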
PABLO ARREDONDO: I just want to elaborate on one point that may have come up earlier, which is the scarcity of
the chips to run this stuff. So Paul Lomio, who was the
director of Stanford Law Library, when I
was at law school, told me that in the early
days of online research, they would tell
the Stanford kids, you're not allowed to use
it between 11:00 and 2:00 because that's
peak New York time. So basically, you guys
can't do online research. It's New York's turn to do it. And what I've found is
that we find ourselves in something similar right now. We are literally burning through
these servers that we have. There's more people wanting
to use it than we can. And it's slowing things down. We're getting more servers. They're not only
extremely expensive, but they're also just-- it's not. You can't just order
as many as you want. They're like
partitioning them out. And so Casetext, we've
always been, for instance, all students could sign up for
free, all judges, et cetera. We've had to change
that right now. We've had to be
much more judicious on who we can get it out. And so I think there's going
to be some distributive justice issues with this, not the-- Westlaw brings its own
distributive justice issues, just fine without AI. But I think we're
going to see this as something that's
pronounced that getting access to the actual chips
to run, especially the really good latest models. I think the other
models are getting better and faster and cheaper. HARRY SURDEN: Let me
just follow up on that because that's a great point. Do you see that as a
problem of the moment? So right now you can
run models like LLaMA that aren't nearly
as good as GPT-4, but two years ago would have
required a data center to run, and now, you can run
it on your laptop. Two years from now, do you
see running something like-- PABLO ARREDONDO: Yeah, there's
so much capitalist pressure and evolutionary pressure. And, boy, that capitalistic
pressure could do a lot. So I think there's
a lot of incentives. But creating new plants
to create these chips is not an overnight thing. This is something where it's
just-- there's a lag time even when you decide to do it. And Microsoft is trying to
corner the market on some. And then, is this just a
complete geopolitical thing? But in the meantime, literally-- And right now, we're like, oh,
on demo, look how cool, yeah. It's not even working. It's so slow because
everyone wants it. And it's working
for right now, but I think that's going
to get old pretty quickly when they start to-- HARRY SURDEN: Fascinate. Dan, did you have comment? DANIEL SCHWARTZ:
Yeah, I just wanted to follow up on another
element of the-- because I think hallucinations
answers the question of, is the AI making up facts? That's one of the biggest
questions that are out there and biggest concerns. And one of the things,
and we've talked-- and Harry talked about
a number of techniques that can be used
to mitigate that and how the eyes
are getting better. But I think one the real
possibilities here is using the AIs to help you fact-check. So at the end of
the day, these AIs are good enough that you can
ask them, look, substantiate your claims, show me
the underlying text, give me the information. And so, actually, I'm
one of the privileged few who has been able
to use Casetext. And it has these
amazing technologies where it doesn't just
give you an answer. It will then give you the
quotations from the underlying documents. So that it makes actually site
checking, and the type of side checking you might do
as a young associate or as a law review
editor, relatively easy. And you can go
back and verify it. You can even do this within
GPT-4 now if you're-- It takes a little
bit more engineering. But you can say, look,
please give me an answer. And then the next question
is, OK, provide me with a direct quotation
from the underlying source that I can see to
substantiate that. And then you can go back and do your verification.
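That two-step pattern, answer first, then demand a verbatim quotation, can also be scripted so the quotation is checked mechanically against the source. A minimal sketch, assuming a generic ask-the-model helper like the one sketched earlier; the exact-match test is deliberately crude, and real cite-checking tools are far more sophisticated.

```python
def substantiate(ask, source_text: str, question: str) -> dict:
    """Ask for an answer, then a supporting quote, then verify the quote.

    `ask` is any callable that sends a prompt to the model and returns text.
    """
    answer = ask(f"{source_text}\n\nQuestion: {question}")
    quote = ask(
        f"{source_text}\n\nYou answered: {answer}\n"
        "Provide one direct, verbatim quotation from the text above that "
        "supports this answer. Reply with the quotation only."
    )
    # Crude check: the quotation must actually appear in the source.
    verified = quote.strip().strip('"') in source_text
    return {"answer": answer, "quotation": quote, "verified": verified}
```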
So there are tools in place to facilitate
the type of verification that you need to do that are,
really, you'd want to do again. You'd want to do that
with any legal document to make sure that the underlying
references to the cases are accurate, the underlying
references to the whatever it is the depositions, or the
underlying emails are accurate. I think that it is
much less of a hurdle than initially some
people may have made it out to be that these systems
hallucinate because you can also actually use, these systems
to ensure that what they are saying is represented accurately
in the underlying documents that you're providing it. HARRY SURDEN: Yeah,
thank you for that. It's a terrific point. Let me put the next
question to Megan. So many of us here are lawyers. And we're interested. Wow, can this technology
be used in law? Should it be used in law? So Megan, where do you see
these technologies being usefully deployed within law? And what are some of
the benefits and risks? MEGAN MA: Yes. So I think how I see this
question and put differently is what are the
relevant use cases? And more importantly, what
do I really have to do and in terms of changing my
own processes to accommodate for using these tools? And I think just by the
examples that you've shown, it's basically showing
remarkable performance. But I think the operative
word of that question really is "usefully," because a
lot of what people think about is even when there's any
technology out, it's like, will I really use it? That's why I defer back to
Word, that kind of mentality. And I think,
essentially, what we want to embrace in the
coming future is what the possibility of
having it leverage this type of technology to
do basic legal work, which will enable better
access to legal services, for example, or expansion
into pro-bono services. We think of it as being able
to help with legal aid clinics, for example, a separate
issue in legal diagnoses, and helping to kind of
service more clients in need. The other side of it is,
because a lot of what we've contextualize right now
is it's experiments, we're doing experiments, we
have lots of experiments, we actually haven't really
thought about integration into or practical integration
into our processes. And so until we get past
experimenting, that's when we really can move into
what we think is practical use. And so some of the
questions that we might be thinking of
asking is, yes, it's capable of drafting
contracts, but what then are the edits that our
associates or counsel will have to do on top of that? We might see that
they're conducting really complex legal analyzes. But how should lawyers then
react to this type of analysis? What do we do further? What is that next step? And we can re-imagine, for
example, new methods of IP being able to build legal
arguments, for example, or maybe it's creating
that first draft, that first template, and it
also offers these very specific, highly specified
annotated commentary that it took maybe a year or
two for a first-year associate that's entering your
law firm to then be able to pick
up what experience means in your law firm. Now, you're able to do that
through these annotations within your drafts. I think part of our center,
one of the ongoing research elements that we
work on, is trying to uncover how legal
expertise actually differs across seniority
and specialization. And so we're trying
to better represent what actually is that value
add that you get from seniority and partner level expertise. And what we're seeing
here is these models being able to maybe
capture these differences in legal opinions and use
them actually as a strength. We're allowing lawyers to
be able to gain new insights or expand their
critical thinking. And going forward from
that, we anticipate more of an embodiment. And what I mean by
that is we're getting into a space where we can
simulate circumstances of potentially negotiation,
litigation, or merger strategy before it happens and
other dynamic interactions that we weren't able
to gain play before. Or for example, we had a
sparring partner within the law firm, but now imagine that
kind of as a crowdsource type thing through these tools. But to the second
part of the question, there's no technology or tool
that we use without risk. But one of the main
issues, at least I see it, is that risk is ill-defined
in itself, especially in the field of
artificial intelligence and in large language models. So we heard questions
about data privacy. We know that, for
example, ChatGPT is being investigated in Spain. It's also being
investigated in Canada. In Italy, it's been
full-out banned. There's also a lack of
transparency around the data that it has been trained on. We know that it's been
trained on a lot of texts, but what texts exactly? And what are the weights? We don't really know. And there's also this "no man's
land around" the protections of using these models. So I use these models. What do I do from there? This is particularly
concerning, of course, when the information
is sensitive, of confidential nature. And we've seen, of course,
that Casetext with CoCounsel
aren't necessarily there, maybe for those who are in a
corporate, large-scale law firm setting. But what we're seeing
is even between ChatGPT, this free version,
and GPT-4, there are monumental differences. And so, the concerns
around data remain at large than for the everyday person. And so the point I made earlier
about access to legal services and leveraging these
large language models to minimize this gap
actually resurface if we have very large gaps
between the performance of these models,
the free version versus the paid-for version. And this is just a risk at the
foundational technical level. Risks also can be looked
at from the lens of use and interactions with these
models and the harms that can come out of it. There's actually really
well-thought-out taxonomies of harm and risk that are
being put out there, actually, by DeepMind, Google
themselves, and by communities of responsible AI and
AI ethics communities. But they kind of remain
at a level of generality. They don't translate well into
a specific domain, such as law. And the question even about
evaluations or auditing, you might hear from Anthropic
that they put themselves out there as robust,
safe, transparent AI, but what does that really mean? We don't even have a
consensus around what are the relevant auditing
tools or what evaluations we can benchmark against. And so, having these
limited understandings, I want to think about
the tools that we use that are most pervasive
in our everyday work. We think about Google Workspace. We think about Microsoft 365. Actually, these
companies are going to be integrating directly
large language models. And so, already right
now, in our practices, we have things like
auto-complete grammar spellcheck. But if you think about it,
that added layer on top, we talk about personalities. So Harry pointed
out the difference between even Bing and OpenAI. And it is kind of
aggressive nature. We don't necessarily know
what those personalities and how we're able to
necessarily negotiate and speak back with
those machines. And so that's probably
one thing that is a risk that we need to
be a little mindful about because harms don't
actually always come in the place of being
glaringly obvious. Sometimes they come in
very subtle, nuanced, and behavioral
nudges, such as those. So if you were not
someone who knew actually very clearly that
difference in IP questions and had that
argument with Bing, would you succumb to
what Bing has answered, or would you be able
to negotiate back? So I think a lot
more research needs to be done on actually
questions of contextualized harm and risk. And I think that
is what we'll have to do in this buffer
period as we look more into a large language model. HARRY SURDEN: Wow. Thank you for that really
comprehensive answer. And to your point,
I felt like I'd hit a new life flow when I got
an extended argument with an AI chatbot. So that was not my
proudest moment. Let me toss this
out to the panel. What do you guys think in
terms of benefits, risks? Jason? JASON ADASKA: Yeah, so I guess
in terms of incorporating this specifically into legal, one
of the things that I think is really interesting about how
this technology has caught on, and it's a little bit
of a maybe differs in some ways from
the point that you made earlier in
your presentation here, Harry, which
is, in the future, there's probably going to be
other technology that sits in front, and it may
not be a chat interface, I'd actually push
back on that a bit. It's certainly going
to be the case. This is going to be
in a lot of tools. But I think one
of the things that has allowed ChatGPT
to be so successful is that it's in an
interface that-- people don't have to
learn how it works. You're using natural language. In fact, even the
workflow for it drafts something,
"no, that's not right. Can you please fix this
particular paragraph?" That's the way the
attorneys work now. The change is just
that you're not working with a person in many cases. Now you're working
with a machine that happens to be
interacting in that same way. I think that what's exciting
is when you have technology that the people who
are using it don't have to change what they're doing-- it's the technology
has come to them. And I think that's what
we're seeing with the chat interface and large
language models, is you have this very
general-purpose Swiss Army Knife interface. You don't have to learn it. You don't have to
read a user manual. You don't have to know
what button to press. You just chat with it. And there's obviously,
as we talked about, ways to do that more
or less effectively. But they're really
around what you would use to talk to a
person or less effectively. I think there's always
going to be that. And that's actually
one of the things that will allow this technology
to be really transformative, specifically in legal, which
has a history of being pretty conservative in terms of, hey,
let's change how we're working, let's use some other tools. I think I think people-- since the tool allows
the interaction that are natural in
many ways, I think that's what's going to
help it get adoption. HARRY SURDEN: Terrific point,
and friendly refinement accepted. [LAUGHTER] Pablo? PABLO ARREDONDO: So I oscillate
between optimism and pessimism with this stuff as the
optimistic side to me. So when the computer
first came out, Isaac Asimov wrote
an article called "Who's Afraid of the Computer?" And he opens by
talking about Kepler and saying Kepler had these
great insights into how planets move. And then, he had to
spend eight months doing these tedious calculations. And can you only
imagine what Kepler might have thought of
if he had been freed from that tedious
labor and could have spent those eight months
kind of like shower thinking? So the optimist to
me says, can you imagine if we put all
of this grunge work down, all of this unnecessary,
tedious, repetitive, non-intellectual aspect
of law and let ourselves return to like the stuff
we learned in law school to think about? What are the policy
reasons underlying this? Having time to go find a Chicago
sociology study that shows that the actual predictions, these
things, this deeper advocacy that might be possible. So that, to me, is
the good outcome. The bad outcome is this
race-to-the-bottom, McDonald's-ization
of the entire field, where it's all just
cookie cut it out. It's good enough. And we lose some of
the artistry of it. And it might sound
strange to hear me talking about this as the
guy from Silicon Valley who's selling these wares. But-- [LAUGHTER] On the contrary, I was a-- I'm a lawyer. I still pay my bar dues. I think the legal
profession, though much of it is, unfortunately, a
shadow of its earlier self, still has a lot
of nobility to it. And so I would like to-- I hope that we can use
this stuff correctly, to then allow us to both give
more people representation and to really increase the
quality of representation. That's a good outcome. The bad outcome is just, yeah-- what I described a little. HARRY SURDEN: Yeah, great point. And just to clarify, we don't--
you don't sponsor Silicon Flatirons at all. And we are not, but you're
one of the first to-- in the space. That's why we're having it, yes. [INTERPOSING VOICES] Exactly. Yes, Dan. DANIEL SCHWARTZ: So
just going to what will be the impact on the
practice of law, I do think-- everyone has to make
their own judgment. But I do think there's a
tremendous amount of change that is going to happen
in the near term. And I think that it is
impossible to predict exactly how that will
play out because it is a byproduct of how the
technology will change, how different people will
use the technology, how different companies will change
the technology and build on it. And also, frankly,
laws and regulations, what will be allowed,
what won't be allowed. And so, in my mind, the
most important thing is for us, for
everyone, to stay nimble and to think about
both individually. How do you start
using this technology? Where would where would
you use this technology? How would you use it,
becoming familiar with it? I think every lawyer,
every law student, should be, at least, starting
to familiarize themselves with this. I think that there is
some amount of just time in building on that. And frankly, I think,
organizationally, a lot of firms, a
lot of schools need to be thinking about
maintaining flexibility to be able to pivot. And I do think there are a lot
of scenarios where maybe there are a lot of negative
scenarios where maybe you need fewer attorneys. Maybe there are, are less
hiring needs of big law, but then a lot of
positive opportunities where maybe there's
an opportunity to serve more people to
more-- because you can more efficiently, whatever if you can
write a will in an hour instead of 10 hours. Well, all of a sudden, now
there are a lot more people who you can actually help. And so I think that there
are huge opportunities. But there's going to be
huge change and disruption. And I think folks need to
start grappling with that now, both individually
and organizationally. And if you wait too long,
that might be a mistake. HARRY SURDEN: Those
are some great points. And one issue that we
hadn't talked about, but I think is important,
is opportunities for access to justice. So Megan and I are working
on a project at Stanford to help use some of
these new technologies to help underserved communities
who don't have access to lawyers to get help with
some of their legal questions. So Dan, your question
was great, that-- your answer is great that
lawyers need to get involved. And you've recently written
some scholarship about this. So what can lawyers do now? What should they--
how should they embrace these technologies? DANIEL SCHWARTZ: Well, that
allows me to plug my paper. So thank you, Harry. So I have a few different
papers looking at this. And I'm working on more. So one paper I have that is
more just a way of, I guess, a first process for
using this called "AI Tools for Lawyers,
a Practical Guide" that's on Google. But essentially, it
just walks people through some of
the basic things, like chain of reasoning
logic that Harry mentioned. But then it also sort of
talks through how can you-- even really practical stuff. If you have a case that's
too long to plug into it, how can you plug in
that case into GPT so that you can
actually get it to think through the entire
case and analyze it? Or how can you ask it to
cite the relevant provisions, something I alluded to earlier? How can you get it to not
only draft the contract but then identify
the ambiguities and then clarify
the ambiguities?
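For the too-long-to-fit case, one common workaround is to split the opinion into pieces, have the model take notes on each piece, and then have it synthesize its own notes. The sketch below is a generic illustration of that idea, not necessarily the guide's exact method; the chunk size and helper are assumptions.

```python
def analyze_long_case(ask, case_text: str, question: str,
                      chunk_chars: int = 10_000) -> str:
    """Chunk a long opinion, summarize each piece, then synthesize.

    `ask` is any callable that sends a prompt to the model and returns text.
    Splitting on character count is crude; paragraph-aware splits are better.
    """
    chunks = [case_text[i:i + chunk_chars]
              for i in range(0, len(case_text), chunk_chars)]
    notes = [
        ask(f"Part {n} of {len(chunks)} of an opinion:\n{chunk}\n\n"
            f"Summarize everything in this part relevant to: {question}")
        for n, chunk in enumerate(chunks, start=1)
    ]
    return ask(
        "Here are your notes on successive parts of one opinion:\n\n"
        + "\n\n".join(notes)
        + f"\n\nUsing only these notes, answer: {question}"
    )
```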
So it just walks through some basic techniques. And it's really
designed as a way to get lawyers and law
students to start actually familiarizing themselves
with this technology. I think in a year. It will probably
be very outdated if it's not already outdated
now, which I hope not. But I think that the first thing
to do is use this to-- frankly, spend the $20 a month to get
GPT-4 because it is really different than GPT-3.5. So I think everyone
should be sending OpenAI their $40 a month. HARRY SURDEN: Not
Casetext, the 500 a month. DANIEL SCHWARTZ:
Oh, yeah, I know. And Casetext if you can get
it, if you can on their weight, pass their weight-- HARRY SURDEN: We're also
not sponsored by OpenAI. [LAUGHTER] JASON ADASKA: And start
using this technology because there is a craft to it. And there's also
just an understanding of where it will help and
where it won't help and how can you use it better. So I think developing that skill
set is an important first step. And then one of the
things, obviously, I'm a law school professor, and
actually, I came to this a lot because I started
working as a fellow before I became a professor
teaching legal research and writing. And so I think it's
really important to start thinking about how to
train our students to use this well. But I actually am of the
opinion that the first thing we need to do is to teach
them to do legal research and writing without
this technology. I think it is--
one of the dangers here is over-reliance on
this technology to the point where you can't understand
what is doing well and what it's not doing well. And I've heard people
make this analogy. And I quite like it. We teach kids how to do
addition and subtraction, and multiplication
before we give them access to a calculator. And I think that
is very important because there's a way in which
those skills are fundamental. And it's even important
that even though we can trust a calculator. Well, imagine a calculator
that makes mistakes some of the time. And so I think right now
that pedagogically, we need to be teaching students
how to do core legal research and writing, how to
analogize and distinguish, how to write clearly,
how to synthesize rules for multiple cases and apply
those rules to [INAUDIBLE] facts in a sort of compelling
fashion and leverage policy arguments. But then, once we
have that foundation, then allowing them to
use this technology to further refine that and
make it more efficient. And so one-- going
to one study that. So we did one
study where we just looked at how chat those
ChatGPT did on law school exams. And when we found it
got about a C-plus. But it was able to get a C-plus
in a variety of different areas where we just used a single
prompt, single prompt that was used for all different exams. The paper is called "ChatGPT
Goes to Law School." And it's already
performing at the level of a not-very-good law student. But still, in an
employee benefits class, in a constitutional law
class, in a torts class-- so we have a new experiment
that we're working on now, where what we're going
to do is use GPT-4, but we're going to use
it and have students use that in concert
with their own skills and see what type of
difference that makes in terms of their ability to perform. And the hypothesis, we'll have
to see how things play out, is that GPT-4 is going to
allow law students to not only perform much better on
exams and analytical tasks, but we're also using-- we have a separate
experiment we're doing it on simple legal
task, draft to contract, draft a memo, draft a complaint. And what we're I think
what we're going to find, again hypothesis will see,
is that this technology allows them to work much more
quickly if they're trained well and to work much
more efficiently. And so I think that
it's a process. And we need to
train law students. We need to train get lawyers
to use this technology well. And that will take some thought. But I think that if we're
thoughtful about it, it really does represent
a huge change in how lawyers are going to work. HARRY SURDEN: That was a
really thorough answer. And you made a bunch of great
points, particularly about everyone should-- I agree should be at least
trying out this technology and testing it,
particularly GPT-4. And I will say there is a
way to get a version of it for free using Bing Chat. It's not the same thing. But it kind of gives you a
sense of what's going on. But don't put your private
client data in there. Yes. Any other comments on that? What should lawyers be doing? Pablo? PABLO ARREDONDO: Yeah, again,
I think steady as she goes. A lot of the stuff you guys
are learning are doctrinal. And these are century-old
principles and ways of legal reasoning that,
frankly, aren't really impacted by technology. And I think you need to really
have those down solidly. And I would say
that I would rather have a mind that had to
wrestle with the blank page from scratch, and clumsily
futz around and a strikeout and then take longer,
but then learn how to go from complete blank to
an ordered system than somebody who thinks that they're, adding
better adverbs to a draft that GPT-4 comes out. So make no mistake, you
guys are not on the clock. No one's paying you per hour. Once you're practicing,
there's other constraints. If I can do it faster,
it's not about me. Your guys' number one job
is to create the brains and minds that can
advance the profession and serve the rule of law. And my personal Toobin on this
is, yeah, learning how to, hey, have it make a draft,
and I'll edit it. You know how to do that. It's just like if your friend
gives you something to edit. It's suffering through that
pain of that blank page. I don't-- You might get around it. But ask yourself
who suffers there. My two cents,
probably not the view of Casetext's marketing
team, actually, if I asked him about it. HARRY SURDEN: Yeah,
that's a good point. It raises a larger issue
that lawyers are not contract-drafting machines
or document-- they're advocates and problem-solving. Problem solvers help people
through the legal system. So those skills, in conjunction
with the basic skills, are still going to be necessary. Did you have a comment? No? So we have one
more question, then we're going to open up to
the audience for questions. So we hesitate to speculate,
but we'll do it anyway. Where do you see this
technology going in law or elsewhere in the
next two to three years? And I'll just throw
this out to the panel. And I picked two to
three years because this is changing so rapidly. I don't even think
five years from now we can do a reliable
prediction, let alone 20. PABLO ARREDONDO:
Yeah, I think it's going to be much
more quickly adopted than anything we've ever seen. Law is a conservative
group overall. But if what I've seen
over the last seven months is any indication, they seem
to be making an exception for computers that can read. And I think you'll see
widespread adoption. I think that you'll
start to see some fraying of the billable hours, some form
of the business will change, and that probably will
impact how many people are getting hired and for what. You might literally start to
see that move pretty quickly. And I don't mean that
in a doomsday way. I think there have be maybe
different distributions of associates doing
different things. And I think we'll all be use-- Yeah, it'll be on our phones. It'll be just
second nature for us to be using these LLMs to do
the vast majority of things we're going to do. And I think we'll
find it quite joyful. I think it's going to be
a very wonderful feeling to have an AI that can
schedule a damn calendar event, and then adjust for the Eastern
time versus Pacific time, and do all of these things
that if you actually add it up in our life, we're dealing with. Great. Dan? DANIEL SCHWARTZ:
So I remember when I was a young associate in, I
guess, 2004 doing discovery, and we didn't even
have e-discovery then. And I remember
sitting on my computer and literally just doing-- looking for keywords
for hours on end, and then billing clients
thousands of dollars. And I am like, how
did I go to law school and do a clerkship to
become a trained monkey? And I think that
there's still a lot of that in the practice of
law for young associates. Let's be real. And I think-- so
I do think there's the real potential
for this technology to allow lawyers and law
students to have more fun in doing their job and also
to have more work-life balance. I don't think that this
technology is going to put lawyers out of work. I just don't. Do I think it will
change demand? Do I think that there may
be some reduced hiring needs at some places? I do. But I think that
there's actually a lot of ways in which
things like soft skills are going to become
more important. Can you communicate with people? Can you translate what's on
the page into an explanation? Can you develop relationships? Can you be an advocate? Can you can be a
strategic thinker? I think those are actually going
to be the more prized skills that lawyers are
going to need to have, and law students are going
to need to cultivate even in the next two or three years. And I do think the
practice of law for many will get
more fun because we can automate what is
still a pretty grueling process in some element,
like the discovery or produce a complaint that-- of the type that if you're
just an auto accident lawyer. It's the same complaint. You're just copying and
pasting funny things. You can do that now without
having to spend an hour copying and pasting, or if
you're writing something, and it's the summary
judgment standard, you've written it 8,000
times, you can just tell GPT, OK, do that for me. So I think I think that
there is a lot of hope but also a lot of risk, even
just the next few years. HARRY SURDEN: So lawyers
having 20-hour workweeks? We'll see if that happens. But no, those are
some great comments. Jason? JASON ADASKA: Yeah. So I think the-- echo the statements
of the other speakers, in terms of what the
future's going to look like. I think it's definitely
going to be everywhere. Right now, we see it
in a couple of tools. I think it's going to-- in terms of technology for
law, it's just literally going to be everywhere,
either on the surface or underlying it. In terms of impact,
I would expect that the transactional
practices, especially those that have fixed fee
models, are going to be most incentivized
to figure out how to take advantage of that. So I think that's probably where
they'll be in initial adoption. And one of the things
that I think maybe seems strange to
consider right now-- GPT-4 came out in March. We all have a little sense
of vertigo, of quickly, this has changed. I think we're going to be-- I think we're going to adapt. We're going to adapt and
almost be bored of this in-- even by the fall. We're currently
amazed by its ability to be able to address summary
judgment or draft a patent or do analysis. I think people are very quickly
going to mentally adapt to, OK, here's a set of tasks before
that I had to grind through. Why am I doing that? I should be using some
tools to either help me do quick summaries of
things to do outlines, to draft small pieces. And I think it's
going to quickly find its way into just things
that people take for granted. HARRY SURDEN: Megan? MEGAN MA: I think,
well, not only do I agree with the
other panelists in terms of their speculation
for the future, I think another interesting
area of what large language models could do is its
ability-- and Jason teased this, is its ability to tease
out our implicit behaviors. I think a lot of our existing
legal work is actually-- We think about some
of the best clauses that we've ever written. We tend to like to keep
those to ourselves at times. And I think that if these
models are integrated into our workspaces, such as our
Google Workspace or our Office, we might start to see the habits
in which we've taken over time and how we draft. And I think that that's going to
be really interesting in terms of the future of being able to
almost adapt and change the way that we act or
behave as lawyers. And so I think that is
going to be an area that is particularly interesting. HARRY SURDEN: Great there. Well, those are all great
comments on our speculation. So let me open it up to
the audience for questions. We have a tradition here
at Silicon Flatirons that our first question
goes to a student. So, Oh, we've got a student
very eagerly volunteering. Terrific. And we encourage more
student questions as well. AUDIENCE: Hi, my
name is Christine. I'm a PhD student in
computer science here at CU. I am wondering what you think
some of the practical solutions might be to the current
disparity in access to this technology,
either directly or as a third party,
fourth party beneficiary? PABLO ARREDONDO:
We need more chips. We need better GPUs and more
of them as fast as possible. I think that's, to my mind,
one of the things that's making it very expensive,
just literally running it. Maybe I wasn't sure. Are you looking for like a
technical or a societal answer? Both? AUDIENCE: I heard whatever
[INAUDIBLE] saying. HARRY SURDEN: Yeah, I
have a thought, which is, there should be-- the government
does need to get involved. This is a big enough
technology like the internet. And I think we-- somebody said there's
been ideas about having an Apollo program, and not
just around the technology, around ethics, and governance. And there, I think there
should be a public free option. This is an important enough
technology down the road, like electricity
or running water. Not immediately, but
I think the government needs to get involved. Yeah. MEGAN MA: Sorry. I think also as well there's
an important question around open source and licensing
of certain underlying models. We've seen most
recently, Dolly came out, which is an entirely
open-source model. I think there needs
to be more of that. And a lot of these
models are built on LLaMA, which is a model
that Meta put out there, but the licensing issues around
that are not well-defined. And I think until we
resolve those questions, then we can have definitely
more space for free access to these models. DANIEL SCHWARTZ: I'm less
convinced that it's imperative that everyone get their
hands on these models and use them because I think
that the value of these models really depend-- and the use
really depends on context. And I think that one of the
great things about these for lawyers is how much more
efficient it can make law, and that then can actually
have huge distributional consequences. So there's this
well-known fact in the law that we have too many
lawyers on the one hand, we have hugely not enough on the
other because there's so many people who don't
actually have access to legal services who need
them for wills, for divorces, for custody matters. And I think that-- I'm not sure that we're
ever going to get to a point where people are going
to be able to or at least in the next two or three
years, let me put it way, where you don't even
need to hire a lawyer. It's just, Oh, I have a will,
and I need this, and boom. There's some tools that
purport to do that. But where I do think this
can be transformative from a distributional
perspective is maybe I only need to pay a
lawyer $100 to produce a will because, for
them, it's literally just a matter of getting
a few data points, plugging in and
then just checking. And making sure that
it's doing what you want. I do think we need the human in the loop there who has some expertise. But that expertise
can just be a matter of let me spend 15 minutes
reviewing the output and fine-tuning it,
saying, Oh, expand this, or add this, whatever else. So I think that distributional
concerns may actually lead not to the conclusion that we need to give everyone access to this all at once, but rather that we need to make sure that folks are using these tools in a way that allows them to achieve efficiencies that can serve a wide subset of the population. JASON ADASKA: Yeah,
so I'll just add. It's clear that this
technology is a democratizing force for lots of
specific information that previously
people would have had a hard time getting to. The risk is hallucinations
and whatnot, as people may not be able
to trust all the information that's coming from it. So I think the bottleneck right
now is anybody can go to Bing. And you can talk to it. You may or may not get
what you want out of it. I think figuring out how to get
the correctly curated versions of this out to the right people
is honestly an open question. I think it's going to
require the right experts and the right
regulations in place to be able to get a
vetted version of this for the different domains
where it might be applicable. HARRY SURDEN: Another question
from the audience over here. AUDIENCE: Thanks. Am I supposed to
introduce myself? I'm not going to do that. So you've talked
about how this is going to impact the practice
of law from the, I would say, internal perspective. And I know this is like
a whole other panel. So I'll try to ask
the question, then give a more specific example. But do you have any thoughts on
the current impact it's having? I can speak for
myself personally, of advising your
clients, especially if you're in the
technology sector or work in-house, on the use of these tools. Or is your company
developing a product that uses them, which they
probably are, by the way? And the context I'll give you, just to narrow that question a little bit, is software. You mentioned code generation tools, of which, of course, there's Copilot that everyone's heard of, but there are many other tools and models around that. I think that's most relevant
to this audience, both from a legal and the
technology sector. That's part of the
practice of law. It is not just how we
might use them internally, but how are we
dealing with advising on both sides of that coin? So I just wonder if you
guys had thoughts on that. DANIEL SCHWARTZ: I think
the most obvious thing is you need to have a policy. You need to have a policy. So there are so many
employers out there that I think don't even
have a policy for this for their employees,
for instance-- or that just ban it. And I think, actually,
in the very short term banning its use may not be a
bad idea in certain contexts. It's not clear to me that-- given there's so
much uncertainty, it's not clear to me that we
want certain employees to be using it to do their job
right now because we don't know exactly how to vet it. But I think that, at the very
least, you need to address it. And I think that there are a lot
of-- you need to do it quickly. But a lot of times,
systems, over time, have developed to produce
internal policies or internal-- And they take time. And I think we don't
have that time right now. You need to have at
least a basic policy in place, like for universities
or even for my students. For right now, I'm going to ban them from using GPT on their exams. And then maybe we'll have
a class where we teach them how to use it and
we're using it, but I think I think
just addressing it is the first step. PABLO ARREDONDO:
And I would just say: educate yourself. Again, if all you saw was people doing donuts on the lawn and going 90 on the freeway backwards, you'd say, no cars. How about that? Seems like a very good idea in that world. And I think really going and understanding how these things can be used responsibly, which means secure servers, where the data is not retained, where you're not feeding it back into the model, where you're coupling it to a search engine to get past hallucinations, where there are guardrails to check quotes,
all of these different things, I just think you need to
learn about them because it's a very different world when
you're using them correctly. HARRY SURDEN: Great. Another question
from the audience. AUDIENCE: Thanks. My application is healthcare. I'm thinking about the very
direct utility of your lessons to a physician trying to
do the same kind of stuff with the same kind of problems. One of the threats that
we deal with in medicine, though, is a direct-to-consumer application, bypassing the provider entirely. And they've got access to
these amazing tools. 100 years ago, we would say
it was illegal for a consumer to have a stethoscope. That's ridiculous. Now you can get an ultrasound. You do your own. It's not just democratization, but there's going to have to be some thought about this. Do we have a way of making these apps safe for direct consumer use, where the practitioner only learns about them late? You come into the ER. Your appendectomy
is halfway done. And we need to figure out-- [LAUGHTER] --where did Chat--
you know, leave off? So then, I would throw this back
in because I don't know if you can answer that for medicine. Good luck to try. But in law, you got
this pro se thing. And what happens when a
murderer shows up in court and says yeah, I got my
defense all prepared. Don't worry, judge. Here, I got it all here. I'll hand it to you. Ready. I'm innocent. So in any profession
you'd like-- engineering, aerospace,
[? corrections-- ?] pick the fun ones. What happens, though? We can't control this entirely as professionals. It's already long out of the--
me-- yeah, go right up. [INTERPOSING VOICES] PABLO ARREDONDO: --let pro
se people use our service. You have to be an
attorney because of that. But also ask yourself, you can
get pretty informed by a search result on Google
and think you're ready to go into your search. Do you know what I mean? So ask yourself,
how much of this is just consumers having
access to information that they could
then foolishly think suffices to make an informed
professional decision? And how much of it
is actually about AI? I think maybe it's
a matter of degree. But we don't want
people using it unless they're attorneys
because I think it does give the illusion
of maybe being more concrete legal advice than
it actually is. It needs an attorney's oversight. AUDIENCE: So you're going
to make it illegal for-- [INTERPOSING VOICES] PABLO ARREDONDO: --to
get our revenue up. We'll finally be-- [INTERPOSING VOICES] HARRY SURDEN: Anyone else on the
panel want to comment on that? DANIEL SCHWARTZ: I
think licensure issues are really tricky. I tend to think that they've been abused in many settings to actually protect incumbents. But at the same time,
I think that they're necessary in a variety
of settings as well. And so I think that
we will continue to rely on licensure to
ensure that-- pro se is tough, but to ensure that-- you can't just hire your friend to represent you and that, if you're going to get medical treatment, you have someone who's licensed
and knows what they're doing. But as I said, I
think it's tough. I think that we'll also see
the abuse of licensure rules to protect industries that
maybe should be shrinking. And whether that's
law or not, I guess-- I'm not sure. But I do think that
that's a possibility. And so they're just
tricky issues here. I don't know. I really don't know how to-- what the right answer is. I think it's going to be
very context-dependent. But I think licensure is the
biggest answer we can provide, as well as warnings. And we have the
warnings there already. Some of them are-- Google's more aggressive
in its warning about what Bard will do, saying,
look, don't trust this at all. This is completely-- We don't have that
for ChatGPT or GPT-4. There are actually fewer warnings. And so I don't know. I also don't honestly know how effective those warnings are. It tends to be that most warnings are not that effective. MEGAN MA: I just want to-- HARRY SURDEN: Oh, go ahead. MEGAN MA: I just
wanted to go back to your question on medicine. So a while back,
there was a tool called Babylon Health, which was
trying to triage and diagnose medical symptoms. And it purported to say, we have 92% accuracy, while the average medical professional with 30 years of experience is at 85%. But what we really get from this information is the question: what do we really want from our professionals? When you receive this information, is its accuracy the only thing that we're weighing? With doctors, we want that empathetic angle. What if you receive
bad medical news? This machine is not going to
give you that same empathy. And so I think when it
comes to making that analogy with lawyers and
whatnot, we really need to be rethinking what our role as lawyers is, a role that extends beyond the information that we are communicating. HARRY SURDEN: That's
a great point. And one additional point I'll
layer on top of that is we always want to weigh the
benefits and the harms. So those are some real harms. But also today, people
are being harmed by not getting medical or legal
advice that they can maybe get in this new world. Well, we are out of time. So please join me in
thanking this terrific panel. [APPLAUSE]