When students go to university
to study mathematics, the undergraduate fantasy that we teach
is that we have this bedrock of axioms and everything is built solely on
these axioms through deductive argument, and that we have a towering edifice
of brilliant mathematics, all of which is solidly established
in an incontrovertible way. It's a beautiful conception; the problem is, that's not true. It's not even close to the truth. It's almost impossible
to live up to that fantasy. Some of the top mathematicians
in the world have been looking at this as a philosophical question. I've actually been working with A.I. If you start always inputting your ideas or proofs into a machine,
at what point does the machine do a better job of helping
guess the next step? Obviously this is something very
hot right now. It's moving fast
and the question becomes, well, what is it we want from proofs? What have we
historically needed from proofs? What is it we believe when something's proved, and how does AI change that? These are massive questions that are popping
out of the woodwork right now. I'm Andrew Granville; I work in analytic number theory. When I was young, I worked on Fermat's
Last Theorem before it was proved. Today, I work in ideas of L functions
and multiplicative functions. But I'm interested
in all different aspects, including computational
and algorithmic questions. I've long been interested in popular
writing in mathematics. My sister is a writer. We've been thinking
it would be fun to do a graphic novel in mathematics, so we worked together
to develop it and eventually wrote it. In the graphic novel,
I felt like there was room for us to develop philosophy. A philosopher of mathematics, Michael Hallett, read it, and he was very interested in the way we portrayed mathematics being done. You cannot prove them, because you cannot go back to anything more primitive than those propositions themselves; they are not in need of justification. Talking to Michael,
I realized there is this debate of what it is we mean
when we say we've got something proved. So Aristotle said that to prove
something is true, to establish that it's true, your argument should rest on what he called primitives, things you already knew to be true. And eventually there have got to be some atoms there, the so-called axioms that you rest all your arguments on: the basic things you cannot define. Like point, line, and plane, if you're talking about geometry,
you can't define those. What you do instead is lay down
some axioms about them. The phrase that comes to my mind
is "these truths are self-evident," which, you know, is interesting at many levels when you compare it to mathematics. I mean, there's a certain arrogance
about it. The way it's traditionally been done
is that we publish papers, and those papers are put in journals or books, and those books are put in libraries. And then when we want to verify a truth, we go to the library, check out the book, and see if it is exactly as we wanted. It would help us gain new understanding, or we'd have to revise what was in those books. Today, the artificial intelligence
that is used for proof does something similar, except it stores that information within the program. So programs like Lean have a library of things already proven, based on axioms, and then, with suitable work (and it's complicated), a mathematician can input their proof into Lean and verify that all the steps are logical, at least according to Lean. So what does Lean do? I think I have a proof of a theorem
and I want a formal proof. I've written something that I am a bit
iffy about. I don't want to get into all of the details
of the ins and outs of whether, you know, as you sort of go back to the tree of knowledge, everything's really going to work. So the idea is to input that into Lean. It acts like an obnoxious colleague
who just won't let go. If they don't understand, they will not
stop pestering you with annoying questions. But sometimes those colleagues
actually help you because by explaining it to them
you realize, "Oh yeah, it's simpler than I thought." There is a particularly famous example involving Peter Scholze. He had a very, very difficult proof that he wasn't 100% sure of. He would try and say, "Well, this is true, and you should know this, Lean," and Lean would say, "I didn't know this at all. You're going to have to explain this better." The bits where he had been uncomfortable with what he'd written were where it asked the most questions. So he felt, okay, that was verification,
and it was sensibly asking the right questions. So it's an exciting time. The question is, will machines
change mathematics? How do you get computers
to lead you in a proof, not just follow you or make suggestions,
but prove things themselves? This is something in its infancy; a lot of the work on computer-generated proofs up to now has not done much of consequence. Very interesting in itself,
but not in the bigger picture. But there are some very interesting,
exciting new ideas, which give one some optimism that we may see the start of computers generating interesting new proofs. And so this starts
to become a little frightening. I mean, we value our profession; we value our work by the profundity of its proofs. But if we can rely on a machine to get all the details right (well, most of the details right), then who are we? What is our training? What is it that we value? If we don't need to think about proof, then we're not going to be trained
in thinking about proof. So who are we going to become? We're going to become more like physicists,
probably, and just say any old nonsense and hope the computer verifies it. I think it's very hard to predict
what even will be the point of doing mathematics in 30 or 40 years' time. Computer-generated proofs present new possibilities. Over time we see, of course, more and more things that computers are able to do, and it's very unclear what limits there are.
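To give a flavor of the step-by-step checking described above, here is a minimal sketch in Lean 4. The theorems chosen are illustrative examples, not statements from the talk; `Nat.add_comm` is a fact from Lean's library of already-proven results.

```lean
-- Lean only accepts a step it can trace back to its library of proved
-- facts, which ultimately rest on the axioms. Here the proof appeals to
-- the library lemma Nat.add_comm:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A concrete claim can be discharged by computation with the `decide` tactic:
theorem two_plus_two : 2 + 2 = 4 := by
  decide

-- An unjustified step leaves an open goal, and Lean keeps "pestering"
-- until it is filled in; `sorry` marks such a gap and is flagged:
-- theorem unfinished (a b : Nat) : a * b = b * a := sorry
```

If any step cannot be justified, Lean rejects the proof, which is exactly the "obnoxious colleague" behavior: it will not let an unexplained claim pass.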