Welcome back, everybody. Please come in as
fast as you can. So I'm very, very pleased to
introduce Tim Palmer, who's a fellow Brit. He is a Royal Society Professor
at the University of Oxford, and he studies dynamics, and
predictability of weather. And he's a pioneer of
probabilistic ensemble forecasting techniques. And he bridges between a
deep understanding of theory, and practical forecasting. And for many years, he
worked at the European Centre for Medium-Range Weather Forecasts, ECMWF, which is the foremost medium
range weather prediction center. Over to you. Thank you very much. Let's see. I want to start by posing
a kind of slight conundrum, I guess, which is that we
have these two great figures, who, clearly, their interests
are very closely related. As has been mentioned,
Jule was a founding father of numerical weather prediction. He led the team at
Princeton that gave us the first numerical forecasts. Ed transformed our understanding
of predictability of weather. And yet, I would say for
many years, if not decades, there was very
little interaction, certainly between the two fields
of science, that these two pioneers created. And I want to discuss
the reasons for that, but conclude on a
more positive note, and say that things
are changing very rapidly in the last few years. OK. Not working, it gives
me a [INAUDIBLE]. OK, so just to
review very briefly. This was indeed the team
under von Neumann, who I suppose took the concept
from Vilhelm Bjerknes, who's been mentioned, Lewis
Fry Richardson in the UK, and maybe one or
two others, that one can treat weather forecasting
as a traditional scientific initial value problem. If we have enough
observations to determine initial conditions, and if we
know the equations of motion, which we do, then
in principle, we can determine some
future state from that. But it was clearly
obvious that to do this in any meaningful
way, to advance predictions faster than the weather
advanced itself, we'd have to rely on
technology that before the war, didn't exist, and after
the war, started to exist. So digital computers. And from those first days,
numerical weather prediction has really not looked back. My own institute,
the UK Met Office, where I started to
work in meteorology, embraced numerical
methods in the 1960s. And around that
time, people began to ask, well, how far ahead
can we forecast the weather? People like Kiku Miyakoda, for example, and others at GFDL and elsewhere, made it clear
that maybe around 10 days was a good sort of
time scale to think about the practical limits
of weather forecasting. So this notion of
what was sometimes called a deterministic limit
of numerical weather prediction began to arise. And the whole kind of
concept underpinning the European Center for
Medium Range Weather Forecast, which, as John said, I
worked for many years, was based on trying
to kind of realize this theoretical idea that
we could predict weather 10 days ahead. A recent paper in Nature a
couple of years ago, I think, summarizes the whole
evolution quite well. In that there has been,
from those early days in the 1950s, a revolution in this
particular field. But it's been a kind
of quiet revolution, and the public at
large are probably not really aware of
what has happened. It's been not only
the sophistication of the numerical weather
prediction models, it's also about how
satellite data has really transformed our ability
to create accurate sets of initial conditions. And that's, by the way,
illustrated on the right hand graph, which shows
how skill scores have been rising over the years. But it also shows how forecasts
in the southern hemisphere, which were traditionally
much poorer than in the northern
hemisphere, have become pretty much the same level of skill. So those colors
disappearing means there's no real difference
between forecast skill in the northern and southern hemispheres. And that's just basically
due to satellite data. And that, by the way, has needed
very sophisticated algorithms to actually assimilate
that data into the model. So it's all very, I
would say, encouraging, and it's a nice story. But you know, as
everyone knows, there's an Achilles heel in this
type of weather prediction. And that is that
sometimes, it goes wrong. And when it goes
wrong, it attracts the derision and really ridicule
of public, and media alike. On this side of the pond,
you have lots of examples, no doubt. I'm going to just
focus on my side. So this is a very famous
event for those of us who lived through it. This is a storm that famously
reduced the town of Sevenoaks in Kent to no oaks. And caused untold damage across
the whole of southern England, which I'll show you. But was completely misforecast. Even the night before-- it hit in the early
hours of the morning. Even the night before,
the forecasters were just talking about a
little bit breezy the next day. So this is an example of
the derision and ridicule that the poor forecasters
had to endure. This is the main BBC anchorman. The next morning-- you chaps
were a lot of good last night. If you can't forecast the worst storm for several centuries three hours before it happens, what are you doing? Well, meteorologists are
a kind of resourceful lot. And the Met Office
made hay out of this by saying to government, we
need much bigger computers so we can increase the
resolution of our models, and make much better forecasts. This argument has
been used many times. Maybe not that long
ago in another case. So there we are. So I just want to
leave that hanging now for just a few minutes, and
move on to talk about Lorenz. Let's leave that
as a hanging issue. As Kerry very
nicely mentioned, Ed showed how, with a
very simple system, deterministic forecasts
would decorrelate. I don't know if this works. Yes, there we are. So here's an animation of
two trajectories of Lorenz '63, which look like
they're the same for a while, but then completely decorrelate. And as Kerry said, Ed's motivation was basically to show that simple statistical methods, like analog methods-- find an analog of the current month, and use what followed that analog as the forecast for next month's weather-- these were doomed to failure.
So I got involved in this type of work in the early 1980s. Joe Pedlosky had talked
about Jim Holton. I'd actually been
working with Jim Holton on stratospheric dynamics at
the University of Washington. And the vagaries of the
Met Office in those days were that Jim Holton wrote me
a nice letter of reference. So I immediately then,
as a result of that, got posted out of the
stratospheric branch to the long range forecasting
branch, about which I knew absolutely nothing. So I had to absorb
what was going on. And what was going
on in long range-- so this is 30 day
forecasting-- was that people had taken
on board Lorenz's message. And the models were
empirical models of the type Kerry mentioned, but
the output was probabilistic. For example, the models my
colleagues in that branch worked with would
predict probabilities for different types of
weather regimes, what they would call
Lamb weather types, but today, we
would call regimes. And these would be given to
utility companies, energy, water, gas, and so on in terms
of the next month's weather, as probabilities. So my job was to try to bring
numerical weather prediction models into this
milieu, if you like. And I was aware, for example,
of the work of Shukla, who is in the audience,
and again, Miyakoda, showing how maybe
numerical models could play a valuable role
in monthly forecasting. But the problem was to
get it into a state where it could be used, blended with
these probabilistic empirical models. So from that point of
view, it was obvious that what we had to
do was run ensembles, run from consecutive
analyses, if you like, 12-hour analyses, produce ensembles-- I think there were about nine members-- look at weather regimes
within those ensembles, and produce probabilities,
and then merge those into the statistical ones. So this was completely
non-controversial. Everybody said, this is
fine, this is obvious, this is what we should do. And in a journal, which
is no longer functioning, the Meteorological
Magazine, we described our first operational ensemble
forecast in November, 1985. So at the end of
the '80s, there was this real kind of brick wall
between the numerical weather prediction on time scales
of less than 10 days, and these probabilistic
methods, which combine empirical
statistical models and the emerging idea
of ensemble forecasting on the monthly timescale. And very little
interaction between them. But it became obvious to
me that this brick wall was very artificial, and didn't
actually make much sense. And Lorenz's model is
actually a very good way to illustrate the concept. So what we're
looking at here are, let's call them short range
forecasts from Lorenz '63, where we're not just
running a single trajectory, but a little ball, if you like,
or a little sphere, something like that, of
initial conditions. Now the top left is
something that you might say is fairly typical on
those time scales, which is that there is
actually very little divergence of trajectories. So this notion of
exponential divergence is not something that
happens all the time, for all the initial conditions. The top right is
one where you start to see some growth
of uncertainty, but it's still
kind of manageable. But then, and this
is the crucial point, there are initial conditions
where the butterfly effect really hits you hard, even
within this time scale where you think things
are deterministic. So that's the bottom figure. This is characteristic
of a nonlinear system. It is as simple as that. In a nonlinear system, the growth of initial perturbations depends on the state you start from.
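The same toy model shows the state dependence directly. Here is a small follow-on sketch (again purely illustrative, reusing the lorenz63 and integrate helpers from the snippet above): start a tight ball of perturbed initial conditions at two different points on the attractor and compare how quickly each ensemble spreads.

```python
# Illustrative follow-on sketch: a small "ball" of perturbed initial
# conditions grows at very different rates depending on where it starts.
# (Uses lorenz63/integrate defined in the previous sketch.)
import numpy as np

rng = np.random.default_rng(0)

def ensemble_spread(base_state, n_members=50, radius=1e-3, nsteps=400):
    """Integrate an ensemble of perturbed states; return initial and final spread."""
    members = base_state + radius * rng.standard_normal((n_members, 3))
    paths = np.array([integrate(m, nsteps=nsteps) for m in members])
    spread = paths.std(axis=0).mean(axis=1)   # mean standard deviation across members
    return spread[0], spread[-1]

# The two base states are arbitrary points near the attractor; the point is
# that growth over the same forecast interval is strongly state dependent.
for base in (np.array([-5.0, -6.0, 20.0]), np.array([0.5, 1.0, 25.0])):
    s0, s1 = ensemble_spread(base)
    print("start", base, "spread:", round(float(s0), 4), "->", round(float(s1), 3))
```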
So I've always felt this notion of a deterministic limit was a little bit of a misleading concept, and it sort of prevented the synthesis of Ed's ideas into the shorter time scales, where they would apply. Now this really is
a good example-- the bottom one is a very good
example of the October storm. And in fact, in
more recent years, we've rerun,
retrospectively, the October '87 storm with a modern
ensemble forecast from ECMWF, incidentally using
very high resolution models. So the things that
the Met Office thought were necessary to
kind of correct it. And what you actually see, from
the 50 so-called postage stamp maps, all starting from almost identical initial conditions, is that they had diverged phenomenally after two days. This is a completely
exceptional type of situation, where you get
almost any synoptic weather type you can think of. Here are two neighboring ones. This is over the UK here. So what was the
reaction to this? A lot of people said, this
is interesting theoretically, but completely useless, because
this is giving forecasters too much information. It's information overload. They'll never be able
to deal with that. And in fact, they said,
what we should do, if you're going to use this type of technique, is average these 50 forecasts together. Produce an ensemble mean. You can formally show, actually, that the ensemble mean over a large number of forecasts has a lower RMS error than the individual members. But it's pretty obvious
you're throwing the baby out with the bathwater. You're smoothing these 50 maps. You'll no longer have
a severe weather event. So this is a useless
idea, in my view. Rather, we need to synthesize
things in terms of probability. And this is a simple
sort of statistic you can get from these
50 members, which is a probability of
hurricane-force gusts on that morning of October the 16th. And these probabilities are
around, I believe, 30% or 40%. And given that in Hertfordshire,
Herefordshire, and Hampshire, hurricanes hardly ever
happen, 30% or 40% is a rather large number
by climatological standards. Where is Sevenoaks? Well, Sevenoaks is just in Kent. So somewhere down here, roughly. So it's actually on that swath.
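The contrast is easy to demonstrate with a toy calculation (the numbers below are invented, purely to illustrate the point): an ensemble in which a substantial minority of members produce hurricane-force gusts has a bland ensemble mean, but the exceedance probability flags the danger.

```python
# Toy illustration: ensemble mean versus probability of exceeding a threshold.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are 50 ensemble forecasts of peak gust speed (knots) at one town.
gusts = np.concatenate([
    rng.normal(35, 5, 32),   # most members: a breezy but unremarkable night
    rng.normal(70, 8, 18),   # a substantial minority: a violent storm
])

hurricane_force = 64.0  # knots, the conventional threshold

print("ensemble mean gust:", round(float(gusts.mean()), 1))    # roughly 47 kt: nothing alarming
print("P(gust >= hurricane force):",
      round(float((gusts >= hurricane_force).mean()), 2))      # roughly 0.3: a strong warning signal
```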
By the way, the other argument, from people who say you're taking away resources that should be used to increase
the resolution of the models, is also bogus. Because if anything, increasing
the resolution of the models is going to make this
divergence even sharper. It's going to expose this
instability even more. Get this to work. So we developed
ensemble forecasting. Eugenia Kalnay and
Zoltan Toth at NCEP had a kind of parallel
program going on, and we both became
operational in 1992. Now I've got some sort of
technical stuff here, which-- I am running out of
time-- but I want to mention in the
context of Ed and Jule. If you introduce butterflies,
literally-- not literally-- numerical butterflies
into the model by perturbing grid
points with noise, which is spatially uncorrelated, you
don't see the butterfly effect at all. All that happens is
the model's diffusion on those scales-- numerical
diffusion-- just kills the perturbation off. So you have to be
actually quite clever to introduce initial
perturbations, which are going to have the growth
characteristics that you want.
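A simple one-dimensional analogue makes the point (an idealised sketch added here, not anything from the operational system): apply the same explicit diffusion to grid-point white noise and to a long wave of equal amplitude, and only the noise is destroyed.

```python
# Idealised 1-D illustration: grid-scale noise is removed by diffusion, while
# a large-scale perturbation of the same amplitude survives almost untouched.
import numpy as np

n = 256
x = np.arange(n)
rng = np.random.default_rng(2)

white_noise = rng.standard_normal(n)                     # spatially uncorrelated "butterflies"
large_scale = np.sqrt(2.0) * np.sin(2 * np.pi * x / n)   # one long wave, same RMS amplitude

def diffuse(u, steps=200, nu_dt=0.2):
    """Apply simple explicit Laplacian diffusion on a periodic grid."""
    for _ in range(steps):
        u = u + nu_dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

for name, field in (("white noise", white_noise), ("large-scale wave", large_scale)):
    print(name, "RMS:", round(float(field.std()), 3), "->", round(float(diffuse(field).std()), 3))
# The grid-scale noise collapses towards zero; the long wave barely decays,
# which is why ensemble perturbations must project onto growing structures.
```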
Eugenia Kalnay and I pursued slightly different philosophies, and I wish I had more time to talk about that. But we focused on a thing
called singular vectors, which is actually very much
motivated by the work of Brian Farrell, who is a student
of Dick Lindzen, who's here. And Brian Farrell
actually did a lot of his work looking at the
Charney baroclinic instability problem, and analyzing finite
time growth of perturbations. And because these
are technically [INAUDIBLE] types
of problems, you can get, over finite
times, growth rates which vastly exceed the
long term exponential growth of the normal modes. Now the thing I
want to just mention is that I gave a talk about this
when Ed was in the audience, and he came up to me
afterwards, and said, this is really interesting,
and I've learned a lot from your lecture. And then some years later, I
was looking at his Lorenz paper in Tellus in 1985, and
he talks about pretty much the same thing. These singular vectors are
basically eigenvectors: if you have a dynamical operator A, they are the eigenvectors of the product matrix A^T A (A transpose times A).
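A tiny numerical illustration of that statement, and of the finite-time growth point above (a toy two-by-two propagator, invented purely for illustration): for a non-normal operator the leading singular value can be far larger than anything the eigenvalues suggest.

```python
# Toy illustration: singular vectors are eigenvectors of A^T A, and finite-time
# growth can vastly exceed normal-mode growth when A is non-normal.
import numpy as np

A = np.array([[0.9, 10.0],
              [0.0,  0.9]])       # toy non-normal propagator over one forecast interval

print("eigenvalue moduli:", np.abs(np.linalg.eigvals(A)))   # both 0.9: every normal mode decays

U, s, Vt = np.linalg.svd(A)
print("leading singular value:", s[0])     # about 10: an optimal perturbation grows tenfold
print("optimal initial perturbation:", Vt[0])

# The equivalence stated above: the right singular vectors diagonalise A^T A.
w, V = np.linalg.eigh(A.T @ A)
print("sqrt of largest eigenvalue of A^T A:", np.sqrt(w[-1]))  # equals the leading singular value
```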
And in this paper, he's talking about eigenvalues and eigenvectors of
this thing, giving you preferred configurations
in the error field. And one could choose only a
small number of these error fields from this calculation
for superposition. Holy cow, he's completely
trumped me. Actually, that's a typo. That should have been '65. Sorry, that's '65. So decades before. And the other area where we put a lot of work in, to represent model error, is stochastic parametrization.
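For readers unfamiliar with the idea, here is a schematic of the general concept (illustrative only; operational schemes such as ECMWF's SPPT are considerably more elaborate): each ensemble member multiplies its parametrized tendencies by a bounded random factor, so the ensemble samples model uncertainty as well as initial-condition uncertainty.

```python
# Schematic of a stochastically perturbed parametrization tendency (toy version).
import numpy as np

rng = np.random.default_rng(3)

def stochastic_tendency(deterministic_tendency, amplitude=0.5):
    """Multiply a parametrized tendency by (1 + amplitude * r), r a bounded random number."""
    r = float(np.clip(rng.standard_normal(), -2.0, 2.0))
    return (1.0 + amplitude * r) * deterministic_tendency

# Each ensemble member effectively runs a slightly different model.
heating_rate = 2.0e-5   # e.g. a convective heating tendency, in K per second
print([round(stochastic_tendency(heating_rate) * 1e5, 2) for _ in range(5)])
```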
And again, I gave a talk on this. Ed said, oh, that's
really interesting, I've learned a lot
from that talk. And then I look back
at his paper in '75-- I believe that the
ultimate climate models will be stochastic. I ran the numbers, it
will appear somewhere in the [INAUDIBLE]. So Ed was an amazing character. I'm conscious of the time. So ensemble forecasting,
I think we've finally broken this brick wall down. And it exists on pretty
much all time scales these days, from hours,
actually, through to decades. This is a nice example of
tropical cyclones, which again, like the Lorenz '63 ensembles,
you can get very predictable ones, semi-predictable
ones, and ones-- that actually got
transposed by the computer-- but that's a cyclone
that really doesn't know which way it's going. Conscious about the time. Just want to finish
with a couple of slides. I think the future-- this is very encouraging
from the point of view of societal impact now. And one of the
things I really think will be important in the future
is how ensemble forecasting can really now provide
objective criteria to decide, for example, whether
emergency disaster preparedness agencies can start
to be proactive. This is high [? end, ?] that
was the example, which I showed, which looked very predictable. And you think, well,
why aren't they going in there in advance
with their emergency shelters, and food, and so on. And the answer, of
course, is that if it didn't happen, if the
event didn't happen, it's very costly to go out. But with ensemble forecasting, you can now define objective, decision-theoretic criteria: if you know the cost of preventative action, if you know how much loss you're saving by taking preventative action, and crucially, if you know the probability that the event will occur, then you can form an objective criterion for taking that proactive action.
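That criterion is the classic cost-loss rule, and it is simple enough to write down (a textbook sketch, not the speaker's specific decision system): act whenever the forecast probability of the event exceeds the ratio of the cost of acting to the loss avoided.

```python
# Textbook cost-loss decision rule: expected loss is minimised by acting
# whenever the event probability p exceeds C / L.
def should_act(probability, cost_of_action, loss_if_unprotected):
    return probability > cost_of_action / loss_if_unprotected

# Example with invented numbers: pre-positioning shelters costs 1 unit,
# an unprotected landfall costs 10, so acting pays once p exceeds 0.1.
print(should_act(probability=0.35, cost_of_action=1.0, loss_if_unprotected=10.0))  # True
print(should_act(probability=0.05, cost_of_action=1.0, loss_if_unprotected=10.0))  # False
```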
And I'm sure we'll see a lot more of that in the future. So I'll finish with this slide. Just to say that I think after
quite some time of largely independent
development, the work of these two giants
of meteorology is really now
seamlessly intertwined. And it's for the benefit,
not only of our science, but society more generally. Thank you. So Dick Lindzen is on now. We can take one question
as Dick comes up. Anybody who would like
to ask a question? Gray? Back in the '80s, a lot of the
error in medium range weather forecast was really still
dominated by systematic error. And I wonder, as the
computers have gotten bigger, is that still the case, or has
this systematic bias gone away? The large scale systematic bias
has really gone down a lot. But of course, there are still
important systematic errors on smaller time scales. So for example, just getting
intense rainfall amounts correctly simulated-- models tend to
somewhat underdo that. So actually, having, to some
extent, won the ensemble war-- and I used to not be-- but having won the
ensemble war, I'm actually now a great
advocate that we should be putting resources into
increasing model resolution. So to get those last few
systematic errors right down.