DEVAVRAT SHAH: I hope you're
caffeinated and ready to go for the last and most
exciting panel session. Yes, so-- Yes, the
last one, please. OK, so we've got five
distinguished panelists who will make remarks. 12 minutes each-- that'll
be roughly an hour. And after that, the
panel will open up and we'll be joined by
all of the five panelists and our plenary speaker Emilio. So with that, let
me get started. I'm going to go in
the alphabetical order of the last names
of the panelists, starting with
Hamsa Balakrishnan. So Hamsa is a professor
of AeroAstro at MIT. One of the things is, we were grad students together at Stanford. And so it's a thrill to introduce her. She was at NASA Ames Research Center after her PhD from Stanford. She did her undergrad at IIT Madras. She works on-- well, she'll
just talk about that. Let me sort of quickly tell
you about a few of the awards she received. Of course, the NSF CAREER, the inaugural CNA Award in Operational Analysis, as well as the Donald P. Eckman Award. Welcome, Hamsa. [APPLAUSE] HAMSA BALAKRISHNAN: Thank you. It's an honor to be here. So thank you, Devavrat and John, for the invitation. It is also truly an honor
to be in a session in honor of Sanjoy Mitter. So I'm going to start with
a very quick anecdote on how I first met Sanjoy. I had never been to
MIT when I interviewed. And some of you may know
it, but it's a little bit of an intimidating place. And I knew of Sanjoy
from all his papers. And you come in and, you know,
it's a two-day interview. The first day is
just rapid-fire, meeting after
meeting after talk. And the one thing
I vividly remember is dinner on the
first day with Sanjoy. And, you know, going in,
you would think, you know, I really thought
he was going to be one of the most intimidating
people I would meet. And yet, I just still appreciate
just how, you know, kind he was and how, you know, welcoming
I felt. And I really, you know, everything you hear
about how tough a place MIT is, you know, that first day really
helped me understand, you know, how nice it could be. And Sanjoy really
played a key role. So I'm very
appreciative of that. And, you know, I try to
do a little bit of that every time we meet new
people now, you know, to make them feel
welcome at MIT. So Devavrat mentioned, you know, when John said, you know, the Transition Session, it's always a question of what do we do. And then Devavrat sent instructions. And he said, go back. And I wasn't quite sure how far back to go. I realized yesterday, John, that really feedback control started in ancient Greece. So I could have
gone back that far. But being in the
AeroAstro department and talking about
transportation, I figured I wouldn't
go quite that far, just more like 100 years,
to the Wright brothers, you know, more than 100 years. And this is a talk that
Wilbur Wright gave in 1901. So this was two years before
the first flight, you know, of an airplane. And in the speech, he
said, "The difficulties which obstruct the
pathway to success in flying-machine construction
are of three general classes." It is what we teach all
our aerospace engineering undergrads. So Alan probably remembers
that from, you know, his time as an undergrad. The first is material. So how do you
construct your wings? And your wings need
to be designed such that you can generate lift. So, you know, they're
sustaining wings. The second has to do with
the engine and propulsion, so those which relate to the
generation and application of the power required to drive
the machine through the air. And the third was those relating
to the balancing and steering of the machine after it
is actually in flight. And then he went on to say,
of these difficulties, two are already, to a certain extent, solved. Right? So now the question is, which two? So he says, men already know how to construct wings, or airplanes, which when driven through the air at, you know, sufficient speed, can sustain themselves. You can generate the lift. It can carry the engine,
the engineer as well. We also know how to
build the engines and the screws of sufficient
lightness and power. However, "The inability
to balance and steer still confronts students of
the flying problem. When this one feature has been
worked out, the age of flying will have arrived, for
all other difficulties are of minor importance." You know, going back to
everything that LIDS does or at least starting
with the control, you know, it's
certainly involved with the details of
air transportation because the last
thing that they solved was how do you
actually control this. And, you know, it goes
back to a lot of the stuff that Emilio talked about
earlier today, as well. So two years later, they
flew and they actually did manage to control it. But, you know, human ingenuity
is such that you always want to do better. Controlling the
airplane was still hard. So about 10 years
later, the question was, can you actually make
it easier for the pilot? And so in the 1900s, in the early-to-middle part of that decade, Elmer Sperry invented the gyrocompass. And his son Lawrence Sperry decided that you could use that to actually build the first autopilot. And he was 20 years old. He built an autopilot. And he won the Aircraft
Safety Competition in France. And the idea was
actually quite simple. So you have a gyroscope
which is a heading indicator, you combine it with
an attitude indicator, and you can actually get the
aircraft to fly, you know, follow a compass, a
straight and level course. It makes it much
easier for the pilot. And, of course,
I have to say, he was also the inventor of
the flashy demo, right? Because it wasn't enough to show that it could actually work. So in the competition, the first time, you can see in that picture, you know, look, ma, no hands. But he also asked
his French mechanic to go out and stand
on the wings so that everybody knew nobody was
actually handling the aircraft. Right? But this is sort of where, you know, within a period of 10 years, you already have some level of LIDS-type thinking coming in, on how do you actually automate that at a very simple control level.
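To make that concrete, here is a toy sketch of the idea in Python -- not Sperry's actual mechanism; the dynamics, gains, and numbers are invented purely for illustration -- where proportional feedback on a gyro heading reading and an attitude reading holds a straight and level course:

    import numpy as np

    # Toy "wing-leveler" in the spirit of the Sperry stabilizer: proportional
    # feedback on heading (from a gyrocompass) and pitch (from an attitude
    # indicator). Gains, dynamics, and units are invented for illustration.
    def simulate(heading_ref=0.0, pitch_ref=0.0, k_psi=0.8, k_theta=0.8,
                 steps=500, dt=0.05):
        psi, theta = 0.3, -0.1          # initial heading/pitch errors (rad)
        history = []
        for _ in range(steps):
            rudder = -k_psi * (psi - heading_ref)      # steer toward the compass course
            elevator = -k_theta * (theta - pitch_ref)  # level the aircraft
            # crude first-order response to the control surfaces, plus a small gust
            psi += dt * (rudder + 0.02 * np.random.randn())
            theta += dt * (elevator + 0.02 * np.random.randn())
            history.append((psi, theta))
        return history

    final_psi, final_theta = simulate()[-1]
    print(f"final heading error {final_psi:+.3f} rad, pitch error {final_theta:+.3f} rad")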
I'm going to jump forward to about 30 years. And we talked about
this yesterday. So Project Whirlwind
in 1944, the US Navy approached Gordon
Brown and said they wanted to build a machine
that would actually help improve the way aircraft
are designed and aircraft, you know, training
occurs for pilots. And so the idea was that
you would have a computer and you would have
a joystick and servo mechanisms that would
connect to, you know, your control surfaces. And you would use
that so that you would be able to train pilots. We know this-- of
course, you know, John said the Project
Whirlwind was, you know, what we remember it
for is for, you know, the development of the
magnetic core memory. That's true. But it's also the
first flight simulator. Right? And so if you
think about it, you know, that sort of the
path of going towards-- you know, LIDS was
very early in coming up with ways of looking at how
you controlled, you know, one element of your system,
so, you know, one aircraft, and how do you do that. But by this time, you know, in
the '40s, '50s, and early '60s as control was
developing, air travel was getting quite popular, as well. So that's also around the
time that you actually had the problem of congestion
because there was enough people who wanted to fly. Now, of course,
you had, you know, this headline from
1968 which says the "FAA Urges the US to
Construct 800 Airports to Ease Congestion." We don't quite have 800
airports with a lot of traffic right now. But then, in terms of the methodologies and the techniques that were needed to support the analysis of not just one aircraft, but a lot of aircraft, the collective behavior of these systems, and the management of these systems, there was very active work at LIDS, you know, Mike Athans, Sanjoy, in combination, actually, with the Flight Transportation Lab and the OR Center with Amedeo Odoni and others. MIT Lincoln Lab was
involved in this, looking at things like the cockpit
air traffic situation display, so that, you know, there
was situational awareness for pilots. And also things
that, you know, have become much more common
areas of research since then, such as how do
you optimize these flows? How do you merge aircraft more
efficiently in terminal areas? How do you space
them, and so on? And interestingly
enough, I think I've worked on these problems. Emilio has worked
on these problems when he was in AeroAstro. But really the birth
of these problems, a lot of these
approaches, was at LIDS as early as in the 1970s. So there are sort
of these two themes that have been
consistently going on. So one that has to
do with, you know, a single aircraft or a single
vehicle, what do you do? The other that has
to do with the system when there are multiple
interacting agents. And, of course, the direction
in which a lot of this was going is, you know, in the direction
of automation and autonomy. So this is a headline that
we all dream of, right? So "Robot-Piloted Plane Makes a
Safe Crossing of the Atlantic," from Canada to England,
everything being fully automatic. And, you know, this seems like
a really neat demo to have now. Unfortunately, it's
been done before. This is a New York Times front
page from September 23, 1947. All right? You know, one autonomous flight
of an aircraft has been done. So why are we still
working on these problems? Clearly, you know, we don't
see autonomous cars everywhere. We don't see autonomous
planes everywhere. So there's two issues. So the planes, for now, I
mean, when I think of a drone, it doesn't look
like that, right? So there's still the question
of how do we do this reliably and at scale and repeatedly
in a way that it can actually interact with everything else
that's happening, you know, the human-piloted aircraft
and other autonomous aircraft. And they look like this, right? And this is, truly,
I mean, we've talked about the
commoditization of data. But there's also clearly a
commoditization of aircraft that's happening simultaneously. And so the challenges
again, now, have a lot more to
do with, how do you, you know, enable what this
revolution can do where you have more drones, consumer
drones being sold in a month than every registered
commercial-manned aircraft on the planet, right? So the scale is different by orders of magnitude. So a lot of the challenges
now are going to, you know, this
issue of scalability and how do you do large
systems with multiple vehicles. And, you know, Munther talked earlier today and mentioned the impact of cascading failures. I should point out
that we're actually seeing evidence of this. So since we live
in the Northeast and a lot of people in
this room fly around, I want to just
present some data. So you can look, historically in the US, at how many days you have in a year, or in a given three-month period or season, when more than 1,000 flights are canceled. And it used to be
one of those things that we would say the 2015
winter was really bad, right? And so that would
be the one time there would be large
numbers of cancellations over several days. But we've certainly gotten to
a point where this happens, you know, pretty consistent. All right? We have massive delays,
these are system-wide. And it keeps happening
all the while.
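Just to make that metric concrete, here is roughly how you would tabulate it; this is a hypothetical sketch only, since the file name and column names below are invented and not any actual FAA or BTS schema:

    import pandas as pd

    # Hypothetical per-flight records; columns are invented for illustration.
    # Each row has a 'date' and a boolean 'cancelled' flag.
    df = pd.read_csv("us_flights.csv", parse_dates=["date"])
    daily = df.groupby("date")["cancelled"].sum()

    # Days with more than 1,000 cancellations, tallied per year and per quarter.
    bad_days = daily[daily > 1000]
    print(bad_days.groupby(bad_days.index.year).size())            # per year
    print(bad_days.groupby(bad_days.index.to_period("Q")).size())  # per 3-month period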
And what we want to do is be able to plan the system better, with manned aircraft, with passenger flights, with cargo flights, but also do it with the large number of unmanned aircraft that we would expect to see. And so going very quickly toward
the future that is expected, this is NASA's view
of what the skies are going to look like in 2030. This doesn't actually include
any of those consumer drones. These are pretty much enterprise
drones operating at altitude. The blue aircraft are
the manned aircraft. The red aircraft are
unmanned aircraft doing all sorts of things
like surveillance missions out here and everything else. What we have right now is the
ability to optimize, right? So we've done a lot of
work on the computing side. How do you actually leverage
cloud computing resources to do real-time planning
on this, just optimization? But that's not going
to be the answer because we need to be
able to do this robustly, under uncertainty. We need to be able
to do this, you know, with multiple players. So issues of fairness,
issues of who are we serving, who are we not serving? Which communities
are being left out? How do we look at behavior? How do we look at security? How do we look at safety? You know, and so all
of those challenges that we've, over the course
of the past two days, been talking about, it's time
I feel for the next transition, right? So we did that for one
aircraft, and we moved there. And now we have the system-wide
problems to transition to. So with that, thank you. [APPLAUSE] DEVAVRAT SHAH: All right. Let me quickly introduce
our next panelist. Going in alphabetical order, the
next panelist is Richard Barry. Richard was born in LA. He's professionally focused
on systems and architecture, new technologies,
education, and philanthropy. Earlier in his career, he was an Assistant Professor of Electrical Engineering and Computer Science at George Washington University. He was also a member
and CTO of Sycamore Networks. He has a long
history in industry. And I'm trying to quickly sort of focus on a few of the things, including, I just forgot to tell you, he's also a LIDS alum, with a PhD from 1993. So please welcome Rick. [APPLAUSE] RICHARD BARRY: Thanks,
I'm glad to be here. You know, it's unusual for me to
be in this kind of environment again. And I've found, I've
mentioned to a few of you, that I feel like certain neurons
are re-firing that, you know, weren't firing before. I think David's talk
especially lit up a whole section of my brain. And unfortunately,
that section then said your slides are
all wrong, you know? No equations, no mesmerizing
blockchains moving around. But, you know, so I'll
do the best that I can. So I started in LIDS like 1988. And so that's
about 30 years ago. And at the time when I
started, Sanjoy was director. And, you know, in my view, LIDS
had sort of been here forever, right? And now that I sit here and
say, well, 80 years and 30, that means if you plotted
this between 0 and 1, I'm like at right
around 0.6, which is a different perspective. I don't know what that means. But, you know, it's
kind of an observation. And I thought, how am I
going to sort of present 30 years of optical
networking in, you know, two, three minutes when we get there? And I thought, well, maybe I'll
make some nice graph or something. People like graphs. And you can see things
happening with time. But I realized it'd probably
have to be, like, a log scale, and that was a little
depressing to me. So instead we'll just go
with a few points here. But just thank you to Sanjoy for the leadership and also for humoring me and doing a foundations of quantum theory at the time when I was there. And Dimitri was my academic advisor. And Pierre, who I'm
not sure is here today, was my thesis advisor. And Bob was a reader there. So let's see here. Which one? The green. Ah, that's why it says green. So, you know, my time when
I was living all this stuff day to day, which I don't live
it anymore, was a while ago. And so just a few
acknowledgments of people that I called
to help bring me a little bit up to speed here. Good to have friends. So, you know, in
terms of thinking-- you know, so
roughly, like I said, since I started
working on this, which was when I started as a student,
a PhD student, you know, to now is about 30 years or so. And so being a little
loose with the time, I divided it up into, like,
the early years and then what's going on now. And I think there is a little
bit of an interesting story of what happened here. So, you know, at a very
high level, it's simple, is that you had a
new application, you had an old technology
that wasn't really fit to that
application, and you had new technology which was
available, at the time, to solve that, right? Of course, you're dealing
with infrastructure. You're dealing with things
that go in the ground. It's slow. You need a big impetus
to make that change. And there was an ecosystem
that existed at that time to do that, both in terms
of deregulation, which spurred a lot of innovation in
different companies, the rise of the ISPs. And it's different when you have
a customer going to a carrier and saying, you know,
give me the equivalent of a wavelength of
bandwidth versus the carrier internally deciding they
need to do something with that wavelength. And you had a lot of
money going into it. So you had like the bond market
willing to fund new networks and do all sorts of stuff. So you had a lot of wind at
your back to do these things. If you look at sort of the
details, it was the rise of IP, right? And not just IP, but the rise
of high-speed routers, where the speed of the
port of the router was approaching the speed
that you would carry on one wavelength on the optical fiber. And so the old technology
of multiplexing lots of little things together, voice
calls or private line calls didn't make sense
when you wanted to carry one thing on a
wavelength or maybe four things, even, on a wavelength. At the same time,
optical technology was literally exploding. At the time, you had wavelength
division multiplexing. You had optical amplifiers which
would amplify all those signals at once, as opposed to having
many repeaters for each signal. So you could deploy
additional capacity by just changing the endpoints
and not going into the middle. And the economics
during this time were amazing in that the
innovation of the optics was happening faster than the
innovation of the electronics. Not all the
electronics, of course, you needed that
at the endpoints, but of the router technology. So even though the routers were getting faster, the optics was getting faster still. And the economics were such
that people wanted, needed another layer in
between the routers and the physical network
itself, besides the fact that you had to carry legacy
stuff that still was not carried on IP networks. And, in fact, just anecdotally,
more than anecdotally, at the time, it was more economical, if somebody bought an eight-wavelength system, which they were at the beginning, and they had only filled half of it up, for them to then go and buy a 20- or 40-wavelength system before filling up the rest of that thing, because it was so much more economical to get your spectral efficiency with the new system than to rely on your old system. And so things were just
moving very, very quickly. And if you look at what happened
sort of in the end, then, you know, the standards,
which we were not, the economics beat
the standards, and a new layer
emerged in the network to replace SONET
which had new framing. It had new forward
error correction, which actually was sort of
maybe an underappreciated at that time aspect
of this, the fact that you could buy line
speed for Reed-Solomon error correction starting
in about 2000. So that was actually
after we started, right? And this thing emerged. Now, if you look at
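For a sense of what that FEC buys you: the first-generation optical-transport FEC was a Reed-Solomon code, RS(255,239), roughly 7% overhead with up to 8 corrupted bytes correctable per codeword. The toy sketch below uses the third-party Python reedsolo package (an assumption on my part, and obviously nothing like a line-rate hardware implementation):

    from reedsolo import RSCodec   # pip install reedsolo (assumed third-party package)

    # RS(255, 239): 239 data bytes plus 16 parity bytes per codeword,
    # which can correct up to 8 corrupted bytes anywhere in the codeword.
    rsc = RSCodec(nsym=16, nsize=255)

    codeword = rsc.encode(bytes(range(239)))
    corrupted = bytearray(codeword)
    for i in (3, 50, 77, 120, 200):      # flip a handful of bytes
        corrupted[i] ^= 0xFF

    # In recent reedsolo versions, decode() returns (message, full codeword, error positions).
    decoded = rsc.decode(bytes(corrupted))[0]
    print(decoded == bytes(range(239)))  # True: the errors were corrected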
Now, if you look at what's happening lately, coherent technology and some other things that were done at Lincoln years ago have become practical and commercial at 100 gigabits per second. And a lot of that has been done by some people sitting right here, and Pierre was involved, componentized to be able to just go right on a line card. And so in some sense,
the IP using WDM directly has caught up. So interestingly enough, if you go back to 1988 when I started, that was an architecture that was being debated, right? You had sort of three camps, if you would go out there and listen. There was the
my standard network. You have to follow
all these things, otherwise you can't get in it. And we'll just make SONET
a little bit better. There was sort of our
camp that says, no, you know, take these wavelengths,
switch them around. Maybe do some multiplexing in
this new way, not in SONET way, in between. And then you had
what, in the end, was a forward-looking
camp saying, no, no, no, you
just have routers. And you just put
WDM on the routers. And that's kind of
where we are today. There's still optical
switching in the network. There is all-optical switching in the network right now, especially in the metro area. You know, but that
didn't win, at the time. That lost at the time. It took a number of
years for that to happen. It was another cycle. And it was partly just due
to the different development cycles, the different
technology, and the economics that that just really
wasn't ready yet. So one of the questions
that was posed to me was, you know, how did LIDS
help in this transition and, you know, at
a high level, I guess I would say what were
the things that I take away from the LIDS education that
lasted through the years? And, you know, for me, I
think it was maybe the ability to work from a blank slate would
be one of the major components, right? You're trying to
do something new. Here's, you know, a piece of
paper, you just go and do it. The deep knowledge,
obviously, helped because you have to sort of
have some sort of guidance as to where to go, right? And, you know, maybe it's
confidence or ignorance to go and think that you can
do this in a new environment. What would I-- I
wouldn't change anything. Somebody's question is what
else would have helped you? So I don't know if I had to give
up softball time or something, for that, you know? I'd say that my experience
was that, you know, I can kind of summarize it as,
you know, LIDS is very deep. It was very, you know,
a lot of fundamentals. And it was, you know--
it's an essay question. And a lot of the
engineering that I faced out there was multiple choice, right? And so, you know, the ability
to make good, quick decisions, rather than great
decisions that take longer was sort of a skill I
had to sort of develop later, right? You know, other
things, too, I think, you know, maybe not
part of education, but the importance
of the ecosystem and the multidisciplinary
aspects, which appear to be
very strong now, of what it takes to
commercialize something. You know, that what
you're doing, your piece, your new piece is only part
of the whole puzzle of all the people that work within your
company trying to do something new, the people you're
trying to sell it to, and the people you're buying
things from to make it. And you have to interact
with all those people. I won't go over all
the points here. But-- You know, in terms of
what's going on now and the future, in terms
of optical networking, I think there is a, you know-- David's right,
the infrastructure constantly changes. Right now, we're sort
of mid-cycle in that. So people are innovating
and going faster and faster, you know, speeds and feeds. Probably, we're mid-cycle, though, in terms of a new architecture and moving to something new and fundamental. There's still stuff
going on in networking. I don't see, you
know, a lot of it, but just a little
snapshot of what I have. It seems like, you know,
we've flattened the transport, we've flattened the hardware. You know, but we've expanded
the software layers, right? And if you look at AWS and if
you look at managed service providers, there's a
huge trend to outsource your networking, right? So if you're a commercial
entity and you're trying to sell to
somebody, you may be trying to sell to an entity
that doesn't have as much, really, a networking
department anymore, right? Because Amazon is
doing it for them. So, you know, and if you
look at Amazon's [INAUDIBLE], of course, they build networks
and they have data centers and so forth, right? But, you know, part of their job
is to deploy lots of networks, not one network, right? There's thousands and thousands
and thousands of them. And sort of the traditional
problem that I learned here was, you know, flows sharing
a resource as opposed to many networks, many software
layers sharing a resource. That's not a new thing, right? Infrastructure has
always kind of done that. But it sort of maybe
exists at a huge scale now that didn't exist before. And so if you're going
to be able to do that, that's why you see
so much work going on commercially right now
with automating it, with monitoring it,
with analyzing it. Because, you know, you can't
manage what you can't measure. And then just the last things. Don't have enough time
to talk about videos, but coming from LA and
the fires, you know, just a little-- There's, you know, an
emerging layer here of AI between the cameras
and the people, right? And the fire in LA, the recent
one by the Getty, you know, a tree branch fell
on the power line. And besides the fact that that
branch should have been pruned, besides the fact that the power
line should be underground, the firefighter
learned about it when 911 was called by
somebody seeing the flames after the
fire had already spread. And there's really no
technological reason, at this point, why we can
not have early detection to solve that problem. Thank you. [APPLAUSE] DEVARAT SHAH: All
right, continuing along. Our third panelist
is Shashi Borade. So Shashi graduated
from IIT Bombay in 2002 with a Silver Medal. And that's my alma mater, so I'm super thrilled to be introducing him. He received his PhD here from
LIDS working with Lizhong and Bob in information theory. He's one of those individuals who would not write this on his bio, but since I was around and observing: he got a really exciting academic job offer at one of the top universities. And instead, he decided to go to New York because New York is really exciting. And I was wondering, was that the right choice? But maybe it was a great choice. He spent seven years at D. E. Shaw and did lots of exciting things in that world, which I won't sort of go through. Currently, he co-leads a group at Engineers Gate, a hedge fund, where he has been since 2015. All right. [APPLAUSE] SHASHI BORADE: Thanks, Devavrat. Thank you, John,
for inviting me. This is a very
humbling experience to be among the superstars. And so many of the
mentors and heroes that I continue to look
up to are in this room, starting from Bob to Dave
and Professor Munther, all three of them were
on my thesis committee, to Emre, Lizhong, Devavrat. And what is really amazing about these people is, besides being the intellectual powerhouses that they are, they have built these amazing products that we can touch or text on or, in a recent case, wear or run in. So that is something. So starting off, my first LIDS exposure happened before I knew of LIDS, through what many of us have shared: my first EE class was Willsky and Oppenheim's Signals and Systems. And until that class, I used
to think electrical engineering is electric towers,
power generators, all these transmission lines. And I was sort of happy
to know that I did not have to climb on those
towers or things like that. That's a true story. So that was nice. The second exposure was probably
the most fortunate thing to have happened to me. I got to work with Emre in my
clicker here, OK. So I got to work on some really
cool information theory stuff. And Emre was, by far, the
smartest person I had met. And more importantly, he's the
kindest person I'll ever meet. And from then on, I sort of
decided that, when in doubt, think what Emre would do or did. In that case, he went to MIT,
to LIDS and did PhD with Bob. Emre was very kind
to also connect me to Bob who wasn't taking
full-time students anymore. But he was happy to [INAUDIBLE]. And I came in. And Lizhong had just started. And I became his first student
and Bob's second last student. And as you may know,
Lizhong is a student of David Tse who was
a student of Bob. So if you think about
the academic tree, I became the academic
uncle of my academic dad. So-- [LAUGHING] So a lot of fun
for 6 years in this directed acyclic graph
situation with the loop. In one case being the first student, and in the other case sort of the last, it was very interesting to have those perspectives. Got to interact a lot
with Professor Mitter, learned a lot from his breadth. And let's see, the
clicker is this, right? So people often
ask, what is common, what can be common
in LIDS and finance? And perhaps the reverse question
is easier, what is not common? There is a lot in common. Let's start with
the same giants. We both share some of
the same people who have done amazing, founding work. Shannon himself tinkered with
a lot of investment ideas. Apparently, he gave a
lecture in this class that people took notes on. But unfortunately, I don't
think it was published ever. But he came up with some
things and apparently had developed a machine to predict
the next stock price and so on. The Kelly criterion, again, Kelly being a very much LIDS-like person. And Berlekamp, being the second student of Bob. Some people may
know this, but he is the person who sort of turned
Medallion, perhaps the greatest hedge fund, around while
it was not doing well in its early years
and sort of really made it into a really consistent return generator. And then sold it back to Jim Simons. And Tom Cover, as we know, had
a bunch of interesting work on universal portfolios. And here's the interesting part
that many of you may not know, Professor Forney had done some
fairly interesting things, also in early 2000s in finance. And he has a patent on it. And later, many
years later, people ended up rediscovering
that, and so on, while not knowing that this
was rediscovered long back. So you can say
that two LIDS alums have created one of the best
generating returns ever. And I say this knowing
that if you backtrack what their strategy was, it
would have done amazing things. And I was talking to him
outside at lunch today. And it turns out he
came up with the idea without ever looking at data. "I just sort of intuited it. I just come up
with the formula." And it's just a genius. It's hard to imagine
that kind of intuition from someone who was
not even in finance. So that's LIDS alumni,
so no pressure. That's me with Elwyn
Berlekamp at a Shannon event a few years back. And I was basically trying
to tell him, like, dude, we went to the same lab. We had the same advisor. You were his second student. I am his second last student. You worked on error exponent,
I worked on error exponents. Like, we're basically
grad school buddies. So why don't you
tell me your secret and I won't tell it to anyone. And-- [LAUGHING] You probably know where it went. It didn't go anywhere. So a little disappointed. But I was OK. I still keep moving. Because maybe I got some
source from the mothership. And that's Marie and
Bob on their visit to our alma mater, IIT Bombay. And indeed, there is
off that is very useful. So let's see what is it that
still helps and is common. That is easier to see. So one of the basic
things that is still useful in this problem as
well as LIDS-like problems is here, again, we
are trying to build theory for some
inherently really messy, complex situation. And you can't understand
messy, complex situation as it is, so you to try
to build smaller, simpler models to understand maybe an
aspect of it or some insight here or there. And you build up on these
insights one at a time. And hopefully, you can get a
picture of the elephant, maybe at least some parts
of the elephant. And Bob's formula
is very useful. Again, the art is, of
course, in building these nice models that are
insightful and yet tractable. And the other aspect
I find very useful is the value of a
multidisciplinary mindset, which, by definition, LIDS is. Some of the ideas
for us can come from information theory,
statistics, the usual suspects, of course, optimization. But often, they can also come
from psychology or sociology or economics or
sometimes even fiction. And that is really
interesting and fun. And another thing that
I really find useful is, what Bob told me
early on in my PhD is often it's good to have
the Shannon style, which is have multiple problems
floating in your head, and you can sort of choose what you get excited about when. And just don't sort of
keep doing one thing because you may get
bored or frustrated. And that is extremely
useful here, as well. Like, come up with
a bunch of things to work on and then some of
them may work, and most of them don't work. But still, you
will have something to look forward to and keep
going, making progress. And probably the
most important thing that I find valuable that I
learned at LIDS is probably something many of
you relate to, is, after I came here, I often found myself being the dumbest person in the room. And more curiously or problematically, I started liking that fact. So I would start finding more rooms where I am the dumbest person in the room. And that was really fun. There was a group meeting
that Devarat, Lizhong, and Professor Mitter's
group had that was really exciting and so on. So this kind of masochist
behavior I blame to LIDS and I'm very thankful for that. It has proved very useful. Let's see, what is next. So three slides,
I stuck to that, except if we don't
count the header. Let's see, what did
I wish I did more? And this is in the
spirit of adding more presidential debate-like
nature to this debate. Because John and Devarat told us
make it a little controversial or something. So I'll try to be a little bit. And this is more in
the spirit of what I wish I did do more
than what LIDS did. So it's more talking
about my narrow focus than blaming it on anyone. The one thing I wish
I did more was I wish I played a little more with
actual data or more experiments of some sort. Like, maybe just
take an oscilloscope and see if the Gaussian noise
is indeed Gaussian or not. That would have been fun. And there were things
to do that around me. There were labs, there
were people doing it. And I didn't do it. So that's bad. Second, I wish I had learned
a bit more of system design and learned a bit
of more programming that Bob said is
apparently not good. [LAUGHING] So I had written
50 lines of code by the time I finished PhD. It was not smart. So if given a chance
to do my PhD again and if Bob and Lizhong
want another student, I'll do this differently
this time, a little. So let's see. What are the big
differences in our world and a lot of the statistics
research at LIDS that happens? Our data is really,
really dirty. And a lot of the great
things that work elsewhere don't work very well for us. Our SNR is more
10 to the minus 4, which is a percent of a percent. Warren and Bernie have their
percent of percent they love. We have our percent
of percent we love. And they're both
very problematic. So let's see. People often talk
about big data. But for us, the real
problem is often small data. Because life is often
non-stationary for humans and societies and
whatnot, markets. And the times, they are a-changin', as Bob Dylan told us. So we are not in
this data pool where we can do really fancy, train
the machine to play chess by creating infinite
artificial data. We can't because we can't
create artificial market data. People do what they do. That creates
problems that are not as much addressed, so in
the small-data situation, at low SNR. So the SNR is roughly at the same level -- the only other field that has that SNR is astronomy, I think. Everything else, like
speech recognition, images, that is much better. So anyways, non-stationarity
is a problem. And physics and mechanics
and how machines we create don't change, but people change. That means we have
to keep learning with very limited or
time-varying environments. That's hard. Shorter research cycle, which
can be fun or frustrating based on your attitude. We don't often solve the problem
fully, but get the 80-20 rule. Figure it out, move on, maybe
revisit it after some time. And best work is not often
made public, unfortunately. Again, here, I think
this is Bob's fault. Because he keeps
telling never publish. Don't publish. Don't publish. So we don't publish. We don't publish. [LAUGHING] This is getting better. There's some better
stuff coming out. There's some shared
open-source software that is commonly used in Python
that has come out of finance and so on. So I'm more hopeful on this
than some of the general tools are still shareable, if not
the secret sauces exactly. So I'm more hopeful on that. It's coming quickly. Not looking ahead,
because looking ahead means as if I know what
is going to happen, it's more of a wish list. Of what I would like to happen. I would like to see more high
noise, small data research. And a lot was already
presented to me by Guy and Constantine that
seems very interesting. And some of the research
in cross-validation leaves a lot to
be desired for us. It's great, but it doesn't work
for the small data, high noise regime. Because [INAUDIBLE] don't
change, but life changes. You know, there's no training
data, there's just markets. And we'd really like to have some more of a detector which sort of says, this is an overfit, this is not an overfit. And it's not there yet.
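To see why that detector matters in a low-SNR, small-data world, here is a toy simulation (all numbers invented): search enough pure-noise candidate signals on a short history and the best one looks great in-sample and reverts to nothing out-of-sample, which is exactly the overfitting you would want flagged.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_signals = 250, 1000                 # ~one year of daily data, many candidates
    returns = rng.standard_normal(2 * T)     # market with essentially no predictability
    signals = rng.standard_normal((n_signals, 2 * T))   # candidate signals: pure noise

    # Pick the best signal on the first half (in-sample)...
    in_corr = signals[:, :T] @ returns[:T] / T
    best = int(np.argmax(in_corr))

    # ...and check it on the second half (out-of-sample).
    out_corr = signals[best, T:] @ returns[T:] / T
    print(f"in-sample corr of the 'best' signal:   {in_corr[best]:+.3f}")
    print(f"out-of-sample corr of the same signal: {out_corr:+.3f}")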
Causation versus correlation -- things which allow you to do things causally and make sure it is causal, so that it's not overfit. It's, again, a confusion that is often not clear. I'll leave this. I'm running out of time.
understand people's motivations better and predict
better how they will act. And while we are wishing,
why not shoot for the moon? Maybe there is a grand unified
theory that John talked about and Munther talked about a little bit, based on, like, all the LIDS stuff. And maybe we'll
figure it out together with some people in this room. Thanks. [APPLAUSE] DEVAVRAT SHAH: All right. Next is-- let's see, R comes before S. OK. Next is Tom Richardson. So I introduced Tom earlier. So I think I will sort of give him more time and let him give us his remarks. THOMAS RICHARDSON: Thanks, Devavrat. The instructions
for the meeting said that we shouldn't interpret
transitions loosely or broadly, which I thought I did. But I really can't
compete with Devarat. It's just too much. So I think I should
explain the title. There's two meanings. One is, I think, probably
pretty obvious and pedestrian. We have the G transitions in
wireless, 2G, 3G, 4G, 5G now. And the other interpretation
is an expression of some residual
frustration that I have for a transition that
didn't happen, or almost happened, but actually
did not happen. And I actually
thought I would take this opportunity of
coming back to LIDS to get it out of my system. So I hope this
accomplishes that. OK. OK, so maybe it's a
bit self-centered, but I thought I'd just talk a
little bit about my own case. So in 2009 at that Paths Ahead
conference Roger Brockett said that LIDS-type work,
I mean, he just called it systems
theory type work is kind of meta-engineering. And I think the point was
that it's highly portable. So he was trying
to say, you know, we should move into other areas. But the central point was that
it was a very portable skill set. You can move from
place to place. And that certainly
happened to me. So I did a master's
degree in control theory. Came to MIT to work with one
of the most famous control theorists in the
world and ended up doing a PhD in computer vision. Then left MIT and went to Bell
Labs in a communications group. So I was working. And actually, at
that point, I still didn't really have much
information theory knowledge. I was working on a
project for data storage based on holography. And then turbo
codes were invented. And I started
hearing about them. There was a talk about
turbo codes in Bell Labs, maybe 1995 or so. And I went to the talk. And the reason I
went to the talk was because we needed
some good error correction code for the holographic
storage systems. And they said, oh, it's
this new big thing. And I went to the talk. And I thought they
were very interesting. And it wasn't really for any
information theoretic reason. I just thought they
were interesting from a dynamical systems
perspective, which was my background. So then I started thinking about
them from that perspective. And that's how I
got into coding. And a lot of people have
mentioned this already during the meetings. But I thought that
I'd be remiss if I didn't mention this transition
of Bob Gallager's thesis. So what I've shown here is
the Google Scholar citations from 1960 up to now,
year by year, of this. And you can see there was
this transition somewhere around the late '90s. And basically, I was
catching that wave, as well. But it's still one of the
most remarkable transitions for a thesis you can imagine. You can see it's
starting to tail off a bit in more recent years. But I think some
of that, at least, can be blamed on Erdal Arikan. OK. So yeah, I wanted to talk
about the Flarion story a little bit, 2G,
3G, 4G transition. So the company was
founded in 2000. I certainly don't want to
give you the impression that I was the main
person behind it. The main creative
spirit was Rajiv Laroia. He's not a LIDS alum, but
he certainly fits the mold. He was a graduate of the
University of Maryland. And the founding
of the company also coincided with this
transition, which is the bursting of
the dot-com bubble. And that was a bit sad for us. We had these hopes of
getting rich quick, and they quickly evaporated. But, you know, it
builds character. So it's OK. [LAUGHING] All right. So what was the idea? So actually, Flarion was
a very audacious concept. I mean, the goal was, actually,
to be the next generation of cellular. And it was very audacious. And the basic premise
was that there was a mismatch between
existing cellular networks and the internet. And this mismatch was not
being corrected fast enough. And there was an opportunity
there kind of at the disrupt. And what it boils down
to is, so the reason why this was happening
is because, you know, the phone network and
cellular and the internet had very different DNA
and different structure, different architecture. So, you know, here are
some of the points. The internet, of course,
was computer to computer. It grew as a heterogeneous
set of networks that were connected together, packet switched, with TCP/IP flow control. On the phone side, the whole system was created for people to talk to each other, person to person. It was a circuit-switched network. Not too long ago, the whole
thing was owned by one entity. And so it had a very sort
of centralized structure. And cellular, at the
time, was essentially designed to extend
that network, you know, to go over the
air for mobility. OK, so CDMA was the
dominant technology. For 2G, there was CDMA and GSM. And worldwide, GSM
covered a larger area. CDMA was in the US and
Japan and South Korea. But they were essentially--
the original CDMA system IS-95 was really conceived
to be a voice system. And a lot of the architecture
and the mechanisms that were put in there were
trying to take advantage of many great-- there are many,
many great ideas, of course, in the CDMA system. But really, it was predicated
on this idea of being a digital voice system. And the things I
listed on the right here are some of
the characteristics, defining characteristics of
that technology, according to, you know, Viterbi. So you can see that, you
know, the big part of CDMA was the universal frequency reuse. It required very
fast power control. But that's not really an
issue because of the nature of the voice call. So in a voice call,
you make a call. You're connected
for, you know, end to end for a reasonable amount
of time, and then you drop. So during that period, you know,
the activity and the traffic is all relatively predictable. And the statistics
vary, of course, but not so dramatically. OK, so the internet
was very different. And, of course, the main
thing is the statistics of the traffic and
also the requirements. For data, you require, you know,
zero error rate, essentially. And for voice, you can
tolerate something. So you need to make it reliable. And at the same time,
you have to accommodate all these different
types of traffic. Much more entropy in the
demand for the resources, right, on the air. And so the basic question is,
should the cellular network, the existing cellular
network evolve to carry data or should the internet, as
is, go mobile at the time. And so the answer from
3GPP, for example, or 3GPP2 was that we should
take our network and evolve it to carry data. And the Flarion
proposition was, no, let's take the internet
as-is and make it mobile. OK. OK, so, I guess, I suppose one thing I thought I was supposed
of questions that come up almost immediately
when you try to do this. I think there's some
typos, but anyway. How do you provide
for rapid transition in varying sized traffic loads? So with voice, the traffic
load stays simply constant. I mean, there
isn't a large range of traffic load requirements. So how do you provide for
varying amounts of traffic? How do you deal with the widely
varying demands of traffic and large signal dynamic range? So in CDMA, what
you do is you power control all the devices to
come in at the same power to control any interference. But in data, you might not
necessarily want to do that. You might want to take advantage
of somebody who's very close, seem to get very high data
rate and save power and so on. So how do you make an
inherently unreliable link look reliable to TCP/IP? And how should you
hand off an IP network. In CDMA, you use soft hand-offs
to connect to multiple stations to maintain connectivity. Is that the right thing for data
traffic, given the statistics? How do you manage the resources? How do you schedule? How do you protect the
battery of the mobile device? What's the right state space? And the synthesis
part of this is how do you do all
this simultaneously? So I think all of these
are LIDS questions. I realized what I
forgot to do was go back and check that all
the answers are in Dimitri Bertsekas' books
or maybe Bob Gallager's book. Maybe I'll just mention one. So the key idea of making TCP/IP reliable over an inherently unreliable link is to use the fact that, say, on Gaussian channels, feedback, although it cannot increase capacity, can dramatically improve the error exponent. So you can quickly get reliability, provided you have a closed loop. OK, so that's definitely a LIDS idea.
you try to disrupt something, there's going to be resistance. So, for example,
infrastructure incumbents didn't really want to see
their networks replaced with IP networks. Qualcomm, of course,
had a big stake in CDMA and didn't want to see,
you know, that eroded. So the customers that were
available to us are operators. And operators tend to be a
pretty conservative bunch. Nextel was a bit of a maverick, and they were serious about the technology. But then they were
bought by Sprint in 2005. Intel was pushing WiMax. They didn't really know
what they were doing. Qualcomm knew much better. And actually, I always
thought that Intel should have bought Flarion. But Qualcomm knew that
far before Intel did. And so they bought
the company in 2006 and reset their IP
generation for 4G. OK. OK, there was a comparison. This is to show how good the Flarion system was. This is just a
paper I pulled up. There was a
deployment in Finland. And this is a comparison between
the Flarion system and HSDPA, which is a, like, 3.5G system. You can see the comparison. And the numbers
are very precise. But the conditions
are a little vague. First of all, they were
at different frequencies. And maybe one thing to point
out is the secret to the Flarion design was to put in a very
agile control structure. And you can see the effect
there in the latency. And so this compared. Their conclusion was that
each system has its advantages and disadvantages. So HSDPA is a little better. But what wasn't
mentioned here, I think, was that HSDPA is a
5-megahertz-wide system and FLASH-OFDM was
only one and a quarter. [LAUGHING] OK. So I'm actually
almost out of time. But I thought, you know, what's
happening, what's critical now is 4G, 5G. And there's a lot of
opportunity for LIDS, right, I think, not so much
maybe in the core network or in the standards part of
it, but in the applications. So people think there's a
wide range of applications, a lot of new verticals
going to be opened up by 5G. And I thought I'd just quickly
talk about one of them-- industrial IoT. So the idea here is that you're
going to go into factories and you're going to
put it in a network. It'll largely look like
some high throughput network maybe in the ceiling
with radios up there. And they'll talk to all
the devices in the factory, including robots and so on. And what's interesting
here, I think, is that it changes
the game a little bit in terms of what you want
out of a wireless system. So in cellular, the
main thing is capacity. You want to provide as many-- But here, you really need to provide reliability. So outage capacity, if you like, or the ability to ensure high reliability with low latency, is key. So design will have to change.
industrial ethernet standards which are adaptations
of ethernet to provide real-time reliability
with very strict latency controls and so on. And essentially,
what the 5G system is going to have to try to
do is displace those things. In order to do
that, they're going to have to meet the same
kind of requirements. And it's tough because of the
nature of the wireless link to meet those requirements. But like any kind of
transition of this sort, typically, the first step
is some kind of replacement. But if it happens,
it'll change the game. Because now you'll have
this centralized network with full visibility
into the whole factory. And you'll have new schemes
to try and represent what's going on in the
whole factory floor at all different time scales. And so that may change
what's possible even to do there, which is
another possibility for LIDS-type research. OK, that's it. Thanks. [APPLAUSE] DEVARAT SHAH: All right,
last but not least, our last panelists who will
speak, it will be Sri Sarma. Sri is also LIDS alum. She received her PhD from
LIDS under supervision of Munther in 2006. She's a controls person who
does a lot of exciting things at the interface of systems
biology and neurosciences. She's currently a
Professor at Johns Hopkins. And there are a
number of awards. And I'm trying to figure
out which one of them should I read. Well, one of them is,
of course, the PECASE. She's received-- OK, I'm just
going to sort of read one more and then give it to you-- Robert Pond Excellent
Teaching award. Welcome, Sri. [APPLAUSE] SRIDEVI SARMA:
Thank you, Devavrat. So yes, I just
wanted to comment, since everybody was speaking
about their experience at LIDS. I still feel confident that
I was here in the golden era. So what do I mean by Golden era? Emilio was my contemporary, and
I'm sure he'll agree with me, the golden era is
you come in here, I came in as a graduate
student in EECS. I'm taking DSP with Alan Oppenheim, I'm taking 6.041 with Drake, then later I had the privilege to TA with Dimitri Bertsekas, using your book with Tsitsiklis. I took Nonlinear Systems
with Sanjoy Mitter, I think you were co-teaching. And, of course,
with my first TA, Nicola Elia, according to
him, he was my best TA. And I was taking Multivariable
Control with Michael Athans. And finally, Linear Systems
Theory and Robust Control with Munther. I don't know if any of
these legends teach anymore or teach these courses anymore. So that's why I think it was--
well, part of the reasons why I think it was a golden era. It was incredible. Another reason is
when I joined LIDS, Sanjoy Mitter was the director. And since then, I still haven't
seen a leader or a director of any type of
institute or center do the following, which
is what Sanjoy did. He would walk the halls. This is building 35. I'm sure he did it
in the second floor. I think we were on the
third or fourth floor. We'd be like here he comes. Here he comes. You know, the doors are open. And he walks the halls. And it wasn't intimidating. Because all he wanted to
do is stop you and ask, what are you doing? You know, what are you
working on right now? And it was incredible. I probably think you
knew everybody's name. And you probably knew
what everybody was doing. So that was a really
unique experience. And then finally, I
joined Munther's lab because I was very
interested in controls. Although, I think
you do know I had some passions
outside of controls, in particular, neuroscience. But I was doing
what you told me. Like, he would say, OK, you have
to take dynamic programming. Every controls
person takes that. OK, I'll take
dynamic programming. Then you move on. OK, you've got a minor in math, so start with real analysis, and I'll say, OK. Topology, OK. I don't know if he knew this,
he ultimately found out, but I was also taking
introduction to neuroscience. I was taking motor
systems, neurophysiology. I was in the offices of your great neuroscience colleagues like Suzanne Corkin, who passed away a few years ago. And I was indulging
there, as well. And you mentioned
independence and the freedom that LIDS provided. Well, that's what was incredible
because I don't think you know this, but that same time, I
actually wore the pink coat, and I was a volunteer
at MGH in the ER. Because I wasn't sure. I was interested in
neuroscience, neurodisease. Anyways, finally graduated. And the last thing I want to
say about the LIDS experience, it ended with an interesting
ending in my thesis defense. So my thesis committee was
Sanjoy, Sasha, Megretski, who I haven't seen here today. Was he here yesterday? And Munther. And I go through my defense. And my thesis was
purely control. There is no neuroscience. It was control under
communications constraints. And I was kind of
disappointed at the end. And I mentioned to him, like, I
don't know, Sasha fell asleep. [LAUGHING] And he goes, no,
that's a good thing. That means there was nothing
wrong with your problem formulation and nothing
wrong with your solution. And if you know Sasha, you know
exactly what I mean by that. So anyway, so that was
kind of my experience. And then since then,
I really transitioned. And transition
here was immersion. I went right into a
neuroscience laboratory, where I learned neurophysiology
and neuroscience for several years here
at MIT with Emery Brown. So here, let's see. So I absolutely cannot talk
about neuroscience, as a whole. You know, if you go
to the CDC, Conference on Decision and Control, maybe
there's about 3,000 people now? I don't know, in my days, it
was about 1,500 to 2,000 people at this annual meeting. If you go to the annual meeting
for Society for Neuroscience, it's 30,000 people. It can only occur in one of
three cities in the United States because there's
no convention center that can accommodate this community. So you can imagine the
complexity of the field. So I'm going to just touch
on certain aspects that are relevant, I think,
at least to LIDS. So I'm going to start with sort
of what LIDS-like people have been doing and contributing into
neuroscience for the last 20 years. And I'm going to try to
highlight an opportunity and where we can actually, I
think, make big contributions. OK, so the first is what I
call phenomenological modeling. And what this is really about
is not probing the brain, but still trying to
understand the underlying architecture in the brain
that governs behavior. So let me give you a specific
example here in motor control. Right? So we make all kinds of
nice coordinated movements. So if I want to study how
does the brain actually compute so that I can make
these smooth reaching motions, what I can do is I have a
behavioral neuroscientist just run an experiment where subjects
see target cues, maybe cues light up, and you just move
your arm to those targets. And you actually
capture the behavior. You capture my motion,
my trajectories, either in 2D or 3D space. So you have this sort of
input/output behavior. You have a target
going in, and you have a trajectory coming out. And then what people who
train like us, modelers, we understand this
is a feedback system. Right? We move, and as we're moving,
we get proprioceptive feedback, we get visual feedback,
and we continue adjusting our movements. And so what you can do as
a modeler, in this case, you can now say, OK, this
feedback control system is an interconnection
of subsystems. Each subsystem is some
region in the brain. OK? And now if I put these
dynamics in these subsystems, I can match this
input/output data. And that might be
somewhat trivial. Because if you look
at these trajectories, they look like responses
of second-order systems. But what's challenging
about that is, I said, understanding
neural architecture, which means each one of these
boxes is in some brain region. There are certain types of
neurons in that brain region. And if I say there's
an integrator there because that's what matches
my input/output data, then there better
be elements, neurons that can process in a way,
be connected in a way such that it can integrate signals. So the real big challenge is being neuroanatomically consistent and then being able to explain, this is how we think we control movements.
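To make that second-order picture concrete, here is a minimal sketch, not from the talk, of the phenomenological idea: treat a point-to-point reach as the step response of a simple feedback loop and compare it against a recorded hand trajectory. The gains, the `simulate_reach` helper, and the stand-in "recorded" data are all made up for illustration.

```python
import numpy as np

# Minimal sketch (not from the talk): model a point-to-point reach as the
# step response of a simple feedback loop, x_ddot = k*(target - x) - b*x_dot,
# i.e. a second-order system, and compare it to a recorded hand trajectory.
# The gains k, b and the "recorded" data below are made up for illustration.

def simulate_reach(target, k=20.0, b=9.0, dt=0.001, T=1.5):
    """Integrate the second-order reach model with forward Euler."""
    n = int(T / dt)
    x, v = 0.0, 0.0                    # hand position and velocity
    traj = np.empty(n)
    for i in range(n):
        a = k * (target - x) - b * v   # proportional drive plus damping
        v += a * dt
        x += v * dt
        traj[i] = x
    return traj

target = 0.2                           # reach 20 cm
model_traj = simulate_reach(target)

# Stand-in for motion-capture data: the same model plus measurement noise.
recorded_traj = model_traj + 0.002 * np.random.randn(model_traj.size)

# Crude goodness of fit between model and "data".
rmse = np.sqrt(np.mean((model_traj - recorded_traj) ** 2))
print(f"final position: {model_traj[-1]:.3f} m, RMSE vs recording: {rmse:.4f} m")
```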
The second is another end of the spectrum, which I call mechanistic modeling. And actually,
interestingly, it's the physicists that have
moved into this field. I wouldn't actually say
LIDS-type people do this. But they do in certain situations. But essentially, if you just
build these mechanistic models, what you're trying to do is
be more realistic, right? Neurons are these
electrically excitable cells. They have membranes. You have ions going inside
and outside these membranes with gates opening and closing. And then you have these action
potentials, voltage potentials across the membrane. And every now and then, if you
have enough of a perturbation-- whoops-- enough of a
perturbation, you'll get what's called an action
potential, this peak in voltage. And these are sort of what
we call spikes in the brain. And these spikes
move and transfer to other neurons and
that carries information. So these models are really
high dimensional, nonlinear, ordinary differential
equations, if you think of the neuron as a point. They become PDEs if you want to
keep the structure of a neuron. And one neuron will
have five to ten states. And so you put 1,000
neurons together and you're really in
a ridiculously high-dimensional space.
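As a rough illustration of what such a mechanistic model looks like, here is a minimal single-compartment, Hodgkin-Huxley-style sketch with four states (V, m, h, n) driven by a constant injected current. The parameters are the standard textbook values; the input current and the simple spike-counting logic are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of a conductance-based (Hodgkin-Huxley-style) point neuron:
# four states (V, m, h, n), nonlinear ODEs, integrated with forward Euler.
# Parameters are the standard textbook values; the injected current is made up.

C_m = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                      # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # approximate resting state
I_ext = 10.0                            # injected current, uA/cm^2
spikes, prev_above = 0, False

for _ in range(int(T / dt)):
    # Ionic currents
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Gating-variable and voltage updates (forward Euler)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    # Count upward zero-crossings of the voltage as "spikes"
    above = V > 0.0
    if above and not prev_above:
        spikes += 1
    prev_above = above

print(f"spikes in {T:.0f} ms at I = {I_ext} uA/cm^2: {spikes}")
```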
But they're useful in the sense of understanding mechanisms of action. What happens when
you have this disease and the connectivity changes? So they can explain
a lot of things. What, more recently,
the controls people have been doing when they work
with these types of models is bring in reduction. You know, they do model
reduction techniques. And now instead of
having to simulate thousands and thousands of
times to understand something, they try to reduce it to
where they can still answer the question of interest. And it's all done
through analysis, so avoiding all these
mass simulations. And finally, I
think this is more where the control engineer,
signal processing people come in because it's a
very hot field called brain-computer interface or
brain-machine interface type system. So the idea here is you put
electrodes inside the brain. You record signals
coming from the brain. You do some signal
processing and modeling to try to interpret the
intent of the subject. And then you take that
interpretation and actuate a prosthetic device. And the prosthetic
device could be an arm. And you have some
visual feedback. And this can run in closed loop. OK?
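One common way the decode step is done in the literature, though not necessarily what any particular group uses, is a Kalman filter that maps binned spike counts to intended cursor or limb velocity. Here is a minimal sketch; the tuning matrix `C`, the noise covariances, and the simulated "neural data" are all made up for illustration.

```python
import numpy as np

# Minimal sketch (not a specific lab's pipeline) of the decode step in a
# closed-loop brain-machine interface: a Kalman filter estimating 2D cursor
# velocity from binned spike counts, then integrating it to move the cursor.

rng = np.random.default_rng(0)
n_neurons, n_steps, dt = 40, 200, 0.05

A = 0.95 * np.eye(2)                    # velocity persistence (state model)
W = 0.02 * np.eye(2)                    # state noise covariance
C = rng.normal(size=(n_neurons, 2))     # neurons' (made-up) velocity tuning
Q = 1.0 * np.eye(n_neurons)             # observation noise covariance

# Simulate an intended velocity and the spike counts it would evoke.
true_v = np.zeros((n_steps, 2))
obs = np.zeros((n_steps, n_neurons))
v = np.zeros(2)
for t in range(n_steps):
    v = A @ v + rng.multivariate_normal(np.zeros(2), W)
    true_v[t] = v
    obs[t] = C @ v + rng.normal(scale=1.0, size=n_neurons)

# Kalman filter: decode velocity bin by bin, then actuate the cursor.
v_hat, P = np.zeros(2), np.eye(2)
cursor, err = np.zeros(2), 0.0
for t in range(n_steps):
    v_pred = A @ v_hat
    P_pred = A @ P @ A.T + W
    S = C @ P_pred @ C.T + Q
    K = P_pred @ C.T @ np.linalg.solve(S, np.eye(n_neurons))
    v_hat = v_pred + K @ (obs[t] - C @ v_pred)
    P = (np.eye(2) - K @ C) @ P_pred
    cursor += v_hat * dt                # move the prosthetic/cursor
    err += np.linalg.norm(v_hat - true_v[t])

print(f"mean velocity decoding error: {err / n_steps:.3f}")
```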
So that's pretty much what's been going on in the last 20 years. But I think there's been a big
unmet need or an opportunity where a LIDS-type viewpoint
can make a huge contribution. And I'm going to sort of lump
this subfield as almost more basic, traditional neuroscience,
so basic brain science. If you ask, what is
a neuroscientist, what's their purpose? They're going to
say, OK, I just want to understand what brain
region or network of regions control behavior,
what's their function? That's the fundamental
question that they're after. And if you think
about the past, they used to probe the brain, OK? Typically in mice or
non-human primates, they'd put electrode
wires into the brain to record on the order
of tens of neurons, OK, while that
subject is executing some type of
structured behavior. So they'll put the
electrode in, say, primary motor cortex while
a subject or a monkey is making different
kinds of movements. They record the activity. And their job is to relate the
brain activity to the behavior. OK? And a big problem that they used
to encounter is the following. I probe monkey one, I record 10
neurons from monkey one moving up and down. And I get some activity
and some recordings. I do exactly the same
thing for monkey two and same types of movements. I hit the same region, not
necessarily the same population of neurons, and I
gather that data. And the traditional
analysis is correlations. Let's correlate
brain to behavior. And what you may
not be surprised at, because we are sparsely sampling
this region, 10 neurons out of 1,000? Good luck, right? You're going to see
very different responses across the monkeys.
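Here is a purely synthetic illustration of that sparse-sampling problem: with 1,000 heterogeneously tuned neurons and only 10 sampled per "monkey," the neuron-by-behavior correlations can look quite different across animals. All of the numbers below are made up.

```python
import numpy as np

# Toy illustration of the sparse-sampling problem: a region of 1,000 neurons
# with heterogeneous tuning to a movement variable; each "monkey" yields a
# different random sample of 10 neurons, and the per-neuron correlations
# with behavior come out looking quite different.

rng = np.random.default_rng(1)
n_neurons, n_trials, n_sampled = 1000, 100, 10

behavior = rng.normal(size=n_trials)             # e.g. movement amplitude per trial
tuning = rng.normal(scale=0.5, size=n_neurons)   # heterogeneous tuning weights
rates = np.outer(behavior, tuning) + rng.normal(size=(n_trials, n_neurons))

for monkey in (1, 2):
    sample = rng.choice(n_neurons, size=n_sampled, replace=False)
    corrs = np.array([np.corrcoef(rates[:, i], behavior)[0, 1] for i in sample])
    frac_pos = np.mean(corrs > 0.2)
    print(f"monkey {monkey}: correlations {np.round(corrs, 2)}, "
          f"fraction 'clearly' positive: {frac_pos:.0%}")
```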
So what did the neuroscientists do? They still published
papers, right? So what did they do? They say this is the trend
we see in 30% of our neurons. OK? All right, so fundamentally,
it was a big problem. OK, at present-- well, we have fixed the sparse-data problem to some degree in that we
have new technology that can record from thousands of
neurons in behaving subjects. So that's good. Instead of tens,
I have thousands. Now we're sort of in this
big data, not really, but some people
claim, biggish data. And a lot of machine
learning people have come into the
neuroscience community. And it's been a big
deal in the sense that there's a lot of money
due to the BRAIN Initiative. And NIH has lots of
money for new ideas. And when you pair
a neuroscientist up with somebody who says they're
going to do machine learning, that seems very attractive. The problem is they
actually do a good job in capturing all the
variability between subjects. And even though
the neural activity looks different in monkey
one and monkey two, they can build a
deep enough network to capture all the inputs
and outputs observed across the animals. The problem is, of
course, that does nothing for the neuroscientist
in terms of understanding brain function. OK, that was the purpose. That is what they're after. So here's, I think,
where the opportunity is today and in the future. We still get the data. We have thousands of neurons. But instead of trying to
build these highly complex, you know, deep networks,
neural networks, whatever, why don't we simplify? OK. Maybe we have thousands of
neurons, hundreds of neurons, and we're trying to
hit the target region. So if I'm studying
motor control, OK, if I'm studying
movement, the neuroscientist is going to put electrodes in
the motor areas of the brain in hopes of capturing all
the relevant neurons, OK? But that doesn't mean that
you capture everything. Because you all know
when we make movements, we can never make the same
movement twice, right? And let's say you're playing
basketball and you're shooting and you've got an audience
and they're cheering for you, you're more likely to maybe
hit those baskets than if they were booing for you. So it's not just the motor
regions that command motions. A lot of other
factors play a role that the neuroscientists
aren't even probing, like how confident you feel,
whether you feel good or not, whether you're motivated,
whether you care. All of these are happening in
other parts of the brain, not the motor areas. And yet somehow, they're
talking to your motor regions to change the way you move. So how do you get that? Through dynamical models, right? In my experience, simple,
linear models work, OK? So what do we mean by this? Let's just take a simple
state-space model. OK, I don't know if I have this. Here we go. Simple state-space model,
this is one example of how to do this, right? So the state is just the brain. What do I mean by that? It could be populations of
neurons and their firing rates. So let's say I have
100 populations, and X is 100-dimensional. U is the stimulus, right? Remember, say, if it's this
motor control experiment, I'm flashing a light
that tells me, oh, I need to move over there, that's
my stimulus, I move over there. The neurons respond. They're in my state vector. And I can now relate
behavior to the states. Now, what I'm not saying here is that every state is measured. Remember, this is sort of the brain. Some of those states
can be measured. They are, right, if
they're in the motor areas. But what about all
those other factors that I just talked about? Well, those are things
you can capture, because they're latent,
in these state variables. OK, with some sort of
intuition or knowledge of how these states
evolve, you can construct these kinds of
models and actually capture the variability. The fact that you moved this way
on trial one versus another way on trial two for
the same input, you can capture that through
the dynamical state. And then, of course, if you've
got these simpler models, you can use them for control.
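Here is a minimal sketch of the kind of simple linear state-space model being described, with made-up dimensions and matrices: part of the state is measured motor-area activity, part is latent (motivation, confidence, and so on), u is the stimulus, and behavior is read out from the full state. Two trials with the same stimulus produce different movements because the latent states differ; in practice the matrices would be fit from data (for example by EM or subspace identification) and the latent states estimated with a Kalman smoother.

```python
import numpy as np

# Minimal sketch of the linear state-space idea, with made-up matrices:
#   x[t+1] = A x[t] + B u[t] + noise,  y[t] = C x[t] (measured populations),
#   z[t]   = D x[t] (behavior).
# Only the "motor" part of x is observed; the rest is latent.

rng = np.random.default_rng(2)
n_meas, n_latent = 4, 2                   # measured vs. latent populations
n_x = n_meas + n_latent

A = 0.85 * np.eye(n_x) + 0.05 * rng.normal(size=(n_x, n_x))    # state dynamics
B = rng.normal(size=(n_x, 1))                                   # stimulus input
C = np.hstack([np.eye(n_meas), np.zeros((n_meas, n_latent))])   # what we record
D = rng.normal(size=(1, n_x))                                    # behavior readout

def run_trial(T=30):
    x = rng.normal(scale=0.5, size=n_x)   # trial-to-trial initial state varies
    u = np.zeros((T, 1)); u[5] = 1.0      # same stimulus pulse on every trial
    behavior = np.zeros(T)
    for t in range(T):
        x = A @ x + (B @ u[t]) + 0.05 * rng.normal(size=n_x)
        behavior[t] = (D @ x).item()
    return behavior

trial1, trial2 = run_trial(), run_trial()
print("peak behavior, trial 1 vs trial 2:",
      round(trial1.max(), 3), round(trial2.max(), 3))
```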
Now, what's really important, if you think about that state-space model, is the type
of inferences you can make. What can you tell
the neuroscientists at the end of the day if you
can build a model like this? You can tell a neuroscientist
at any given time which regions, which neural
populations, these are the different states, right,
are playing more of a role during movement
or during behavior at any given point in time. You can also tell
the neuroscientists whether they're
interacting, coupled or not, or are
neural populations within the same region
more tightly coupled. That all comes from
the model, right? And then at the end, how
is this changing behavior? And this is incredible
because these are exactly the fundamental questions
that they're after. So I think there's a huge
opportunity for that. Thanks. [APPLAUSE] DEVARAT SHAH: All right,
so clearly, finally, I understood John's wisdom,
autonomous cars, then managing aerial traffic, to
optical networks, to finance, to the Gs,
and finally the brain. So this is extremely
exciting for all of us. I have a bunch of questions
that I have prepared. But I'm also looking
at the clock, and I don't want to
take away all the time. So first I'm going to open
this up to the audience. And if the time permits,
I will ask questions. Well, he is going
to be difficult, but I'm pretty sure my
panel is very well prepared. [LAUGHING] AUDIENCE: Thank you
for your amazing talks. So having talked about big
transitions in aviation, optical communication, finance,
wireless communication, neuroscience, and AV. So I think control theory
has played an important role and will make contributions
in the future. From one perspective,
however, except for finance, all these fields are less competitive in terms of participating agents. For example, in contrast, in the biomedical field, like cancer and infectious disease, the cancer stem cells and the pathogens are self-evolving agents. So in this case, how might control theory or information theory contribute to
these competitive fields to manage the kinds of
treatment under control? [INAUDIBLE] RICHARD BARRY: I
didn't get it either. THOMAS RICHARDSON: I don't know. But I'm pretty sure
it's not for me. So I'm gonna-- RICHARD BARRY: Did
you understand it? THOMAS RICHARDSON: No. SRIDEVI SARMA:
Yeah, I could try. DEVARAT SHAH: Please. SRIDEVI SARMA: So I
think, if I understood, so the example
you gave, OK, I'll interpret this cancer analogy. So you said a cancerous
cell versus some other cell. So let me give you an idea
where maybe control might work in a similar setting, right? So when we develop-- I'll talk about the brain--
when we actually develop, cells have no specificity. At birth, right, or in
the womb, you start off with a bunch of cells,
millions, billions of cells that divide, then
ultimately specialize. These become liver,
these become this, this is gonna go
to the brain, right? And then in the brain,
they start specializing. OK, this is gonna be hindbrain. This is gonna be this
structure, this structure, and this structure. So the biomedical
scientists actually try to understand all
the interactions that have to happen chemically
and physically for cells to move and specialize. Now, sometimes things
go wrong, right? At birth, you might
have a defect where the cell doesn't even develop. So you can imagine, if you
can understand the steps and processes, you
can inject a stem cell and control how it behaves
with its environment, right, through feedback and so forth
to make it develop normally. There's one. Does that-- at least, biology. AUDIENCE: [INAUDIBLE]
another [INAUDIBLE] of-- [LAUGHING] --cell mutation is stochastic. Yeah. [INTERPOSING VOICES] DEVARAT SHAH: Andrea
has a question. AUDIENCE: I don't need a mic. DEVARAT SHAH: You
don't need mic. [LAUGHING] AUDIENCE: So on
the neuroscience, I think there's a
parallel to what we've seen in machine learning
in deep brain stimulation and basically broader
aspects of using closed-loop control to stimulate nerves
and stimulate the brain where we don't understand, for
example, why the deep brain stimulation works in Parkinson's
and doesn't work all the time. Sometimes it works. I was having a conversation
with the CEO of Medtronic and I said, you
know, what I find interesting is we have
no models for the brain. So that's why we can't build
the control systems well. But we're building them
and they work anyway. So I guess the question--
but not all the time and we don't know why--
so the question is, to me, neuroscience is a
fascinating and important area to apply closed-loop
control, but how do we do it when we
don't have the models for the brain to build the
control systems on top of it? SRIDEVI SARMA: So this was
something I spent my first six, seven years really focused on. So one, there are models out
there that are mechanistic. And so detailed that they
contain thousands of neurons and all the structures in
what we call the motor control circuit, including
basal ganglia, that affects
Parkinson's disease. Now, you cannot use those
models for prediction, for real-time
control, as you said. But what people are doing now,
and these are controls people, is they're taking
these detailed models and treating it as
a virtual brain. And then simulating and looking
at population level activity and then applying simple linear
models, linear time-varying, whatever. But there are much simpler
models describing phenomena at more of a population level. And there are
publications and research out there that show model-based
control, feedback control. The thing is, the trick
is actually implementing. It's not necessarily hardware. Medtronic has this
closed-loop hardware. But it's the idea that, OK, if
this is an optimal controller I design, it's not clear that
the device can generate that stimulation signal. Because usually these are
pulses, periodic, aperiodic. The controller is
spitting out a continuous signal unless you constrain it. Or it may not be
safe for the brain, depending on what
you're trying to do. So the models are out there. They're just not
ready to translate. AUDIENCE: Just one
follow-up [INAUDIBLE]. So for example, diabetes, you
can measure glucose levels and do injection. But what you're reading
or your physical activity, the soft data that
we don't necessarily know how to model in control
systems plays a role, as well. And I think that
applies to finance. It applies to some of
the applications that are interesting for 5G. So how do we take kind of
these mathematical models and the very
rigorous mathematics that we've applied to them
and bring in soft data and interpretable
data for control? [INAUDIBLE] DEVARAT SHAH: I'm
moderating, so I'm definitely not taking the question. [LAUGHING] SRIDEVI SARMA: So
let me just ask. So what do you
mean by soft data? AUDIENCE: So take
diabetes as an example because that's the
most concrete one. So there, you inject
insulin based on the sugar level in the blood, right? And what you're trying to do is
keep it within some boundary. And that's what the closed-loop
diabetes systems are doing. But what you're eating
and when you're eating and whether you're
exercising or not and even some of the
things you were talking about, these outside stimuli,
do you feel good or not, has an impact on, you know, how
much you should be injecting. And there isn't a
good understanding of how to take the
hard data, which is what is your
actual finger prick amount of sugar in your blood,
with these other things. And so I think it is related to
the question you asked about, you know, when somebody
is cheering for you or booing for you, how
does that affect it. So what I mean by
soft data is data that we don't
understand its impact on this closed-loop
control system, but we know it has an impact. SRIDEVI SARMA: Right. So I'll give you-- where's Rose? Rose Faghih, I'll put a
shout out to her work. So yeah, so think
of the case you said exercising or maybe
things will be different if you're under stress, like
cognitive stress or something. So what people are doing, and this is part of Rose's program, is, you know, can I have a
wearable device, you know, that just measures,
say, my sweat levels, and then from that,
use models to estimate the underlying cognitive stress. That's your state. And then you build
a model of state with the actual explicit
measurements, the hard data, I guess, what you're
talking about, to refine your model to capture
sort of those variations based on various levels
of stress or exercise. And so I think this is
what's happening now. DEVARAT SHAH: Other questions? AUDIENCE: [INAUDIBLE],
do you really need to control-- so in
all of these systems which are biological systems
and not man-made systems, I'm not sure whether at this
point we can really control. Or, you know, maybe if we're
within a band or some trends, that's all we
can hope for and we should be happy with that. Because yes, there
are no models. We don't know all of the
inputs and all of the-- you know, that
affect the system. AUDIENCE: Yeah, I mean,
just take the diabetes as an example. It's been life-changing that you
have these closed-loop control systems where people are not
gonna die because they're not going to go outside of
the ranges of what's, you know, life preserving
for their blood sugar levels. So I think that that is a
really important application of these control
systems applied outside of the traditional engineered
systems to biological systems. AUDIENCE: I mean, in diabetes,
isn't it that what's important is more the trend, as
opposed to the numbers. I mean-- AUDIENCE: No, [INAUDIBLE]. AUDIENCE: If it's
going up, then I want to slow that
trend, you know, whether it's 20,
25, or 30, you know, there's so much that I can do. I mean, if I see it going
up, ramping up quickly, then I want to sort of
slow it down and bring it back down, right? AUDIENCE: There is a range
that you want to stay in. DEVARAT SHAH: It clearly
looks like the panel is doing its work by sort of
increasing the discussion. Any other questions? Because I have a
pressing question. And if you don't ask, I'm
going to go to my question. AUDIENCE: Go to your question. AUDIENCE: Go ahead. DEVARAT SHAH: Thank you. I was asking for that, right? So I'm going to
ask this question both for myself and the students
sitting here in the audience. Each of you, clearly,
at some point, started from learning
the toolbox-- let's call it
LIDS-style education-- no matter what
background you came from, and then you thought about
a transition, a transition in your own respective fields. How did you decide that that
was the right thing to do and how did you go about it? Because at some level,
it's about moving out of your comfort zone, right? And I mean, there are
lots of implications that go with that in
academic evaluations, in sort of real life,
sort of where you end up and so on and so forth. So it's a very crucial question. So how did you guys
make that decision? And I would encourage
all panelists to take that question. Yeah, please. EMILIO FRAZZOLI: Yes, I
actually have a little anecdote of what happened
to me, what they call the five-minute phone
call that changed my life. At some point, as
some of you may know, MIT has this big
collaboration with Singapore. And at some point,
I heard that there was a team forming for a future
urban mobility, a new project, and I wanted in. So I called the person who was
organizing the whole thing, Cindy Barnhart. I told her about my interest. And she told me,
yeah, OK, you know, thank you for your interest. But, you know, what do
you bring to the table? Well, I work on
self-driving cars. Yeah, but you understand, this
is a project on urban mobility, not a project on robotics. Put on the spot, I had to
come up with something. Because I wanted
to go to Singapore. And I said, what if, what if
you had, like, a smart phone-- at that point, there was
not even a smartphone, it was like BlackBerry
or something-- which you can call a car,
and then the robotic car comes and picks you up and
brings you to your destination. At the time, Uber did not exist. Right? And then she went, OK, that
sounds like a good idea. You're in. Right? So then I started
the whole thing. Now, what happened
to me, at that point, I just made that up because
I wanted to go to Singapore. But then I start to
think, you know what? This is actually not a bad idea. So then I started
asking, OK, so I want to build these
self-driving cars, but what is really the point? How would this technology
really change, have an impact? And then you start
thinking more about that. And then you start saying,
well, actually there is something here. But then I actually, I
started doing a lot of work in that area. As a professor, I was giving
lectures to everybody. And everybody was telling
me, you know, that's stupid. This will never work. All the car companies,
why do we want to do that? I want to sell more cars,
I don't want to sell fewer. So at some point, you know,
I believed in it so much and nobody was listening. And then at some point, well,
you know, what the heck, I'm just going to do it myself. So in a sense, it
was this transition of start to thinking about
why am I doing this work? And then looking at
what is the potential, try to have an impact. And then if you
can't, at some point, you just believe in it so
much that, you know, you're-- at least in my case, I
started doing it on my own. DEVARAT SHAH: Sri
next or anyone. RICHARD BARRY: I mean, I'll say. I don't know, I'd say,
you know, it's not easy to make that decision. You know? And I remember it being
very difficult at the time. For me, it was partly
following the field. So it was a natural
progression in that, you know, WDM links were out there. And Steve Alexander
had left Lincoln where I was and joined Ciena. And if links were
there, then networks were going to come
later was the thought. And I had a belief that,
at least in my area, when the commercial world was
taking off at the pace at which it was going, that
research, whether it was at Lincoln or
academia, really wasn't going to be the place to be. It was just too
hard for research to follow, especially
in an area where the components that were
available were so important. So I had decided to
either go in commercial in this area in
optical networking or to switch research
fields, right? And I just decided to do that. I had a commercial offer, too. So I don't know,
between the commercial and starting your own, I mean,
that seemed more interesting. But also my co-founders are
Eric Swanson and Desh Deshpande; without them, you know, it wouldn't have felt really right. And certainly, with me alone, it would've been a disaster. So it was just a lot of
things at the right time. And you know, the willingness
to take risks, too. Because it wasn't an easy
decision at the time. THOMAS RICHARDSON: Maybe
I could just make-- So at the time that I
joined Flarion, actually there were two startups
spinning out of Bell Labs. And one of them was the Flarion. And the other one was actually
the holographic data storage project. They were both spinning out. And they were both
pretty interesting from a technological standpoint. I think there were a couple of
reasons why I chose Flarion. One was just, I mean, I think
also from a risk perspective, both were pretty risky. Part of it was
maybe just the team that you were going to go with. You have to have
confidence, especially if you're going to go into
an entrepreneurial setting. You have to have a good team. And you have to have a lot
of confidence in the team. And the other thing
was just where did I think I could
bring the most value. And they both needed
the expertise. But on the wireless
side, I felt there was more opportunity for
growth and more things to do where my
skill set would fit. So I think you want
to go into a situation where you can branch out. I think that makes
it more attractive. SHASHI BORADE: I was sort
of lucky to have Lizhong and Bob in the following sense. They encouraged me to do a
different type of internship every time. So I worked at a research
lab and at an industry kind of job as a summer intern. And then my last one was at
DE Shaw as a summer intern. I really liked it. I thought, wow, I can use all
the theory I have learned, play with real data,
and essentially make an impact that could affect the real world in, you know, months sometimes or
weeks or much faster than the other cycles I
had seen in my internships. So that was really fun. And I just thought I
can try it for a while, and if I don't like it, I
can maybe do something else, maybe come back to information
theory or something. So this was fun. And I guess a second
transition was working for an employer to
starting our own venture. That was more of a
decision because my co-founder and I really wanted to work together. And this seemed like a
good time to take risk. Worst case, you can always
get back some sort of job. So that was the
second transition. HAMSA BALAKRISHNAN: I
think, you know, you said [INAUDIBLE] we all
have two hats, right? So one is, the math is beautiful. But if you wear a little bit of an engineering hat-- and I think it is hard, but actually seeing something out there in the real world, it's its own reward, right?
large extent, that's what I think made me
take the transition. I should say,
though, for the kind of problems that LIDS people
here work on and the theory, there is this notion of
the garbage pail theory, especially when it comes to
policy makers and governments, which is, you know, the
theory that you come up with or a solution you come
up with is beautiful, but you give it to somebody
who is a decision maker and they're going to throw
it in the garbage pail. But the first time
something goes wrong, they're going to
reach in, right, and pick the first thing
in the garbage pail out, and that's going
to become practice. So I think, to some
extent, as researchers, we actually owe it to society. We need to keep that garbage
pail full of good ideas that can become practice
so that, you know, we then don't regret the
ideas that actually go there. DEVARAT SHAH: That's excellent. Excellent. Thank you. SRIDEVI SARMA: Do
you want me to say-- DEVARAT SHAH: Yes, of course. SRIDEVI SARMA: I'll
try to be brief. Yeah, no, I mean,
I think for me it was just the questions I was
most passionate about answering in my life. So in my research
career, at least, I mean, I think with Munther, we
talked about communication constraints. If I put this in feedback,
it's constrained by this. You know, what are our
conditions on stability? OK. It's-- [LAUGHING] No, it's rewarding
in that, OK, there could be some
mathematical challenges. You do some theory,
prove some theorem, and you get some confidence, OK? So that's great. And I was interested, you
know, and still am interested. But then if you contrast
that, for me, at least saying, OK, what does deep
brain stimulation do, and you put this
wire in the brain and you dump all this current. And now a person can go
from not walking to walking. That's the question
I wanted to answer. Was it scary? Absolutely. All my colleagues were saying-- I even had one say
you jumped ship. What are you doing? But I didn't
because everything I do today, I use
my LIDS training. And I think what
was most important is that none of my
mentors said anything negative about that decision. It was really, I think
that was important. I think I was impressionable. If somebody said, if either
Mitter or Sasha or Munther said anything like,
Sri, that's too much, then I would have questioned it. But yeah, and then you have to take a little bit of risk. DEVARAT SHAH: Thank you. I think, unless there's
a pressing question on that high note, I would
like to sort of end the panel, thank all our panelists. [APPLAUSE] Don't go. Don't go. Please don't go. There are important remarks that John is going to give as closing remarks. [INAUDIBLE] JOHN TSITSIKLIS: Sorry to
disappoint you, Devarat, I'm not going to say anything
deep or that important. I think I've already started feeling
the withdrawal symptoms that are going to be coming
to me in just a few hours because this has
been going so great. I feel I could easily have
enjoyed another day of this. It has all been so
stimulating, fascinating. I'm even thinking I would
like to go back and be a graduate student again and
join this great group of people and get inspired. Well, to partially mitigate
the withdrawal symptoms, tomorrow we're showing
the Shannon movie. And the filmmaker of the
movie is going to be present. And we'll have a panel where
Andrea and Bob will be joining. So please join in that. So I would like now to take
the opportunity to thank, most important of all, our
distinguished honorees who actually honored us with
their presence here. I want to thank all the
speakers and the panelists and the chairs. They were really all very
stimulating, very interesting. I think, I believe
everybody is going to leave with the
same positive feeling that I will be
leaving this event. And thank you all for
being here and joining us. So we'll have a sort of very
informal, light reception out there for people to mingle
for as long as they wish. And before you
exit the room, I'd like to ask, at least those
of us who are still here, to all of us crowd down here
to get a group photograph. All right. Thank you. [APPLAUSE]