The Internet of Things and Sensors and Actuators

Well, good morning, everyone. I have to say it's a real joy to show up among a collection of geeks, where I can drop into geek-speak and nobody will be upset, unlike spending time with the legislators in Washington DC, where I've lived for the last 36 years, where dropping into geek-speak is a sign that, you know, they should go to sleep. Okay, so here's what I'm going to do this morning: I'm going to give you a real quick couple of stats on where the Internet is right now, to give you a sense of the fabric underlying what is coming next. So let's start out with a couple of statistics. I'm sure you're all well aware that the approximately 900 million to a billion machines that are on the net, visible with domain names and IP addresses, are actually only a small portion of the total number of devices that are on the net at one time or another. There is permanently connected equipment hiding behind firewalls and behind other kinds of load-sharing devices, and we don't know how many of those there are. There are episodically connected devices like laptops and mobiles and pads and things of that sort. So the real number of Internet-enabled devices that may be connected at one time or another could exceed two or three billion; we just don't see them all at the same time. The number of users is estimated at 2.4 billion. That's small compared to the total world population, which means the Chief Internet Evangelist has about four billion more people to convert, so I could use some help, you know, if any of you want to assist with that. The number of mobiles around the world has grown really dramatically: six and a half billion estimated now. That doesn't mean everybody has one; a lot of people have more than one, and not all of them are Internet-enabled, but certainly 20 percent might be, and in some places it's more than that. Over time, of course, that fraction of Internet capability will increase, so mobiles will be a major source of Internet devices as well, and it's one of the reasons
it's driving us toward new address space structures like IPv6: the sheer magnitude of the devices to be identified. I thought it was interesting to look at where the people are. If we had looked at this chart ten years ago, North America would have been, I think, the largest absolute population on the net. Clearly we aren't anymore; Asia is the largest, which is consistent with the fact that Asia has about a third of the world's population. The other observation is that half a billion of that billion is in mainland China. So even though we hear all these things about the Chinese interfering with the Internet, which they do, they are nonetheless investing very heavily in the infrastructure and making heavy use of it, so they too are part of our equation. If you're thinking about global services at all, these kinds of statistics are important because they give you a sense for where the markets might be, and of course these numbers are increasing day by day. The Internet itself will have its 40th anniversary of design next year, because it was designed, roughly speaking, from March until September of 1973, when Bob Kahn and I wrote the first papers describing the Internet. So that's 40 years ago. It went into operation 30 years ago, in January of 1983, so it's been around for quite a long time, and in spite of all that it continues to evolve. Here you're seeing some of the things that are happening on the net just this year, the most visible of which, for many of us, was the introduction of IPv6 on a formal basis. That is to say, all of us who were capable of running IPv6 turned it on and left it on, as opposed to the 24-hour test we did the previous year. So v6 is now operational. It's not widely visible; probably two or three percent of the possible sites are running IPv6, as nearly as we can tell, but the pressure, I think, will increase very quickly, because the IPv4 address space has run out at APNIC, it's
run out at RIPE NCC, and it will run out at ARIN here in North America, probably sometime in the next couple of years. So the pressure will be on, because network address translation boxes frankly will not do the job: they are fragile, and if you try to cascade them it just gets worse; finding bugs and things of that sort is extremely difficult. So running end-to-end IPv6 just seems like the right thing, and when we get to the Internet of Things you'll see why that's equally important. Another thing that has been happening to the net: for many, many years domain names could be expressed only in Latin characters, and the Internet Corporation for Assigned Names and Numbers and the Internet Engineering Task Force spent a lot of time and energy developing methods for incorporating Unicode into the domain name space. So now we have internationalized domain names expressed in all kinds of languages, or I should say character sets: Cyrillic and Arabic and Hebrew and Chinese and Korean and so on. The TLD space, as I'm sure you're all aware, used to be dominated by the country-code TLDs, of which there are about 250 or so, maybe 230, while the generic TLDs were only in the teens or low twenties. So ICANN introduced an opportunity to bid for, or apply for, new top-level domains. About 2,000 applications came in; there were some overlaps, but the number of unique proposed new generic TLDs is probably on the order of 1,900. Every one of those applications came attached with a 185,000 dollar check. This is pretty incredible; if you do the math, that's about 350 million dollars. Plainly there are people who believe that new generic TLDs are going to generate a lot of revenue. I must confess to you that I am still a skeptic, but Google applied for a bunch too, so we'll see what happens. We all understand that the domain name system, as it's currently implemented, has a lot of vulnerabilities and weaknesses. One of the ways of responding
to that is to digitally sign the entries in the DNS, so that the binding of domain name and Internet address is digitally signed and can be verified when you do a lookup. That's being propagated now: the root zone is signed, and many of the TLDs and lower-level domain name zones are now being signed, and that's a good thing. Not yet fully implemented, but very much under development, is another mechanism: a routing public key infrastructure (RPKI). The idea here is to try to inhibit, or at least reduce, the opportunity to hijack address space by simply announcing it, which is what people can do: they can look around for IP address space that is not widely used and simply announce it and make use of it, or even try to hijack somebody else's address space. There are ways of using public-key crypto to inhibit that in the routing protocols, in BGP for example. So that's underway, although not yet widely deployed. And then the last three bullets here are simply observations about the environment: sensor networks are increasingly common on the Internet; the smart grid program that the Department of Commerce and the Department of Energy started four years ago is reaching the point where the protocols that are needed have become standardized, and I'll show you a little bit more about that later in this talk; and finally, of course, mobile devices are everywhere now. Speaking of devices, it's been amusing to watch the kinds of things that have been attached to the network. Some of you will remember the jokes we used to tell about ToasterNet, where someday a toaster would be on the Internet and you would send it an SNMP packet to say how burned you wanted your toast. Eventually we couldn't tell jokes like that anymore, because at an Interop show quite a long time ago somebody put a toaster up, and you really could send an SNMP packet to it. Then later, I think the next application in this picture, or maybe the earliest, was the Internet-enabled
picture frame. I remember somebody running into my office saying, man, did you see this Internet-enabled picture frame? I was skeptical at first, but it turns out, to be really honest, these are quite handy. We have all these digital cameras, because we have cameras in our mobiles for example, and people upload things to the net and the picture frames download them. So if you have children or grandchildren you can sort of keep track of what the family is doing: when you get up in the morning you just watch the picture frame loop through photos that have come up from family members. Of course, I'm sure you all appreciate that if the website the picture frame is downloading from gets penetrated, the grandparents may see something that they hope is not the grandchildren. So that means security is just as important at home as it is at work. These things that look like telephones are just voice-over-IP computers. The one in the middle was the most amazing: this is the famous Internet-enabled surfboard. The fellow who built it is Dutch, and I imagine him sitting on the water waiting for the next wave, thinking, you know, if I had a laptop on my surfboard I could be surfing the Internet while I'm waiting for the next wave. So he puts a laptop in the surfboard, sticks a Wi-Fi service in the rescue shack, and now he sells this as a service. I also used to tell jokes about Internet-enabled light bulbs; I used to go around saying someday, you know, every light bulb will have an Internet address, until somebody handed me one the other day. It's an IPv6 radio-enabled LED light bulb; it costs about twenty dollars or so, and the controller goes for another hundred and fifty. It's an expensive light switch. In fact, though, this is the harbinger of things to come, where devices become remotely controllable through a control box or through a web-based interface,
and that is indeed what we're going to see. Here's another example: it's a sensor network, and it's a commercial product, so this is not me in the garage with a soldering gun. It comes from a company called Arch Rock, which was acquired by Cisco Systems about a year or so ago. These are little sensors that run on two double-A batteries. As an experiment I let them run for a year; they got down to 2.7 volts and the system was still working, so I was impressed by that. They pick up temperature, humidity, and light levels in the house, and every five minutes they transmit that data to a controller, and the controller relays it to a storage server in the basement. So I have accumulated, over a year's time, really good engineering information about heating, ventilation, and air conditioning, and when it comes time to adjust the system it's not anecdotal; I have real engineering data to do something with. I may be the only one who would do that, but it seemed like the right thing to do. Now, one of the rooms in the house is the wine cellar, and it's important to keep that below 60 degrees Fahrenheit and keep the humidity up over 30 or 40 percent, to keep the corks from drying out. So that room is alarmed, and if the temperature goes above 60 degrees Fahrenheit I get an SMS on my mobile saying, you know, your wine is warming up. This happened a couple of times: when the power went out, the cooling system failed, and I would get little messages saying your wine is getting warmer. One time I was away for a week and nobody else was there to reset the system, so every five minutes for three days I kept getting this alarm. I got back and the wine was at 70 degrees, which is not the end of the world, but it's not good. So I called the Arch Rock guys and asked, do you make remote actuators, and they said yes. And I asked, do you do strong authentication? Because there's a 15-year-old next door and I don't want him messing around with my wine cellar. So, you know, this was a weekend project
to install that capability. Then I got to thinking: you know, I can actually tell if somebody went into the wine cellar when I'm not there, because I can see the lights have gone off and on, but I don't know what they did in there. So I was thinking, well, maybe I should put an RFID chip on each bottle, and then I could do an instantaneous inventory to see if any bottles have left the wine cellar without my permission. Then I was boasting of this design to one of my engineering friends, who said, there's a bug. I said, what do you mean there's a bug? And he says, we could go into the wine cellar and drink the wine and leave the bottle. So now we have to put sensors in the cork, and as long as you're going to do that, you might as well sample the esters and figure out whether the wine is ready to drink. So before you open the bottle you interrogate the cork, and if that's the bottle that got up to 75 or 80 degrees, that's the bottle you give to somebody who doesn't know the difference. So these are actually useful things to have around the house, honestly. I think sensor networks are going to be very much with us and very much on the Internet, because they can provide so much important feedback, and not only from a purely monitoring point of view: the same sorts of things can be used for security and other purposes. You can imagine webcams as well as sensors detecting motion and things of that kind, all of which are quite common. The ability to remotely manage and observe is very important, and that feedback loop is going to matter from an environmental point of view. I would say that we don't always understand the consequences of our actions, and if we get feedback, you know, why was the electric bill this much this month, we get some information about what choices we made that led to those costs. So this kind of feedback loop may actually help us do a better job of managing our response to environmental problems,
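The wine-cellar alarm he describes is, at bottom, a pair of threshold rules evaluated on every five-minute sensor report. Here is a minimal sketch of that rule; the function and field names are hypothetical, though the 60-degree and 30-percent figures are the ones from the talk:

```python
# Minimal sketch of the wine-cellar alarm rule: each five-minute reading is
# checked against the thresholds mentioned in the talk. All names are
# hypothetical; a real system would hang this off the sensor controller.
def check_reading(temp_f, humidity_pct, max_temp_f=60.0, min_humidity_pct=30.0):
    """Return a list of alarm messages for one sensor report."""
    alarms = []
    if temp_f > max_temp_f:
        alarms.append(f"wine cellar warming up: {temp_f:.1f} F > {max_temp_f:.0f} F")
    if humidity_pct < min_humidity_pct:
        alarms.append(f"humidity low: {humidity_pct:.0f}% < {min_humidity_pct:.0f}%")
    return alarms

# A power outage pushes the temperature past the threshold:
print(check_reading(70.0, 45.0))   # one alarm: the cellar is warming up
```

In the story, each alarm would then be relayed as an SMS, which is exactly why the repeated-alarm flood happened while nobody was home to reset the system.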
including global warming. Okay, now I want to shift into some observations about Internets of things. I have no answers for you; this is purely an exercise in speculation and questions about what we would do. I want you to imagine for just a moment that most of the appliances we use at home, at work, in the car, and carry around on our person have been Internet-enabled. They have IPv6 addresses, and we have to figure out: what does it mean to interact with these devices? What does it mean to manage them? Who gets to manage them? What is the authorization process? How do we make sure that a controller doesn't accidentally control something it shouldn't be managing? How do I make sure that the fifteen-year-old next door doesn't get control over my appliances or my entertainment system? I can imagine walking in and finding entertainment I hadn't anticipated or ordered. So the first really tough problem, and this is why you're a particularly important group to ask to think about this, since you think about large systems, architectures, protocols, structures, and procedures on a large scale, is scale itself: it's trivial to do this for a dozen devices; it's not trivial when you're talking about billions of devices. So one question, in a residential setting (let me separate residential from factory floor), is whether we should have one controller for every device or appliance of every kind that might be Internet-enabled. Should it be a local device? Is it possible, or should it be possible, to remotely manage and control these devices? Should there be multiple controllers? Maybe we should group these devices by function: one is a sensor network to deal with security and environmental control, another is a controller for all the light bulbs, another is a controller for entertainment equipment. I don't know the answer to any of these things; the point I want to make is we're going to have to walk
through every one of these questions, and more, in order to figure out how to design and build a system that manages billions of Internet-enabled devices. Who else should be allowed to manage these devices, or control them, or see them? I'll come to some cable-system examples later, so let me leave that for now, but you might decide that you want some of your devices to be managed by a third party, for security for example. You might want someone monitoring the devices at all times, even when you're not there. That means you need to assign them the authority, and to tell the devices that this third party has the authority to access the device, to see what its state is, possibly even to change its state. Another question: what happens if you're in an apartment or condo complex, and you have a large number of devices all in the vicinity of each other? If it's a radio-based system, how do you make sure that the right controller is connected with the proper device and not mixed up? And there are people who would be very eager to exploit the possibility of observing, and possibly controlling, the devices in your system, especially the security devices. So this is a non-trivial kind of exercise. Should it be wired? Should it be wireless? Will it be both? I submit to you that we will end up with a mixture of those things. Sometimes it's convenient to put wired systems together, but often it's inconvenient, and certainly if you're moving from one place to another that doesn't have a pre-wired environment with wires in the right places, radio turns out to be very attractive; but radio also has the awkward property that it penetrates the walls, so it might be detectable, and possibly exploitable, by others. Out on the factory floor there is another interesting possibility: I would guess that there are going to be multiple controllers for many kinds of devices out on the factory floor, and they may be grouped by
functionality or by some other organizing principle. An interesting question is whether this is a very flat structure, where you have groups of devices each controlled by a controller, or a hierarchical structure, where some overarching system cascades down through multiple controllers in order to manage an increasingly large system. Scalability, again, is a big issue here. What about direct interaction between the devices? An example: in the case of the Arch Rock system, those devices are store-and-forward devices; they actually form an ad hoc network. They're not mobile in the sense of moving around in real time, but they are movable, so you can install them wherever you need to, or move them around at need. They form a network automatically, and that network changes depending on radio connectivity. Those are often called MANETs, which is mobile ad hoc networks, or, for fixed ad hoc networks, FANs; I don't know how to pronounce that. These things are quite powerful, because in the absence of wired connectivity you have a system that's dynamically adapting to local conditions to maintain connectivity, and that's how the Arch Rock system works in my house. And then an interesting question: when we think about the Internet's design and how we manage it, we know that we've grouped routers into autonomous systems, primarily for two reasons. One is boundaries of authority: who owns and operates this network. The other is the boundary between interior gateway protocols and exterior ones. So I wonder whether this notion of autonomous system would help us in this environment of devices, where perhaps a device controller is thought of as the moral equivalent of one of the AS border routers, managing a collection of devices inside its scope of authority. I don't know whether the autonomous system model necessarily informs us deeply about what the architecture of these systems should be, but it certainly has scaled pretty well, and it has allowed
us to express boundaries of authority and control, and to illustrate differences among groups of routers running different kinds of protocols. So it might turn out to be a helpful concept for us to think about. Now, configuration and management, of course, is everything here, because think for a moment not only about scale but about how you get started. How do you actually configure these things? How do they get their Internet addresses? Do they make them up automatically? Do we have to assign them? How do we assure uniqueness of those addresses? Certainly we don't want two devices to have the same address and, you know, have the controller get very confused about whether it did or didn't change the state of that device. The other interesting problem is that you can't solve this solely by thinking: okay, I'm starting at ground zero, all the devices have no addresses, I'll configure the whole thing, turn it on, done. The next day somebody shows up with another device, and you have to ask: do you want to do the whole thing all over again, or do you want an incremental ability to add something? I submit that incremental is important. So we have to work our way through case analysis. I don't know how many of you are believers in that, but I am a true believer in case analysis, because if you don't do as many cases as you can think of, the one you don't do is the one that will kill you later, because you didn't think of the problems it poses. What about auto-discovery? Here I'm thinking about devices discovering each other, or controllers discovering devices. If you think about incremental addition, about devices being turned off and turned back on, about battery failure and all that sort of thing, you have to discover these things, and there are several ways to do it, more than I listed here. One is that the controllers periodically try to figure out which devices are
present; or maybe the devices announce themselves and the controllers are listening; maybe other devices are listening, in order to discover that a device needs to be brought into the system. Of course there's an interesting issue here about cases where a device shows up that you don't want in the system. How do I reject a device even though it appears to be in the orbit of my control system? This is to avoid the guy who wants to hack your network: he's in the apartment next door, or he was the rug cleaner who came in and installed a little gadget that you didn't see under the rug, all those sorts of things. So authentication is really important, and you really need some way of assuring that the devices that are part of your system, and the controllers that are controlling the devices, have the authority that you granted them. I would submit to you that cryptography, public-key encryption, digital signatures, and things of that kind may turn out to be really important elements in the way this thing gets designed. We need to know, when a device is communicating with us, whether it's a controller or another device we are supposed to be cooperating with; we need a pretty good way of saying, I'd like to confirm that you're a device I should be talking to. In the back of your head I want you to think a little bit about public safety communication, because the situation we get into in public safety communication is mirrored in some ways in this particular kind of challenge. Plainly the controllers and the devices have to be able to authenticate. Bluetooth has an interesting feature; many of you use Bluetooth-enabled devices, and the protocols for getting them connected typically involve having a device make itself visible, and you get to decide, manually, whether you will allow that device to be part of the system or not. Whether we use the Bluetooth protocols or not is open to
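The admission rule sketched here, in which a device can announce itself and be seen, but joins the system only after explicit authorization, can be written as a small state machine. This is a speculative sketch, not any real product's API, and every name in it is hypothetical:

```python
# Sketch of announce-then-authorize admission: discovered devices sit in a
# pending set until the owner explicitly admits or rejects them, so the
# neighbor's gadget can be *seen* without ever becoming part of the system.
class AdmissionController:
    def __init__(self):
        self.pending = set()    # announced but not yet decided
        self.admitted = set()   # authorized members of the system
        self.rejected = set()   # explicitly refused devices

    def on_announce(self, device_id):
        """A device made itself visible; don't trust it yet."""
        if device_id not in self.admitted and device_id not in self.rejected:
            self.pending.add(device_id)

    def authorize(self, device_id):
        self.pending.discard(device_id)
        self.admitted.add(device_id)

    def reject(self, device_id):
        self.pending.discard(device_id)
        self.rejected.add(device_id)

ctl = AdmissionController()
ctl.on_announce("thermostat-7")
ctl.on_announce("unknown-gadget")   # e.g. the rug cleaner's implant
ctl.authorize("thermostat-7")
ctl.reject("unknown-gadget")
ctl.on_announce("unknown-gadget")   # re-announcing doesn't get it back in
print(sorted(ctl.admitted), sorted(ctl.pending))
```

In a real deployment the authorize step would be backed by the cryptographic authentication he argues for, rather than by trusting the announced identifier.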
obvious debate, but the idea that there is a way of sensing a device while not allowing it to be part of the system without some kind of authorization is, I think, very important. Not every device is going to have a display, and so this gets into other kinds of problems: how do I know what's going on with a particular device, especially during this configuration process? Are web tools available, or could we create web tools, to make it easy to configure these systems? The implication of that is, you know, where is the web server that presents the web interface? In some of these systems the controller actually offers a web-based interface, and the devices that are part of that controller's ambit are presented to you in a web-based way. Another question: every one of these devices is going to have some kind of state information, even simple ones like the light bulb, which is off, on, or broken. I don't know how you tell the difference between off and broken; in theory, if you try to turn it on and it doesn't come on, it must be broken. Anyway, having the ability to look at state is important; having the ability to set state is even more important, and more delicate. Who can set what state, and under what conditions? Can more than one controller control the same device, and how do they coordinate that? Should we be polling these devices? The problem with that is it might take a really long time if there's a large number of devices. Every one of you who has ever tried to manage a bunch of routers has already encountered all of these problems: do I poll, do I use alerts, do I generate alarms, what if I get two million alarms an hour, and all that, you know, headache. And then, what protocol should we actually be using? Is SNMP an appropriate framework for that, with MIBs and so on? So I don't know the answer, like I say, to any of these things. I suspect that some people who have
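On the poll-versus-alarm question, one standard mitigation for the two-million-alarms-an-hour problem is to coalesce repeats of the same alarm within a suppression window. A minimal sketch, with a made-up window size and made-up names:

```python
# Sketch of alarm coalescing: identical alarms arriving inside a suppression
# window are counted rather than re-delivered, taming alarm floods from a
# large device population. Window size is a hypothetical parameter.
class AlarmCoalescer:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.last_sent = {}    # alarm key -> timestamp of last delivery
        self.suppressed = {}   # alarm key -> repeats swallowed since then

    def report(self, key, now):
        """Return True if this alarm should be delivered now."""
        last = self.last_sent.get(key)
        if last is None or now - last >= self.window:
            self.last_sent[key] = now
            self.suppressed[key] = 0
            return True
        self.suppressed[key] = self.suppressed.get(key, 0) + 1
        return False

c = AlarmCoalescer(window_seconds=300)
# A stuck sensor reports the same alarm every 60 seconds for 15 minutes:
delivered = [c.report("cellar-temp-high", t) for t in range(0, 900, 60)]
print(delivered.count(True))   # only one delivery per five-minute window
```

The suppressed counts can still be surfaced in a summary message, so information isn't lost, only batched.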
built systems like the Arch Rock system and others have already answered some of these questions. I didn't do my homework and go look to see what answers they came up with, but I would suspect that those answers won't be complete, given the significant scope of all this. And I've already alluded many times to how we avoid the possible accidental or deliberate hijacking, or accidental joining, of devices to the wrong community. Now, there are some examples of large numbers of devices that are part of controlled systems, set-top boxes being a good example. I'm going to use FiOS, because I happen to be a subscriber, and I had a chance to talk to the guy who designed FiOS. We covered his whiteboard with pictures of all the various devices and what protocols were going back and forth. It's an IP-based system, which is kind of cool. Here's how it typically works: the cable company, in this case Verizon (I guess they're not a cable company, they're officially a telco, but you know what I mean), provides the configuration for the devices. The devices come in, they get attached to the net by the installers, and then there's a whole series of boot-up processes that configure them. The controller of these devices is actually remote; it's at the head end of the system. In this particular case I can go to a website that knows about the devices under my account, and once I log in it knows which account is mine and which devices and device IDs it is supposed to have access to, and it presents the state of those devices to me. I can see what programs have been recorded, which programs are scheduled to be recorded, and I can modify that. There's even some direct interaction between the set-top boxes now, so that, for example, if one DVR has recorded a bunch of stuff and you go to a different set-top box, you can watch the streaming video off the DVR on any of the set-top boxes. It turns out they're doing video over IP,
which is also kind of cool. So the set-top boxes are actually interacting with each other: a remote device without the DVR can interact with the DVR and erase things or schedule things. So that's one kind of paradigm. The Arch Rock system, which I've already mentioned, is a manually configured system, at least the way I set it up, but it runs this mobile, or at least fixed, ad hoc network automatically, so it does store-and-forward of the content and maintains connectivity to the control system. I actually added some routers at their recommendation, just because radio connectivity was always a little tricky, and they boosted the signal. And then there's a controller that manages the system and provides a web interface. The thing I would draw your attention to is that there will be a pretty big diversity of devices in this ecosystem, and their functions will vary dramatically, so the question of how to organize and architect the system is going to be influenced by that diversity. Imagining a common system for all such devices sometimes makes my head hurt, but I think there is going to have to be some commonality, and surely some standards, because we want things to interoperate. We want to be able to buy a device that should be controllable and make it controllable by some other control system that we already have. If we have the right set of standards, we get the sort of thing we got out of the Internet: a large number of competing sources of interoperable systems. That, I hope, will be the case for this Internet of Things. I also wanted to draw your attention to the smart grid program, because it's another example: not a work as in done, but a work in progress, trying to manage electricity-consuming devices. It also, by the way, has to deal with electricity-generating devices, and this makes a big difference. It started out as a system designed to allow devices not only to report their usage of
electricity, but also to respond to what's called a demand-response signal, saying: we are approaching the peak load; you may want to not heat the water now, or not run the dishwasher now; and if you do, it will cost you more, because we have peak-load pricing, and so on. The idea was to allow consumers to express their preferences when it comes to peak-load usage, and to have the devices in fact be under the control of the power generation system, in order to clip peak load. That avoids the obvious problem of having to build capacity to meet a peak that only gets used two percent of the time; if you only use a facility two percent of the time, the cost per kilowatt-hour of that facility is very high, so if you can clip that peak, that's a good thing for everybody. The Smart Energy Profile version 2.0 is layered on top of IPv6 and some other structures. It was a big battle to get this group to accept the idea that IPv6 was actually implementable in small devices; there are a lot of people who still think that's impossible, despite the existence of, you know, the light bulbs and other things that can do it. It's like TCP/IP in the early days: everybody said you couldn't build TCP/IP for a personal computer, you had to have a giant mainframe, so Dave Clark at MIT said, that can't be right, and went off and built TCP/IP for a PC and said, see? The same sort of thing is true for v6. In any case, there is a big issue here in smart grid about safety and security. People don't want inappropriate remote control of their electricity-consuming devices, and they certainly don't want the devices to operate in an unsafe manner. But the big challenge, as I see it, is consumption versus production of electricity. It's one thing to have devices that just consume electricity, tell you about it, and agree not to consume under certain conditions; it's something else to have a device that plugs in and can push electricity into the system. In the traditional electrical grid
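The demand-response idea described here, where the grid signals an approaching peak with a price and a deferrable appliance runs or waits according to the consumer's stated preference, can be sketched as a single decision function. SEP 2.0 defines the real message schemas; the field names and prices below are made up for illustration:

```python
# Sketch of a demand-response decision: a deferrable appliance runs now only
# if the signaled peak price is within the owner's preference; otherwise it
# waits. Field names and prices are hypothetical, not the SEP 2.0 schema.
def should_run_now(signal, max_price_cents_kwh, deferrable=True):
    """signal: dict with a 'peak' flag and 'price_cents_kwh' from the utility."""
    if not signal.get("peak"):
        return True    # off-peak: just run
    if not deferrable:
        return True    # some loads can't wait (e.g. refrigeration)
    return signal["price_cents_kwh"] <= max_price_cents_kwh

peak = {"peak": True, "price_cents_kwh": 42.0}
print(should_run_now(peak, max_price_cents_kwh=15.0))                    # dishwasher waits
print(should_run_now(peak, max_price_cents_kwh=15.0, deferrable=False))  # fridge keeps running
```

The point of the sketch is that the consumer preference (the price ceiling) stays local to the device, while the utility only broadcasts the signal; the generation side never directly commands the appliance.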
In the traditional electrical grid it is common to have central production and distribution. The system that's producing is trying very hard to stay in sync with demand, and the way the system works is you should produce no more and no less electricity than is demanded, and that's, you know, a pretty tricky, tough problem to manage; for the most part it's pretty impressive that it actually works. But as we get into a situation where there is the possibility of generating electricity in a distributed way, with solar power, wind, geothermal, and in fact also electric cars whose batteries might actually push energy into the system instead of pulling it out, we have a much more complex control problem. Any of you who know about control theory know that when you have a highly distributed system and there are delays for state information to get to various parts of the system, you have the possibility of making decisions based on old data, and this is known to have instability problems. So this is a huge challenge for the smart grid program: to cope with an anticipated future of highly distributed generation and control. There's even a lot of discussion now about microgrids or nanogrids, where a neighborhood has a power generation capability, and these can all be decoupled from a more centralized grid in order to avoid, you know, major blackouts cascading through the system. So this is all going to be a really interesting exploration, and those of you who may in fact have responsibility for building systems associated with this are going to have a really interesting decade ahead. There are all kinds of questions here: what can the users actually say about the way the devices manage their use of electricity? What kind of information can you get back to analyze your use of, or the way in which your appliances are using, electricity? Could there be multiple parties looking at that data, giving you different analyses and different advice about what to do?
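The instability point about decisions on old data is easy to see in a toy simulation. All the numbers below are made up for illustration; this is not a model of any real grid controller:

```python
# Toy illustration of control on stale state: a producer adjusts
# supply toward demand, but only sees a measurement that is
# `delay` steps old. With fresh data it converges smoothly; with
# old data it overshoots past demand and oscillates.
def simulate(delay, gain=0.8, steps=30):
    demand, supply = 100.0, 90.0
    in_flight = [supply] * (delay + 1)   # measurements still in transit
    trace = []
    for _ in range(steps):
        observed = in_flight.pop(0)      # a delay-steps-old reading
        supply += gain * (demand - observed)
        in_flight.append(supply)
        trace.append(supply)
    return trace

fresh = simulate(delay=0)   # smooth approach toward demand
stale = simulate(delay=3)   # corrections based on old data overshoot
```

With no delay the controller never exceeds demand; with a three-step delay it keeps adding supply against a stale reading and shoots past it — exactly the distributed-control hazard described above.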
I want to switch gears now, with your permission; I know I've been going on now for over 30, 35 minutes. I wanted to tell you that the Internet is under threat right now. Some of you are presumably aware of something called the Internet Governance Forum, which is a multi-stakeholder meeting that takes place every year, and has for the last seven years. It arose out of something called the World Summit on the Information Society, whose first meeting was held in 2003. Now at WSIS — this was an intergovernmental meeting, so these are diplomats — the first question was: what's an information society? And somebody says, well, the Internet sort of looks like that. And then they said, who's in charge of the Internet? And we said, well, nobody, it's distributed. And they didn't believe us; they absolutely refused to believe that it was possible to have a distributed system of this scale and magnitude that wasn't centrally controlled. So they looked around for who had any central authority, and they picked on ICANN. As you may know, I was still chairman of ICANN around that time, and so of course they were looking at us, asking, you know, who are you, and why do you have control over the Internet? We said, we don't have control over the Internet; we support the Internet by trying to make sure that Internet addresses are uniquely allocated and domain names are uniquely allocated, so we don't get tied up in our shoelaces. But they believed that this was control, and then of course they observed that ICANN has a contract with the Department of Commerce, therefore the United States has control of the Internet. And although there are probably some legislators in the United States who would like to believe this, in fact this is a highly distributed, very collaborative, very bottom-up environment, just as yours is. So this forum anyway did involve civil society, the technical community, governments and the private sector, so this is a multi-stakeholder activity. The International Telecommunication Union, on the other
hand, is an intergovernmental organization which was founded in 1865. It was originally called the International Telegraph Union, or Telegraphy Union, I guess, is the correct thing; then when the telephone was invented in 1876 they changed their name to the International Telephony Union, and then as other kinds of technologies came along they became the International Telecommunication Union, because radio had come along and television had come along and satellites had come along. They're broken up into three parts. There's the ITU-D, for development, and to their credit they actually do a lot to spend money to improve infrastructure, especially in third world, developing countries. There's ITU-R, which manages the allocation and use of radio frequencies, and that's again an intergovernmental arrangement. And then there's ITU-T, which stands for telecommunications, and that's the standards-making organization of ITU. To be a little pejorative, and to try to be fair, ITU-T has had a big role to play in a lot of telecom standards, especially things like ISDN and X.25 and broadband ISDN, ATM; these are all relatively lower-level and in many cases not very long-lived standards. However, they have not had any direct responsibility for anything related to the Internet, because we just ride on top of them — you know that old expression, IP runs over everything, including you if you're not paying attention. So we've always dealt with the ITU's standards as simply another bearer structure for IP. However, what has also happened in the course of the last decade is that all of the telecommunications applications that used to be done on specialized networks have migrated over to be applications on the Internet: voice over IP, streaming video and radio — really, streaming video and audio. All of this stuff has risen up to application level in the Internet, and the other specialized networks are becoming less and less important, and therefore many of the standards that ITU is responsible for become less important. The
natural reaction of any institution that wants to preserve its existence is to reach out for new territory in order to have an excuse to continue. So the ITU held recently in Dubai something called WTSA, the World Telecommunication Standardization Assembly, which talks about standardization, and they brought into the discussion all kinds of things about the Internet. One of the more controversial things was a standard called Y.2770, which is a standard for deep packet inspection, and there are lots of countries around the world that would like very much to have a standard way of inspecting all of the packets that go through the Internet; so this raised a lot of eyebrows and some concern. The ITU World Conference on International Telecommunications is ongoing now in Dubai also, and it is negotiating a treaty. The treaty is called the International Telecommunication Regulations, the ITRs. The last time they looked at the ITRs and had this meeting was in 1988; the Internet had been in operation for only five years, not counting all the experimental stuff that we did in the preceding decade. So you can imagine — this is a diplomatic thing, and these folks couldn't spell Internet, couldn't spell IP most likely, so, you know, they had no idea about the Internet, so it wasn't incorporated into the ITRs, which was a good thing. And for 25 years the Internet was untouched by these International Telecommunication Regulations, which mostly talked about charging structures, settlement rates, you know, interconnection practices of the underlying low-level transport systems. But come 2012 they reconvened the meeting, and the debates right now all center on language in the International Telecom Regulations, or proposed language, which would invoke the Internet in either a direct way or an indirect way. An example of an indirect way is to incorporate language in the ITRs speaking about spam. Now as soon as you talk about spam, you're talking about electronic mail, you're talking about an
application on the Internet, and content. Never in the past history of the ITRs has content been an issue. Can you imagine having international regulations about what you're allowed to say on the telephone? I mean, come on. So that's what these folks are up to. But let me make sure you understand: it's not the ITU itself that's the threat, it's the member countries who are represented. 193 countries are there, including the Russians, the Chinese, the Brazilians, the Indians, the Saudis and the South Africans, and so there is a collection of governments, some of whom are very authoritarian and who are threatened by the content of the Internet, because they watched the Arab Spring, for example, or because they see viruses and worms and Trojan horses that are propagating through the net. Their concerns are not unreal. If you're an authoritarian government that is threatened by freedom of speech, the Internet is the most democratizing engine ever invented for freedom of speech, and so it's a threat to some of those countries. So those who feel threatened are bringing proposals to the table which are draconian in their character. I can tell you the U.S.
delegation, which consists of about 140 people, is absolutely dead set against the ITRs evolving into a device for the control of the Internet, but we only have one vote, and there are 192 other countries that can express their opinions. In theory nobody votes; in theory it's a consensus. The problem of course is how you define consensus. Now we know how to do it in the IETF, right? We hum, and if the decibel level is high enough we assume we have adequate consensus. But selling that to a bunch of diplomats is really tough, so we're probably not likely to get there. The claim is that there will be consensus and not voting, and that's probably a good thing, except for one question: who decides consensus has been reached, and how is that determined? That's not well defined, so we don't know what the outcome of all this is going to be. Let's see — my god, it's 45 minutes and counting. Let me do this: I'd like to ask you to just look at these bullets for a moment and remind yourself that we all have responsibilities as systems engineers, as system managers and designers, to make systems as safe as we can and to make them as secure as we can, although safety, I think, is at the top of my list. The reason I'm so concerned about using the word security is that it very quickly devolves into a debate about national security, and then you get into things like what kinds of national responses should there be: should we be launching cyber attacks against what we think is a national-scale attack against us? And the problem with all those kinds of thoughts is that attribution becomes a huge issue. If you are not absolutely sure who done it and you launch a counterattack in any mode at all — whether it's a cyber counterattack or, god help us, a conventional military attack or worse — you'd better be damn sure that you have good attribution, because if you don't, we're in real trouble. And in the cyber world it's not very hard to launch a false-flag attack, and so you can imagine launching
against the wrong party, and the consequences of a launch against the wrong party are just almost incomprehensible. It even gets worse domestically. Let's think for just a second: you're under DDoS attack, there are lots of PCs that have been compromised in the United States, owned by innocent parties who don't know their machines have been compromised, and, you know, the people defending some systems say, we're under attack, we're going to launch a cyber response. And so you launch this response against all of the IP addresses that are apparently attacking you, and the plan is to wipe out their disk drives, and it turns out we just wiped out two million PCs that belong to innocent Americans — whose PCs might have information on them that was in fact very important to their daily work, and maybe even very important to our welfare. So we just shot ourselves in two million feet; not a good idea. The whole point here is that we really are going to have to think hard about how to build responses against these kinds of problems. The problems are real; the responses must be realistic. This bullet about digital signatures — just one comment: nobody knows what the legal weight of a digital signature is, and so if there's a dispute over a contract that's been signed digitally, it's going to be a really interesting question about who wins and who doesn't win. We're going to need international negotiations on this, because there are contracts across international boundaries. And this last thing, digital vellum: just do me a favor, if you will, and Google "bit rot"; you'll find a rant from me somewhere in the responses about bits that don't have any meaning because the software that created them doesn't work anymore, because you don't have the right operating system or there's some other problem. It's a huge, huge problem and we have to figure out how to solve it. Now I think what I would propose to do, in the interest of time and to allow for some
interaction if you wish it, is to skip through a couple of slides to give you a sense for what I am skipping over. I'm sorry — oh, you really want me to keep blasting along? All right, in that case let's go back to this digital vellum; you'll be sorry you said that. Honestly, this is a really interesting problem; it's at least as interesting as this whole question of how the hell do I configure a billion devices. If you think a little bit about what we do every day when we use applications — something to create a document, a text document or a spreadsheet or a presentation or something — the resulting files are pretty complex objects, but those bits don't mean anything unless the application software is available to help interpret them. And I'm sure many of you have experienced what I have: occasionally you'll keep tracking the new versions of the application and you keep generating new files, and then one day you want to go get some previous file from some distant past and pull it up and use some of the content in it. I had occasion to do this the other day. I'm running a Macintosh, I'm using Microsoft Office 2010, and I pulled up a 1997 PowerPoint file and tried to open it up, and I got back "what's that? I don't know what that is," and so that was disappointing. By the way, I'm not poking fun at Microsoft here; this is a problem that would occur even if we were using open source software. Over long periods of time it isn't clear how to maintain this kind of functionality. My long-term model in my head of this is: it's the year 3000, you just did a Google search and you turned up a 1997 PowerPoint file, and you're using Windows 3000, and, you know, it doesn't work. And the problem here is again maintaining backward compatibility; you can't really do that forever with the old formats, so ultimately you just can't afford to support every format ever known, and so you have crutches that get you a little way forward. You may be able to write
software that translates an old format to a new one, but even that is not something that should be relied upon. There are other problems too. Even if the software is available to you, but you are forced for whatever reason to run a new version of an operating system, or to go to a different operating system, that piece of application software may not run on that operating system, so now you have a problem again. Or the company goes out of business and the software is no longer maintained, or it upgrades the software and it's no longer backward compatible. In that particular case of loss of backward compatibility, you go to the company and say, well, can I get the source code for, you know, this application so I can fix it? And the answer is no. And why not? Well, because half of the stuff from the old thing is part of the new thing, and I consider that to be proprietary, and you can't have the source code. Or if they go out of business entirely and you say, can I have the source code? The answer is no, that goes to the bankruptcy court and it will be sold as an asset. So we don't have good tools right now as engineers to try to solve this problem at all, and, you know, it just gets worse. Does the operating system work with the old software? Ultimately you may need a particular piece of hardware in order to run the operating system that ran the old application in order to interpret the bits of the files. Which leads me to speculate that maybe cloud systems will help us, because we can run kernels that allow us to emulate anything, so we might have to emulate an old piece of hardware that runs an older version of an operating system that runs an older version of an application in order to interpret the bits. And that's not totally nuts, but it is painful; it may turn out that that is needed. The other thing which seems to me clear is that we may need to have a redefinition of fair use, because you will have invested an
enormous amount of time and energy creating these digital objects, and to be denied their use is just wrong. And so we need to find a way to say that people who create application software have to provide for the case that there is no further ability to interpret the older files; there has to be some legal framework in which you can get access to something to redress the problem. We have not had that discussion, and I suspect that we're going to have to get the intellectual property people, the software people and the legal people together to discuss what framework would permit us to solve this problem. If we don't do something, by 2100 our 22nd-century descendants will wonder about us, but they won't know much, because a lot of the files and data that we accumulated won't be interpretable. And when I look at real vellum — which is why I'm calling this digital vellum — things that are a thousand to 2,000 years old, these illustrated manuscripts on sheepskin, are stunningly beautiful. They're quite readable, assuming of course you know ancient Greek or Latin or something like that; at least it's visually presentable. I mean, if you know the language you can read it; of course if you don't know the language, it's like not having the application software that helps you figure out what it all means. So this is actually a big deal. All right, let me go to this next one. Some of you, I hope, are keeping track of new technologies that have come up for networking. OpenFlow is one of the recent ones, coming out of Stanford University — Nick McKeown and Guru Parulkar and their team — and that's now become commercialized; there are companies that are making OpenFlow routers. We're quite interested in this at Google; we invested in building our own OpenFlow routers and put them into all of our data centers, so we replaced all of our networking gear with OpenFlow capability. It's really quite stunning, because it's a centralized design, which means there are some scaling issues, but it works very well inside the data center, and it
has the property that you can do a very good job of managing where the flows go in your underlying transport system. So we are able to get very, very high percentages of use out of the optical fiber network, which we otherwise might not have gotten; let's say two-thirds of that kind of performance otherwise. So it turned out to be a very, very important investment for us. I have huge respect for the team that did this; I think they did it in about six months, writing the system and injecting it into all the operating data centers — an amazing piece of work. There are some questions in my mind about how scalable this architecture is, because of the central control, which is giving it a lot of its efficiency. The thing which is interesting, though, is if you draw a circle around an OpenFlow network and you draw a circle around another one and you ask how they interact with each other: if you can't run centralized OpenFlow algorithms, we're going to have to have BGP-like interconnections, which of course is what we do now, in order to bind these systems together at a higher layer of architecture. But the thing which makes OpenFlow so interesting is not its ability to manage the flows; it's the fact that it doesn't need to use the address bits exclusively to populate the routing tables. So just think for a minute about a router that doesn't look at just address bits, but looks at any bits in the packet that it wants to, to figure out what's a flow. What if it didn't look at the address bits at all? What if it just looked at the other bits in the packet, and had a routing algorithm which announces those bits to populate the routing table? The routing table actually doesn't know what's an address; the routing table knows, I've got bits in the table in this entry and I have a matching algorithm, and if it matches, I'll go wherever that routing table entry says to go. It doesn't say anything at all about whether the bits mean addresses or something else.
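A hypothetical, much-simplified sketch of that idea: a forwarding table whose entries are arbitrary (mask, value) bit patterns rather than destination-address prefixes, with first-match-wins semantics. This illustrates the concept only; it is not the actual OpenFlow match structure:

```python
# Forwarding table keyed on arbitrary packet bits, not addresses.
# An entry matches when (packet & mask) == value; the table does
# not know or care whether those bits happen to be an address.
class BitMatchTable:
    def __init__(self):
        self.entries = []                 # (mask, value, port)

    def add(self, mask, value, port):
        self.entries.append((mask, value, port))

    def route(self, packet):
        for mask, value, port in self.entries:
            if packet & mask == value:    # first match wins
                return port
        return None                       # no matching entry: drop

table = BitMatchTable()
table.add(0xFF000000, 0x0A000000, "fiber-1")  # match on the top byte
table.add(0x000000FF, 0x00000050, "fiber-2")  # match on the bottom byte
```

Notice that the second entry routes on bits that, in a real packet, might be a transport port or anything else — the table itself is agnostic, which is the point Cerf is making.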
Addresses are an attractive organizing principle, especially if they're structured in some way so you can collapse the routing table entries; that's a good thing for the Internet, because if we couldn't do that we'd run out of memory for the already 450,000 entries in the global routing table. But there are cases where having content-directed networks, or content-centric networks, using different kinds of bits than address bits, is an extraordinarily powerful idea. So these guys are breaking out of an architectural design that limited the view of networking to addresses, and routing based on termination points that had addresses. So that's the really interesting thing about OpenFlow. We've already talked about configuration management and access control in the Internet of Things, so I won't go on any further with that. Certificates have become a very important part of our environment for the use of public and private keys. The problem with the certificate authority system right now is, first, that there are a lot of self-declared certificate authorities, and many of them are trusted in the tables of browsers, for example. Some of them have been compromised, either because they have been penetrated, or because someone has been bribed, or someone just wanted to go cause trouble. So the question is which certificate authority should we believe, and it's not always clear. You can imagine artificially limiting the list of certificate authorities that you will accept; that means that you have to have control over what the browsers are doing, for example — and it's not just browsers, since certificates are used for all kinds of things. So you have to have control over the tables that say these certificate authorities are believable and credible, and, you know, don't believe anybody else. If you don't have that kind of control, you have a problem. So one of the interesting questions is: why is it that the certificate authority structure has these weaknesses? And one answer, apart from straight vulnerability questions,
is that they can certify too much. Think about the strings that can appear in a certificate: they're not limited to anything. You can put any string you want in there, any domain name you want in there, and have the certificate authority attest to it by digitally signing it. And so the problem we run into is that any certificate authority can say this is a valid certificate for www.microsoft.com or www.google.com. The reason that they can get away with that is that there's no constraint over what a certificate authority can certify. So one of the ideas in the IETF is called DANE — I've forgotten what the acronym stands for, and I bet a bunch of you know. In any case, the idea here is to put the certificates into the domain name system and to limit what the certificate can attest to, to be within the zone of that place. Now if you use DNSSEC, in theory you have a signed sequence all the way down to the zone, and you put a certificate inside the zone; you now have some protection against somebody artificially injecting a certificate into the zone that falsely certifies a particular domain name, unless they manage to penetrate the digital signing chain. And the nice thing is that even if it's penetrated, you could automatically reject anything that is outside of the zone, because that's its canonical restriction: you would not ever accept a certificate in a zone that isn't within the zone scope of the domain name. So that's a way of telescoping down some of the risk factors, and that is in fact underway. Another thing which is, I think, equally important — and something which has been forgotten, I think, in history — is the use of hardware to reinforce software security. [Audience: amen to that; may all of your packets land in the right bit bucket.] Back around 1964 or so at MIT they had this project called Project MAC, which stood for multi-access computer, and I'm sure you all know about that, because UNIX comes from the opposing, you know, unitary
version of multi-access computing. The thing that was interesting is that the MIT guys modified — if I remember right, it was either a GE or a Honeywell 635, and they turned it into a 645 — by putting in virtual memory and traps for attempts to touch memory that you shouldn't touch, and also eight rings of control, which would limit which instructions you were allowed to execute. And so they had a little kernel of software, and if you tried to execute an instruction that you shouldn't, because you didn't have permission, because you were in the wrong ring of control, you got trapped down to the kernel, and the kernel said, who the heck are you, and who are you to execute this instruction? And if you could provide bona fides, then you could execute the instruction, because they put you in the appropriate ring of control; otherwise they didn't. So this was a combination of hardware and software working to reinforce security. A lot of the x86 chips — maybe all of them — have those functionalities in them, and nobody's writing any software for them, so we need to revisit that idea. There's another example of that recently: chipsets have been developed that change the BIOS firmware to include a digital signature. The hardware will not execute the BIOS unless the digital signature checks; the hardware will not install a new BIOS unless the new BIOS's digital signature checks. The idea here is to try to protect the most vulnerable moment in most computers' lives: when you turn them on and they download the operating system with that BIOS microcode. So that's another example of hardware and software reinforcement of security. I've been trying to get the guys at Google to rename our operating system from Android to Paranoid. The marketing people don't think that's a very good idea, but my motivation for this was to say that we need paranoid operating systems and browsers that are ultra-suspicious of everybody and everything, and particularly of anything coming in.
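The signed-BIOS check described above reduces to a simple rule: refuse to run firmware whose signature does not verify. The sketch below illustrates that rule only; real systems use asymmetric signatures rooted in hardware, and the HMAC-with-a-shared-key here is just a stand-in to keep the example short:

```python
# Illustrative sketch of signature-gated firmware execution.
# VENDOR_KEY stands in for the vendor's signing key; a real
# implementation would verify an asymmetric signature in hardware.
import hashlib
import hmac

VENDOR_KEY = b"vendor-secret"

def sign(firmware: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def boot(firmware: bytes, signature: bytes) -> bool:
    # The "hardware" executes the BIOS only if the signature checks.
    return hmac.compare_digest(sign(firmware), signature)

firmware = b"bios image v2"
good_signature = sign(firmware)
```

The same gate applies to installation: a new image is accepted only when its signature verifies, which is what protects the boot path Cerf calls the most vulnerable moment.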
Think about the major infection problem we have today: the browser, which goes innocently to some website and downloads the home page. It's just a file, and it interprets the file. Twenty years ago it was text, HTML and some images — pretty innocuous stuff. Now it's Java, JavaScript, Python, Ruby on Rails and all these other things, and the browsers don't know what that software is actually doing when it's interpreted. And if the browser is running at too high a level of privilege in the operating system, which is a common problem, then when that code gets executed, something gets stored down in the operating system which is a worm or a Trojan horse or a virus or something else. So we have these browsers that are too lax, and we have operating systems that allow too much privilege and are vulnerable; we need to work on that. If you're looking for a really cool dissertation topic: cloud computing is in its infancy. We are in cloud computing land today where the Internet was in 1973. In 1973 we had networks: there was IBM's SNA, there was Digital's DECnet, there was HP SDS, and there were others — we even had X.25, well, not quite in '73, that showed up around '76. So we had these networks, but they didn't know anything about each other; every network thought it was the only network in the universe, and that was okay, because the universe was all the IBM machines that you had bought, or all the DEC machines. So of course Bob and I said, well, we can't have the Defense Department trapped into having to buy only one brand of computer in order to have a network, and that's why we were motivated to do this non-proprietary thing. The cloud computing situation seems to be very similar: we have clouds at Google, we have clouds at IBM, we have clouds at Microsoft, we have clouds at Amazon. They're all not quite the same — they're a little different — and every one of them thinks it's the only cloud in the world. We don't even have a vocabulary that says take something from this cloud and give it to that cloud, and so one of the things that I believe
we need to work on — and there are a lot of people poking around in this space — is a set of standards and protocols that allow us to move data from one cloud to another. Now I want you to think about this for a minute, because in a lot of the cloud systems there's access control around the data; it's metadata. We certainly try to do that at Google: we don't want everybody else reading your email except you, and our programs that figure out whether or not we should put up an ad — but we don't share that with anybody else; that's all strictly internal. So the point here is that metadata has to move between the clouds. At Google we have a Data Liberation idea, where if you put it in, you should be able to get it back out again, and we believe in that. But I don't think it makes sense to say, well, I know you've accumulated a petabyte of your data in our system and you can easily download it to your laptop — you know, your reaction to this is, huh, maybe in Kansas City with the gigabit network, but no place else, right? So the point here is that we should have the ability to say: take this data from cloud A and move it to cloud B, and move all the metadata along with it, so it stays access-controlled. Getting standards for how you express the metadata and identity: how do we commonly express identifiers for people or programs or whatever that have access to the data, and how do we make that standard? So there's a long way to go, and if you want to get really ambitious, in theory we should be able to run programs in multiple clouds, have data go back and forth between them, taking advantage of the particular features of each cloud to do certain kinds of computation; we don't have protocols for that either, so there's a lot of stuff we can do. Finally, I want to mention something called delay and disruption tolerant networking. I will amplify on that, if I have time, in the discussion about the interplanetary Internet, which I'd like to end my talk with. But just generally speaking, we have network environments
now which are not always well connected. I mean, I'm setting aside the fact that we have cable cuts and things in the underlying Internet; a lot of that connectivity is pretty stable. But when you have radio-based systems, your mobiles, for example, don't always get, you know, five bars; sometimes you get one and a half, and sometimes you don't get any — it varies. And so we have environments in which connectivity is not always guaranteed, and the consequence of losing connectivity is to introduce delay, because it takes time to get back into connectivity. And so if you're trying to transmit something and you lose connectivity — in the Internet world we tend to throw stuff away: if the packet shows up and the routing table says there ain't no way to get there, we say, well, we don't have any place to put it, and we throw it away. In delay and disruption tolerant networking we take a different view: if the connectivity goes away, we say, okay, hang on to that. For some of you this is kind of like message switching, if you want to go way back into history, and it's only conceivable and feasible because memory costs have come down pretty dramatically. If you want to do an interesting calculation: a couple of days ago I bought a terabyte drive for $100; you can get a three-terabyte drive for 150 dollars. In 1979 I bought a 10 megabyte disk drive — 10 megabyte disk drive, the size of a shoebox — for $1000, and I was a pretty happy camper, because the alternative was five-and-a-quarter-inch floppies, you know, to the ceiling. So I said, gee, I wonder what would have happened if I'd tried to buy a terabyte of memory in 1979, and you will amuse yourselves if you do the computation: a terabyte at $1000 per 10 megabytes — do the math, divide 10 megabytes into it — it's a hundred million dollars. So I have a lot more respect now for that little 1 terabyte disk drive. I didn't have a hundred million dollars in 1979, and to be honest, I don't have a hundred million dollars now either.
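The back-of-the-envelope computation above checks out; here it is spelled out, using the talk's rough figures and decimal units:

```python
# 1979 price: $1,000 bought a 10-megabyte drive. Scale that price
# to a terabyte (10^6 megabytes, decimal units as in the talk).
price_per_10mb = 1_000                    # dollars, 1979
megabytes_per_terabyte = 1_000_000

cost_of_terabyte_1979 = megabytes_per_terabyte / 10 * price_per_10mb
print(int(cost_of_terabyte_1979))         # 100000000 -> a hundred million dollars
```

Against the $100 terabyte drive in the talk, that is a price drop of roughly a factor of a million over three decades — which is exactly why store-and-forward designs that hold bundles in memory became feasible.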
guarantee you that if I'd had a hundred million dollars in 1979, my wife wouldn't have let me spend it on disk drives; we would have had better ideas than that. But the fact that memory is becoming very, very inexpensive allows you to consider the possibility of storing information in the net and forwarding it when connectivity becomes available. Which leads me now to the interplanetary Internet.

I first began talking about this in late 1997, and I met with a team at the Jet Propulsion Laboratory in early 1998. Within a few seconds of our conversation we were completing each other's sentences, because they too were thinking: how can we improve networking for space exploration? We had been using point-to-point radio links since the 1960s to communicate with spacecraft. We were thinking a lot at that time about Mars, because many missions were planned to place things in orbit around Mars and then to plan where the rovers should be landed in order to continue to explore Mars. So we were thinking: what would happen if we had a networking capability for space exploration?

Our first thought was, well, why don't we use TCP/IP? It works on Earth; it ought to work on Mars. And honestly, it would work on Mars, where the latency is low enough. The problem is between the planets: the speed of light is too slow. It takes three and a half minutes to go from Earth to Mars when we're closest together, 35 million miles; it takes 20 minutes one way when we're farthest apart in our orbits, 235 million miles. TCP was not designed for 40-minute round-trip times; the flow control doesn't work very well. And then there's this other problem called disruption. It's planetary motion: the planets are rotating, and we haven't figured out how to stop that. So if there's something on the surface and the damn planet rotates, you can't talk to it till it comes back around again, and satellites have the same problem. So we said, okay, we have a problem, because we have variable
delay in the solar system, based on celestial motion, and we have this disruption caused by planetary rotation, among other things. I mean, the rovers that are crawling around on the surface using radios may have the radio obscured by local features like Olympus Mons and things of that sort, and even on the Moon we have some problems like that because of the topography. So we said, all right, we're going to have to design a new class of protocols. We started out calling them interplanetary network protocols, but we quickly recognized that this was a specific case of a more general problem of delay and disruption tolerance.

What has happened since that 1998 beginning is that some prototype work was done. It wasn't exactly the DTN Bundle Protocol, which is currently being standardized; it was a predecessor called CFDP, the CCSDS File Delivery Protocol. I hate acronyms because I can't remember them anymore. It is a store-and-forward design, and the model was file transfer: each thing that was sent was a file, and it was treated and managed independently and separately. The delay- and disruption-tolerant networking protocols codify that in the form of bundles, which have variable lengths.

You remember that in 1997 Pathfinder landed on Mars. It wasn't a very big object, but it was a pretty big step for all of us. It used point-to-point radio links direct to Earth for command and control. When the rovers landed, the ones on the lower left here, in 2004, the original plan was to go direct to Earth, and the radios would be running at 28.8 kilobits a second; the scientists were all, you know, kind of grumpy about 28.8 kilobits a second. We turned them on, and there was an overheating problem in the radio, as I understand it, so they had to turn them off and reduce the duty cycle, which made the scientists even more grumpy. Then somebody noticed that there was an X-band radio onboard the rover and an
X-band radio on the orbiters as well. Now, the orbiters had been used for mapping the surface of Mars in order to figure out where the rovers should go, but that mission was completed. So the interesting possibility was to reprogram the orbiters and the rovers to use the X-band radio to squirt information up to the orbiter as it came overhead, and to hold on to the data until the orbiter got to the right place in its orbit to send the data back to Earth, to the Deep Space Network, the big 70-meter dishes in three places around the Earth. That system could operate at 128 kilobits a second. The rover's X-band radio couldn't reach all the way back to Earth, but it could easily get to the orbiters, and the orbiters were outside the atmosphere of Mars, had bigger solar panels, and could run at a higher signal-to-noise ratio, so they could run at 128 kilobits. My understanding is that they may even have increased that data rate now to between 256 and 512 kilobits a second. So all the data coming back from Mars is store-and-forward; this is a little, you know, two-hop, three-node network, but it demonstrated the utility of store-and-forward networking.

When the Phoenix lander arrived, in the upper right here, at the north pole in May of 2008, there was no configuration that allowed a direct path, so they used the store-and-forward method again with the CFDP protocols. And today, as you know, there's the Mars Science Laboratory lander, which landed on August 5th, and it is also using these store-and-forward protocols to get data back to Earth.

So what we are anticipating now is a scenario in which we standardize, and in this case we proposed standardizing the Bundle Protocol, which codifies all of this. You can just think of it as IP for space; that's really, functionally, what it's like. We're working with the Consultative Committee for Space Data Systems, a UN-based organization that standardizes protocols for space communication. We hope if they adopt
these protocols, the scenario will look something like this: as we launch new missions for scientific reasons, if those spacecraft contain the bundle protocols, then when the missions are complete they can be repurposed as nodes of an interplanetary backbone. That is what we hope will happen over a period of decades, over the rest of the 21st century.

Now, the story doesn't end there, because there's a reason to want that interplanetary backbone. Here's why. It turns out that DARPA has recently funded a study to design a spacecraft to get from here to the nearest star in a hundred elapsed years; this is called the 100 Year Starship. Now, there are some problems associated with getting from here to the Alpha Centauri system. The first problem is that it's 4.4 light-years away, and current propulsion systems would take 65,000 years to get there. That's a little long, even for a DARPA project. So we're going to have to find new propulsion systems that can last for a hundred years and can propel us up to about 20 percent of the speed of light. And by the way, when we get to the halfway mark at 50 years, we want to slow down again, because otherwise we'll get one picture as we go through the system, you know, really expensive photos. So we have to slow down and get into orbit around the suns there. By the way, there has been a planet discovered in the system, so there is something to look at besides just taking pictures of the suns.

So propulsion is one problem. Another problem is navigation: the stars aren't where they look like they are, because of the speed-of-light propagation problem. As you're going towards Alpha Centauri you have to do mid-course corrections, but imagine you're one light-year out and we're back here trying to tell the spacecraft something. It takes a year for the signal to get there and a year to find out whether or not the signal got there successfully and did the right thing. So this back-and-forth just doesn't
work very well. The consequence is that you need autonomous navigation. I was really nervous about that, but I was told by the astronomers that we actually have a pretty good idea of the proper motion of the stars that are nearby, within say 10 light-years, so the autonomous navigation problem may not be too bad.

But then there's this other problem, which is: how the heck do you generate a signal that you can detect from four light-years away? Several ideas have come along. One of them is to use a femtosecond laser. Think about that for just a minute: maybe you have a hundred watts of power; compress a hundred watts down into a burst of 10 to the minus 15 seconds. That's a pretty big ping, and although the beam will dissipate, you may still be able to detect it. Which, by the way, now explains why I need an interplanetary backbone: I need a synthetic-aperture receiver the size of the solar system in order to detect the signal and integrate it.

Somebody has suggested another thing we could do. You all understand that gravity bends light; that's what lenses do; you've heard of gravitational lenses. If you get 550 AU out from the Sun, you are in the place where the gravity lens is beginning to focus, and that continues from 550 out to a thousand AU. We've never been out that far, but if we could get out there and put spacecraft in the right place, we could use the Sun's gravity as a lens to focus the signal coming back from Alpha Centauri. And of course you might even consider putting equipment in the Alpha Centauri system that does the same thing, to use their lens as an amplifier.

So this is a study, a paper study; nobody's building anything yet. But I can tell you that when you participate in projects like this, it's like living in a science fiction story, and in the end that's what engineering is all about: turning science fiction into reality. That's what we do. Thank you.

Well, that was fun. It says I've got four minutes and 36 seconds left; maybe I have time for one
question. I don't know if I have time for one answer, but I'm willing to try, or you can let me escape. I have to, actually: I'm on my way up to JPL this morning to meet with the director and the advisory committee that meets two or three times a year to look at the next kinds of projects that JPL will be proposing, so I'd appreciate it if you allow me to escape. But I see there is somebody at a microphone, so let's give it a shot, let's give it a shot.

Is this thing working? Okay. The reason I'm doing this is not because I'm coming to spit on you; it's because I'm hearing impaired, and it makes it easier for me to lip-read in case I have to, so don't hide behind the microphone. My name's Patrick Abel, MIT Lincoln Laboratory. I was curious, when you mentioned earlier about the ITU getting into the content business, regulating spam and so on and so forth, what the implications are for countries that go forth and do that, as we have here, and your thoughts about regulation in general: should we stay hands-off, or are there, quote-unquote, good regulations?

That's a fair question. You see, I realize I'm now speaking from my backside to people over there [laughs]; the old fart is still fighting, right? This is slightly better, thank you. I don't mean to just walk away from you, like, oh, that's a good question. So the short story is that these are real problems. I mean, we cannot ignore spam; we cannot ignore viruses and worms and all this stuff; we really have to do something about it. But I'm not sure that international regulation is the right tactic. A lot of the response tends to have to be local, but some of it may require international agreements. So I'm actually not opposed to having international agreements about things that we collectively consider to be unacceptable behavior; spam might be a good example of that, and viruses and worms and other things. I think we will need international agreements in order to empower countries, on a bilateral or multilateral basis, to act
together. But I want to emphasize something very important about the Internet and its character, and what we've learned about creating a system like this, one that has survived this long in a very resilient way: informality is sometimes your friend. When the Conficker worm showed up, an informal working group made up of a number of different organizations that did not have any formal relations participated, and continues to participate; the informality allowed that to happen. The IETF is another good example. You can't join the IETF; there's no place to sign up. You show up, and that's it: you show up, you speak your piece, people who agree with you will support your ideas, and people who don't, won't. The whole point here is that some informality may turn out to be the best tactic. I'm sorry, that's all the time I have, but it's a good question. Thank you.
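As an aside, the back-of-the-envelope figures quoted in the talk (the 1979 disk-price extrapolation, the Earth-Mars light delay, the femtosecond-laser burst, and the 20-percent-of-light-speed trip) are easy to sanity-check. Here is a short Python sketch; the inputs are the round numbers spoken in the talk, not precise ephemeris or market data:

```python
# Sanity-check the round numbers quoted in the talk.

LIGHT_MILES_PER_SEC = 186_282          # speed of light, miles per second

# 1979 storage: $1,000 per 10 MB shoebox drive, scaled up to a terabyte.
cost_per_10mb = 1_000                  # dollars
terabyte_in_mb = 1_000_000             # 1 TB = 10^6 MB (decimal units)
cost_1979_terabyte = terabyte_in_mb / 10 * cost_per_10mb
print(f"1 TB at 1979 prices: ${cost_1979_terabyte:,.0f}")   # $100,000,000

# Earth-Mars one-way light delay at the distances quoted in the talk.
for label, miles in [("closest", 35e6), ("farthest", 235e6)]:
    one_way_min = miles / LIGHT_MILES_PER_SEC / 60
    print(f"{label}: {one_way_min:.1f} min one way, "
          f"{2 * one_way_min:.1f} min round trip")

# Femtosecond laser: treat 100 W over one second as 100 J of energy,
# delivered in a single 10^-15 s burst.
energy_j = 100.0
burst_s = 1e-15
peak_watts = energy_j / burst_s
print(f"peak power: {peak_watts:.0e} W")                    # 1e+17 W

# Alpha Centauri at 20% of light speed, ignoring acceleration time.
print(f"4.4 light-years at 0.2c: {4.4 / 0.2:.0f} years")    # 22 years
```

With these round inputs the closest-approach delay comes out nearer three minutes than the three and a half quoted on stage, which is normal slack for numbers recalled from memory; the roughly 40-minute round trip at farthest approach, the hundred million dollars, and the 10^17-watt peak all check out.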
Info
Channel: USENIX
Views: 15,416
Rating: 4.9080458 out of 5
Keywords: USENIX, LISA12, IPv6, Internet (Computing Platform)
Id: hIISiYs7lDo
Length: 86min 48sec (5208 seconds)
Published: Tue Mar 19 2013