Let's #TalkConcurrency Panel Discussion with Sir Tony Hoare, Joe Armstrong, and Carl Hewitt

Captions
Moderator: Concurrent programming has been around for decades. Concurrency is when multiple events, code snippets, or programs are executing, or are perceived to be executing, at the same time. Unlike imperative languages, which use routines, or object-oriented languages, which use objects, concurrency-oriented languages use processes, actors, or agents as their main building blocks. And while the foundations of concurrency have remained the same and stable, the problems we're solving in the computer science world have changed a lot compared to when these concepts were originally put together in the 70s and 80s. Back then there was no IoT, no web, no massively multi-user online games, video streaming, automated trading, or online transactions. The Internet has changed it all, and in doing so it has helped propel concurrency into mainstream languages. Today we're very fortunate to have Professor Tony Hoare, Professor Carl Hewitt, and Dr. Joe Armstrong, three of the visionaries who in the 70s and 80s helped lay the foundations of the most widely used concurrency models as we know them today. Welcome, and thank you for being here. The first question I'd like to ask is: what problems were you trying to solve when you created actors, Communicating Sequential Processes, and Erlang-style concurrency, respectively?

Hewitt: The biggest thing was that we'd had some early success with Planner, and there were capability systems around, and there was functional programming around, and the most important realization we came to was that logic programming and functional programming couldn't do the kind of concurrency that needed to be done. But at the same time we realized it was possible to unify all these things, so that functional programs, logic programs, and all these digital things were special cases of just one concept for modeling digital computation: you can get by with just one fundamental concept. That was the real thing. Of course we thought, well, there's plenty of parallelism out there, all these machines will make this work; but the hardware just wasn't there at the time, and neither was the software, so it's taken a while. Now, though, we're moving into a realm of having tens of thousands of cores on one chip, and these aren't wimpy GPU cores, these are the real thing, with extremely low latencies among them. We'll be able to achieve latencies for passing messages on the order of 10 nanoseconds with good engineering, and we're going to need that for the new class of applications we're going to be doing, which is scalable intelligent systems. There's now an enormous technology race on.

Hoare: What inspired CSP was the promise of the microprocessor. Microprocessors were then fairly small; they had rather small stores and they weren't connected to each other, but people were talking about connecting large numbers of microprocessors, mainly in order to get the requisite speed. I based the design of CSP on what would be efficient, controllable, and reliable programming for distributed systems of that kind. That's the basic justification for concentrating on processes that don't share memory with other processes, which certainly makes the programming a great deal simpler. The problem at that time was the cost and the overhead of connecting the processors together. The devices for doing this were quite often based on buffered communication, which involves local memory management at each node, and I knew that as soon as you had to call a software routine to perform a communication, the overhead would escalate as people thought of new and clever things, as people always do with software. So I wanted the hardware instructions for output and for input to be built into the machine code in which the individual components were programmed. A measure of the success of the transputer, in which this was implemented through the efforts of David May some years later (1985 as opposed to 1978), is that he got the overhead for communication so low that you could program even an assignment by forking another process: one process performs an output of the value to be assigned, another process inputs the value that is intended to be assigned, you use communication for that, and then you join the two processes again, all within a factor of 10 to 20 ordinary instruction cycles, which was way beyond anything any other hardware system could touch. Because the communication was synchronized, it was possible to do it at the hardware level. There was another reason for pursuing synchronized communication: I was studying the formal semantics of the language by describing how the traces of execution of each individual process were interleaved. If you have synchronized communication, they behave like a sort of zip fastener, where each zip links in with a single zag, and the chain of synchronizations forms a clear sequence, with interleaving only occurring in the gaps between the synchronized communications. So a combination of practice and theory seemed to converge on making synchronized communication the standard. Of course I realized you very often need buffered communication, but that isn't very difficult to implement on top of a very low-overhead communication basis: you just set up a finite buffer, as a process, in the memory of the computer, which mediates between the outputting process and the inputting process.

Moderator: So you picked synchronous message passing because it was fast enough and it solved the problem?

Hoare: It was as fast as it could possibly be; I'm talking about what is now ten nanoseconds, built right into the hardware. And not only that, the solution was much simpler.

Moderator: Joe, what about you? You started from a different angle.

Armstrong: I wanted to build fault-tolerant systems, and pretty soon I realized that you can't make a fault-tolerant system with one computer, because the entire computer might crash, so I needed lots of independent computers. I liked CSP, and I'd played with the transputer, and I thought, this is great; but how does it work in a context where the message passing is not internal but remote? I didn't want synchronous communication in case the remote thing crashed. As a physicist, I'm thinking: messages take time to propagate through space, and there's no guarantee a message gets there. If you send a message to something and nothing comes back, you don't know if the communication is broken or if the computer is broken, and even if the message gets there and the computer receives it, the computer might not do anything with it.
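Hoare's closing point, that buffering is easy to layer on top of cheap synchronized communication by making the buffer itself a process, can be sketched in Python. This is an illustrative sketch, not CSP proper: `queue.Queue(maxsize=1)` stands in (imperfectly) for a synchronized channel, and a chain of one-place copy processes acts as a finite FIFO buffer mediating between the outputting process and the inputting process.

```python
import queue
import threading

def copy_cell(inp: queue.Queue, out: queue.Queue) -> None:
    # A one-place buffer: repeatedly input a value, then output it.
    while True:
        out.put(inp.get())

def buffered_channel(capacity: int):
    """Build an N-place FIFO buffer purely out of near-synchronous links
    chained through N ordinary copy processes (threads)."""
    links = [queue.Queue(maxsize=1) for _ in range(capacity + 1)]
    for i in range(capacity):
        threading.Thread(target=copy_cell,
                         args=(links[i], links[i + 1]),
                         daemon=True).start()
    return links[0], links[-1]   # (send end, receive end)

send, recv = buffered_channel(3)
for i in range(5):
    send.put(i)                  # the sender runs ahead of the receiver
results = [recv.get() for _ in range(5)]
print(results)                   # [0, 1, 2, 3, 4], FIFO order preserved
```

Chaining N copy cells gives the sender N-odd places of slack while messages still arrive strictly in order, which is exactly the construction Hoare describes: buffering as an ordinary process, not a primitive.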
Armstrong: So you really can't trust anything, basically. Really, I just wanted to model what's going on in the real world, and the key is observational equivalence, which I thought of as the most important principle in computer science. Basically we've got black boxes that communicate, and we shouldn't care what programming languages they're written in provided they obey the protocol. So I thought it was central that we wrote down the protocol; but because we couldn't formally prove it, in the sense that you would want to inside one system, I thought, well, we've got to just dynamically check everything. So we needed to build a world where there are parallel processes communicating by message passing, and I thought they cannot have shared memory, because if they have shared memory and the remote machine crashes, you get dangling pointers that you can't dereference. That was the guiding principle. I didn't know about the actor stuff at the time, but how else can you build systems? The four of us here are state machines sending messages to each other; we don't actually know if a message has been received unless you nod back at me. And because I used to be a physicist, I'm thinking: the program and the data have got to be at the same point in space-time for a computation to occur, so why are people only moving the data to the programs? We could move both of them to some point in the middle. In part of the system we can keep things in lockstep, and it's very beautiful, and in other parts of the system we can't. So it seems to me a mix between mathematics and engineering: the mathematics can be applied to part of the system, and best engineering practice has to be applied to the other parts; there's a delicate balance between the two. I was pursuing the engineering aspects, trying to make a language that makes this relatively easy to do, and I thought we were treading on a sort of minefield. There is something funny there: there are terribly complex pieces of software, like leader election, which are terribly complicated, and then little bits that are terribly easy. And it struck me that there was this paradox: things that are terribly simple in a sequential language can be impossible in a concurrent one, and the other way around.

Hoare: I agree with you completely about the central importance of buffered messaging, and indeed that it has some nice mathematical properties that synchronized messaging doesn't have. But the synchronized message paradigm really has another justification, and that is that it creates the input and the output as a single atomic action.

Hewitt: And atomic actions, in the sense of actors, were the other thing. We were thinking: for the old sequential computation, Turing and Church had a kind of universal model; they kind of nailed that. We wanted to do the same thing for concurrency, and we thought the only possible way to do that was to base it on physics, because no matter what you do, you can't get around physics. That put constraints on it. We also wanted it to be distributed, and we wanted it to be multi-core. With all these IoT devices, there's no in-between place to buffer a message once it has left here and before it arrives there; we can't synchronize these IoT things. So the fundamental communication primitive has to be asynchronous and unbuffered. If you want a buffer, that's just another actor, and you do puts and gets on your buffer. And if you look at the actual physics, the electronics, it's all local: if you have a 10-nanosecond communication time on chip and you don't take advantage of it because you're doing synchronized communication, the overhead means you can't use it for everything.

Hoare: Both of them are necessary; which is fundamental, let us postpone.

Hewitt: But you see, we don't have ten nanoseconds; it's only an average. In some cases it's going to take a long time to get a message across a chip, but if we compactify, garbage-collect, and get locality, then on average it's only 10 nanoseconds. When a core on one side of the chip sends a message to the other side, it goes through this fantastically complicated interconnect; it's like the Internet on a chip between those two cores. And the way they build these things, there is no buffer: you assemble a message in one core, you give it to the communication system, and it appears on the other side.

Armstrong: I didn't really want two different ways to program. If I've got a process in Cambridge talking to one in Stanford, it's messages, and I write it this way: I have to send and receive, and I have to allow for failure, because the message might not get through. If you collapse this program onto a single processor, with both processes in the same place, I don't want to change the way I program; I don't want two different mental models. The compiler could use fancy things like sharing under the hood where it can see them, but I don't want to. And with 10,000 cores on the chip, the core on the other side of the chip might as well be remote: it's a distributed program on a single chip.

Hoare: If you can build buffered communication with a 10-nanosecond average delay, I will come round to your point of view.

Hewitt: We can do that on average. The trick is the average: in some cases it could be seconds, but there had better be very few of them.

Armstrong: That's why I think the Erlang idea of distinguishing local from remote is so important.

Hoare: They both exist; which is fundamental, I'm not going to argue about. The things that happen locally really do happen in a way that no other agent in the system can detect any intermediate time or state in which only some of the actions have happened.

Hewitt: Absolutely. What's really inside an actor is not visible to the other end.

Armstrong: That's like a function call; a function call really is sort of like a black box. You're sending a message into this thing and you get a return value, only it's got different semantics, because locally, exactly-once is trivial: that's simply how it works. But exactly-once doesn't work in a distributed system; you get at-most-once or at-least-once. So it's kind of funny that local and remote are different.

Hoare: I'll bring in another of my buzzwords: abstraction. Modern systems are built out of layers of class declarations, and a class declaration can itself be called by classes higher up the hierarchy of abstraction. The method calls to a lower class are treated, theoretically, as if they were transactions when reasoning in the higher-level class, while they are implemented by method bodies which are far from atomic in the lower class. So each class has an appropriate level of granularity at which it regards certain groupings of actions as being inseparable in time.
Hoare: At the same time it produces non-atomic things, just method bodies, which simulate that atomicity at the higher levels. The simulation can be very good, because the one restriction about disjointness that I would like to preserve is that each level doesn't share anything with the levels above and below it, which I think practical programmers would regard as a reasonable condition for declaring an abstraction correctly.

Armstrong: I entirely agree, but what interests me is what happens when the system becomes huge. If you've got a little tight system, you can prove anything you want; you may or may not be able to prove things about big ones.

Hewitt: But the real payoff comes when it's really big, changing more rapidly than you could prove its properties. You can imagine it's always inconsistent.

Armstrong: I don't have to imagine these things. You know Hoare's great saying: inside every large program there's a small program trying to get out. But you never find it, because they didn't put it there. At the very top level of abstraction you should have very powerful atomic actions, and you could write a small program which describes at large scale what a very large program does. The thing that really scares me is people developing large applications that they don't understand. They get complex, you put things inside a black box, and it layers, so you end up with gigabytes of stuff.

Hoare: But who talked about layering it? What is the part of the program that changes most frequently? The top layers; they can change because they have well-defined interfaces.

Hewitt: The intelligent systems don't work that way, though. They're not like operating systems. These ontologies have a massive amount of information, so the layering doesn't work anymore for these massive ontologies, and they're just chock-full of inconsistencies.

Moderator: Is there anything forgotten which should be known, or anything which you feel has been overshadowed and is important?

Armstrong: Maybe one or two key points; I have lectures on this that go on for hours.

Moderator: I wish we had hours.

Armstrong: I think we've forgotten that things can be small, this way of decomposing systems into small things you can reason about, as against this belief that things have to be big. If you look at web pages, they will download 200 kilobytes of compressed JavaScript to do who knows what. To create a button, somebody will include a CSS framework of 200 kilobytes, and that much JavaScript again. So we've forgotten that things should be small. We've forgotten tuple spaces. We've forgotten hypertext; Ted Nelson and Xanadu, the ideas are there, and we've forgotten that hypertext links should be bidirectional, not unidirectional. At scale, that's where it becomes interesting.

Moderator: Anything from CSP which you feel has been omitted or forgotten which would help us today?

Hoare: If there is, I'm afraid I've forgotten it. But I do think there's a newer factor which I hope will become more significant, and has been becoming more significant, and that is tooling for program construction: a good set of tools which really supports the abstraction I'm talking about, and enables you to design and implement the top layers first, by simulation of the lower layers, of course, this sort of stub. It will actually encourage programmers to design things by talking about the major components first. The second thing is that the tools must extend well into the testing phase. As you will know, large programs these days are subject to changes daily, and every one of those changes has to be consistent with what's gone before, and correct, and must not introduce any new strange behaviors.

Armstrong: I use a Fitbit, and its changes are just extraordinary. Why do they have to change the software once a day? And then you've got to upgrade your operating system, and they say, well, it's because of security, so I don't really have much confidence in them. If they said we have to change it once every 20 years, I could believe them; but telling me that I have to change it once every six weeks...

Hoare: So we need an overall development and delivery system in which you can reliably deliver small changes to large programs on a very frequent basis, without breaking everything.

Armstrong: Without breaking everything.

Hoare: Well, I have a rather unpleasant dream, I think it is: you get your customers to do the testing. Beta testing has always been a standard technique. You set up a sandbox in which you deliver the new functionality, and if it fails in any way, you go back to the old functionality; you treat it as a large-scale transaction. You roll back, you run the old functionality for that customer, and you report it, and that report goes straight to the person responsible for that piece of code, in the form of a failed trace.

Armstrong: Why don't they do that?

Hoare: It is definitely not easy. The tracing systems are not really powerful enough to do a large-scale trace, and in the case of concurrency you mustn't use logs: logs of individual threads are a pain to correlate when you do get communication. You've got to use, in fact, a causality diagram, an arbitrary network, and the software to manipulate those at large scale will, I think, take some time to develop.

Moderator: Even for concurrency you need to be able to extract the execution of your program from process to process, and there is work being done on that.

Hoare: I'd say you have to analyze the causal structure, not the linear one.

Hewitt: I think the big thing we've forgotten, which we knew in the early days of intelligent systems, is that these systems are going to be isomorphic with large human organizations. These complex intelligent systems are going to run on very much the same principles that Stanford runs on. You say, well, what are the specifications for Stanford University? We don't have any; we have principles, and we have ethics, and we have guidelines, but you really don't have formal specifications. So anything that you think is going to work for large programs but that wouldn't work for something like Stanford University is not going to work, because they're basically isomorphic. And therefore what we do is keep logs for these things; Stanford keeps records of all kinds, so that if something goes wrong we can look back and try to see how we can do better in the future, and also to assess accountability and responsibility. That's the fundamental thing: that's going to be the structure of these large-scale information systems we are constructing.

Moderator: The key points I'm taking home here are, first, simplicity: you need small programs, or rather programs which may become complex but where the units are small, and it makes sense to see a process, an actor, or an agent as one of those building blocks; it's small, it's containable, it's manageable. The second point, correct me if I'm wrong, is the importance of no shared memory, and this no-shared-memory approach then brings us distribution, scalability, and multi-core.

Armstrong: Yes. And I think one of the things we've forgotten is the importance of protocols, and of describing them accurately. We build a load of systems, but we don't write down the protocols between them, and we don't describe the ordering of messages and things like that.
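Joe's complaint, that we build systems but never write down the protocol or the allowed ordering of messages, can be made concrete: a protocol specification can be a small finite state machine, checked dynamically against a message trace. A minimal Python sketch; the states and message names here are invented purely for illustration.

```python
# Allowed message orderings for a hypothetical request/reply protocol,
# written down as a finite state machine: state -> {message: next_state}.
PROTOCOL = {
    "idle":     {"connect": "open"},
    "open":     {"request": "awaiting", "disconnect": "idle"},
    "awaiting": {"reply": "open"},
}

def check_trace(trace, state="idle"):
    """Dynamically check a message trace against the protocol spec."""
    for msg in trace:
        allowed = PROTOCOL.get(state, {})
        if msg not in allowed:
            return False, f"unexpected {msg!r} in state {state!r}"
        state = allowed[msg]
    return True, state

ok, _ = check_trace(["connect", "request", "reply", "disconnect"])
bad, why = check_trace(["connect", "reply"])   # a reply before any request
print(ok, bad, why)
```

With such a table written down, an out-of-order message is rejected immediately, with a reason, instead of being guessed at after the fact, which is exactly the dynamic checking Joe argues for when formal proof across system boundaries isn't available.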
Armstrong: You would think it would be easy to reverse-engineer a client-server application just by looking at the messages. You can trace the messages, and then you ask: so where's the specification that says what the order should be? And there is no specification, so you have to guess. You could use finite state machines to specify these things, as CSP does, but people still don't.

Hoare: I would make a small concession to sharing: you're allowed to share something between at most two processes. An example is a communication channel; what's the use of that if you can't share it between the outputter and the inputter? Now, if you have a hierarchical structure like the one I've been describing, the behavior of a shared object is programmed as a process inside the lower-level class. It's a highly non-deterministic structure, which in a different context I used to call a monitor: it accepts procedure calls, accepts communications from all sides, but everybody who uses it has to register and has to conform to an interface protocol governing the sequencing, which makes the sharing safe in a sense that is important at the higher level and implemented at the lower level.

Hewitt: So the fundamental source of indeterminacy in these systems, where you have all these zillions of actors sending messages, is the order in which the messages are received. That's where the arbitration occurs in the system. Take, for example, Tony's readers-writers scheduler; there's work that was done by Tony on this. You've got these read messages and write messages coming in from the great world out there, and you're sitting here defending this database, scheduling it so that it is never the case that there are two writers in the database, and never the case that there is a reader and a writer at once. You're taking these requests from all comers; you don't know who's going to want to read and write in this database, and you're scheduling it. You have your own internal variables, which must be kept very private: the number of readers and the number of writers you've got in the database, for example. So the indeterminacy is in these messages coming in from the outside world, which you are then scheduling for the database, and that is the fundamental, irrevocable source of the indeterminacy.

Hoare: Yes, I agree, and that's why it's built into CSP: the fundamental choice construction allows you to wait for the first of two messages to arrive. I've cut it down to two; sorry, two doesn't scale, but bear with me for a bit. The great advantage is that if you had to fix the order in which you receive those two messages, you would double the waiting time; if you wait for both at the same time, you wait half as long.

Hewitt: Well, with actors, these two messages come in and you take them in the order in which they're received, and if you want to process them in a different order inside, you do. The idea is that you don't want to have a queue of things waiting outside; you want to take everything inside, so that you can then properly schedule the order in which you process it. It's like your mail: you take the mail as it comes in. You may not want to pay the first bill that comes in, and you can process it later, but you take it as it comes in, because that's much more efficient.

Armstrong: What Erlang does is that every process has got a mailbox. Incoming messages just end up in the mailbox, in order, and the program gets an interrupt to say, hey, there's something in your mailbox, and then it can do what the hell it likes. It can say: that message I'll take out now, and that one I won't.
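The readers-writers scheduler Hewitt describes, combined with Joe's mailbox picture, can be sketched single-threaded in Python: the actor takes every request in as it arrives, keeps private reader and writer counts, and defers any request that would put two writers, or a reader alongside a writer, in the database at once. This is an illustrative sketch of the scheduling logic only; the message names and structure are invented, and real actors would of course run concurrently.

```python
from collections import deque

class ReadersWriters:
    """A scheduler actor guarding a database: its private state changes
    only in response to messages taken from its mailbox, one at a time."""

    def __init__(self):
        self.readers = 0
        self.writers = 0
        self.waiting = deque()   # requests taken in, but deferred
        self.log = []            # grants, in the order they were issued

    def send(self, msg):
        # Take every message in as it arrives (mail into the mailbox)...
        self.waiting.append(msg)
        # ...then schedule: grant whatever has now become safe to grant.
        self._schedule()

    def _safe(self, kind):
        if kind == "read":
            return self.writers == 0            # never a reader with a writer
        return self.readers == 0 and self.writers == 0   # never two writers

    def _schedule(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.waiting):
                kind, who = msg
                if kind == "end_read":
                    self.readers -= 1
                elif kind == "end_write":
                    self.writers -= 1
                elif self._safe(kind):
                    if kind == "read":
                        self.readers += 1
                    else:
                        self.writers += 1
                    self.log.append((kind, who))
                else:
                    continue                     # not safe yet: stays deferred
                self.waiting.remove(msg)
                progress = True

db = ReadersWriters()
db.send(("read", 1))
db.send(("read", 2))        # two readers may overlap
db.send(("write", 3))       # deferred: readers are still active
assert db.log == [("read", 1), ("read", 2)]
db.send(("end_read", 1))
db.send(("end_read", 2))    # with both readers done, the writer is granted
assert db.log[-1] == ("write", 3)
```

The indeterminacy lives entirely in the arrival order of `send` calls; the scheduler's private counters and deferral queue are invisible from outside, which is the encapsulation both men insist on.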
Armstrong: And then I can go to sleep again until the next message arrives.

Hoare: But that's an excessive amount of overhead.

Armstrong: Yeah, but it makes the program a lot easier.

Hewitt: I don't think so, because I think you can program it much more easily if you take it all inside as it arrives and don't have this buffer out there.

Hoare: But then you have an implicit state machine inside.

Hewitt: It's not so bad for readers-writers.

Armstrong: It depends on the program.

Hoare: It depends very much.

Moderator: Why is concurrency at scale today still done with legacy languages, with concurrency bolted on as an afterthought? Concurrency needs to be designed into the language from scratch; it's very, very hard to write a framework and bolt it on. What do we need to do to change this?

Hewitt: Darwinism; survival of the fittest. But it takes a new project, like the moon project or, heaven forbid, the Manhattan Project, to enable new things to be brought in. Otherwise, because capitalism is basically a very incremental hill-climbing process, the most sensible financial thing for capitalists to do is to bolt something on, because you get the most rapid buck for the least investment in the short term. But if you just keep pursuing that path, you end up with monsters like C++. Because we're now engaged in this giant race among nations to create scalable intelligent systems, and they're going to be creating large projects to do that, there is some opportunity now for innovation, because that's not the standard hill climb.

Armstrong: I think hardware changes precede software changes. If you keep the hardware the same, you get a sort of S-shaped curve: rapid development in the beginning, then you get to the top and nothing much happens, and then new hardware comes along and suddenly there's a lot of change. Erlang is millions and millions of times faster than it was, but that's due to clock speeds going up, and now they've stopped. We're faced with two fundamental revolutions: having thousands of powerful cores on one chip, and having all these IoT devices. Those are two huge hardware changes. I always thought that gigabyte memories, or rather petabyte memories, when they come, will be like an atomic bomb. Imagine the combination of petabyte memories with wireless communication at gigabits per second, and the equivalent of 10,000 Cray-1s in a little thing like your fingernail, everywhere, in every single light bulb; that's like an atomic bomb hitting software, and nobody's got a clue what we're going to do with it.

Hewitt: Well, that's the thing: the stacked carbon nanotube chips they're working on now are going to give us these thousands of cores on a chip, and they also make the memory out of the same stuff they make the processor out of. That's different from now, where we make DRAMs out of different stuff from the processors, so we can't combine them.

Armstrong: I was completely blown away a couple of weeks ago when I saw a newspaper article about farm bots. This company made three little robots: one was a seed-planting robot, a tiny little thing that goes around planting seeds; then there was a watering robot that walks around and waters the seeds; and then there was a weeding robot with a pair of scissors on the bottom that goes around snipping the weeds. Suddenly there's this realization that in farming we could watch every single seed, and the energy to do so was claimed to be something like 5% of the energy of ploughing; using a plough is terribly inefficient. So when we've got computing at this scale, we can tackle traditional problems in completely different ways.
atmosphere and we have to do that for the benefit mankind not to build things to feed your cat when you're out to improve the efficiency of farming and things like that it's amazing and what you didn't know is that the forum button is actually powered by earth underground in your garden that was great and I think you know to count so you know I think there are lot of claims about kind of the future of concurrent programming languages some people claim you're that there will be a lot of features taken from functional programming languages I think you know the first kind of feature which comes to mind is immutability the essential thing about the actors is they change they get all their power of the concurrency is because they change right yes now the messages they send between each other are immutable because they have to exist is full contracted and there's no way to change the photon in the root right so by definition the messages are immutable but the actors have to change they get all their power modularly from over the functional programming is because they do change they change a lot which the functional program you can't do right yes it's okay and which can change its own data yeah they change all that that's right yeah as our friends say a change comes from within yes exactly you can't change me but you could send me a message so we can change myself there and it's a form of isolation and I mean I think these are kind of ideas which you know come from functional programming but they've also been heavily influenced from our programming I think you know lnk objects objects don't share memory and objects communicate with message passing is why you should mention Christianly garden not only on doll for that yes I think this is the crucial argument if you're writing programs that interact with the real world you've got to construct a model of the real world inside the computer as it's done I mean just universally in design of real-time systems and the real world 
And the real world has things called objects. The objects do sit in a certain place, more or less; they can move around. The movement of objects, the existence of objects, the sequentiality of the actions performed by the same object: these are features of the real world. The objects change, and so functional programming just doesn't address the real world directly. I think functional programs are wonderful, I really admire functional programming, and if I had my choice I'd always use it.

Right, but you can't do readers-writers scheduling as a functional program; it just makes no sense. The scheduler has to be continually changing its internal state as the read and write messages come in: it's got to be buffering up the readers and buffering up the writers, and letting the readers through, so it's always changing, and a functional program can't do that.

I totally agree. Alan Kay said the big thing about object-oriented programming was the messages; it was the messaging structure that was the important bit, and that was what had been lost. And then the next thing comes: OK, we've got immutable messages, which I totally agree with, and then we need some kind of notation to write down the allowed sequences of messages, which we've got in CSP and which people seem to ignore. You have a state machine in CSP describing the allowed sequencing of messages.

The other thing about the actor model was to minimize sequentiality as much as possible; sequentiality is evil. You have arbitration in front of the actor over the order in which it takes messages in, because that's irreducible, but as soon as an actor takes a message in, it wants to run everything inside itself in parallel, to the extent that it can. That is its goal: the maximum amount of internal parallelism inside an actor.
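The readers-writers scheduler described above, which must keep changing internal state as read and write requests arrive, can be sketched as a single-threaded message handler. This is my own illustration of the point, not code from the talk: writers get exclusive access, readers may share, and pending requests are buffered in arrival order.

```python
from collections import deque

class ReadersWritersScheduler:
    """Stateful scheduler sketch: buffers pending readers and writers
    and grants them as its mutable state allows. Writers are exclusive;
    any number of readers may be active at once."""
    def __init__(self):
        self.active_readers = 0
        self.writer_active = False
        self.pending = deque()  # buffered ('read'|'write', name) requests
        self.granted = []       # order in which requests were granted

    def request(self, kind, name):
        self.pending.append((kind, name))
        self._schedule()

    def release(self, kind):
        if kind == "read":
            self.active_readers -= 1
        else:
            self.writer_active = False
        self._schedule()

    def _schedule(self):
        # Grant from the front of the queue while the state allows it;
        # stopping at the first blocked request keeps FIFO fairness.
        while self.pending:
            kind, name = self.pending[0]
            if kind == "read" and not self.writer_active:
                self.active_readers += 1
            elif kind == "write" and not self.writer_active and self.active_readers == 0:
                self.writer_active = True
            else:
                break
            self.pending.popleft()
            self.granted.append(name)
```

Every call mutates `active_readers`, `writer_active`, or the pending queue, which is exactly the point being made: the scheduling decision is a function of continually changing internal state.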
Can a solution with shared state be made robust and safe?

You mean shared memory in which you do assignments, loads and stores? No. And the second question: can a solution which communicates with message passing be made fast? Yes, but only with the right kind of processors. In that respect Tony was a pioneer with the transputer, realizing that in order to do this at speed the hardware has to be suitable, and the hardware previously was not. We're going to have to do that again: the RISC processor is not suitable; we have to do better.

What is the implication for the future of software development?

I think the test for capturing the essence of concurrency is that you can use the same language for the design of hardware and software, because the interface between them will come through it. You've got to have a hierarchical design philosophy in which you can program each individual ten nanoseconds at the same time as you program over a ten-year time scale, and sequentiality and concurrency enter at both those scales. Bridging the scales of granularity, of time and space, is what every application has to do; the language can help with that, and that's a real criterion for designing the language.

Besides, the semicolon hurts performance, because you have to finish the thing before the semicolon before you can start the thing after the semicolon. The ideal concurrent program has no semicolons.

Or at least very few.

It still has to make state changes, but it has these macro state-change operations, like enqueuing and dequeuing, and allowing queued readers and writers to proceed. With those macro operations you don't have to spray your program full of semicolons, as you do in some languages.

It could be. I once played with Strand, which was highly concurrent, and it was terrible, because it had the opposite problem: if you created too many parallel processes...
Some tiny program would suddenly, rather surprisingly, create six million parallel processes to do virtually nothing much, and then you'd have a problem. We have a wonderful way of controlling concurrency: if you've got a concurrency problem, try and make it more sequential.

Anyway, I will set forward my religion, which is that if you have program components, there are two ways of composing them. One is sequential, which requires that all the causal chains go forward from one component to the other and never backwards. In the other, the causal chains can go between both operands, and then you have to worry about deadlock, and you have to tell the programmer that he is the person who has to worry about deadlocks.

Actually, I think we solved the deadlock problem by the following mechanism. Whenever an actor sends a request to another actor, the system keeps statistics on what's going on, and if we don't get a response back within a certain number of standard deviations, the program that issued the request is thrown an exception: it took too long. You can try again, but a program will never deadlock, and it will always terminate.

You've already got the solution. In fact, as I said to Francesco, I've only been hit in the face by deadlock once or twice in thirty years, because we use that same mechanism. On the other hand, you do have the nasty problem where the message doesn't come back within the time limit, the timeout fires, and then the answer arrives just after that; then you have to tell the sender, and that's another tricky problem.

That's right. The problem was solved in the same way in the transputer language occam, in which every time you waited for something you could specify a time limit, so the responsibility is put on the programmer to manage deadlock in that way.
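The mechanism described here, a synchronous request that raises an exception instead of blocking forever, is essentially Erlang's call-with-timeout. A Python sketch of the idea follows; `RequestTimeout`, `call`, and `echo_server` are illustrative names of my own, not the actual Erlang API.

```python
import queue
import threading

class RequestTimeout(Exception):
    """Raised when a reply does not arrive in time."""

def call(server_mailbox, request, timeout):
    """Send a request carrying a private reply queue; if no reply
    arrives within `timeout` seconds, raise rather than block forever,
    so the caller can never deadlock on a dead or wedged server."""
    reply_box = queue.Queue()
    server_mailbox.put((request, reply_box))
    try:
        return reply_box.get(timeout=timeout)
    except queue.Empty:
        raise RequestTimeout(f"no reply to {request!r} within {timeout}s")

def echo_server(mailbox):
    # Toy server: echoes every request back on its reply queue.
    while True:
        request, reply_box = mailbox.get()
        if request == "stop":
            break
        reply_box.put(("ok", request))
```

Note that this exhibits exactly the tricky case Armstrong raises: if the reply arrives just after the timeout fires, it sits unread in the abandoned `reply_box`, and some other layer has to decide what that means.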
I always deprecated that way of handling deadlocks, but I think it's going to be inevitable.

I remember that with occam the abstractions were great, but the transputer didn't do fair scheduling, so when you're waiting for things, some of the things lower down never get served.

Well, fair scheduling: you don't want to put the burden on the programmer to specify a time limit for everything, just as you didn't want to put the programmer in the business of doing garbage collection by hand. You want the system to handle it automatically for you, keeping the statistics and the number of standard deviations.

There's a story from the past about that timeout technique. I gave a talk about Erlang, and you were listening, and you had one question: how do you choose the value for the timeout?

You immediately hit on the key. The answer is: don't put any burden on the programmer. As with automatic garbage collection, you put the burden on the system to keep the statistics, at whatever level of the abstraction hierarchy you are living at.

The Ericsson designers actually did this very well, because they have two kinds of protocols. The first is remote procedure calls that are known to terminate very quickly: you send a message and get an immediate answer back, so it's OK to busy-wait for that. In the second kind you know it's going to take a long time, so an acknowledgment is sent back immediately and then you wait a long time for the result. The protocol designers say which of the two cases applies.

I think that's absolutely right for any comms protocol. And among the software layers in a concurrent system you have the concept of a transaction, an atomic event which stretches across more than two components, and that is, I think, a very important idea.
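The suggestion above, letting the system choose the timeout by keeping statistics and flagging anything beyond a certain number of standard deviations, can be sketched with Welford's online mean/variance algorithm. The class name and parameters below are my own illustration of the idea, not a published implementation.

```python
import math

class AdaptiveTimeout:
    """Keep running statistics on observed response times and derive
    the timeout as mean + k standard deviations, so the programmer
    never has to pick a number by hand."""
    def __init__(self, k=4.0, floor=0.001):
        self.k = k          # how many standard deviations to tolerate
        self.floor = floor  # minimum timeout while samples are scarce
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0       # sum of squared deviations from the mean

    def observe(self, response_time):
        # Welford's numerically stable running mean/variance update.
        self.n += 1
        delta = response_time - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (response_time - self.mean)

    def current_timeout(self):
        if self.n < 2:
            return self.floor
        stddev = math.sqrt(self.m2 / (self.n - 1))
        return max(self.floor, self.mean + self.k * stddev)
```

Each completed request feeds `observe`, and the next request uses `current_timeout()`; a slow service automatically earns a longer leash, and a fast one is detected as wedged quickly.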
There are many implementations of transactions, and yet people are reluctant to put them into programming languages.

I think a remote procedure call could actually say: I send you a message, and one of two things comes back, either "here's the answer" or "I'm going to give you the answer within ten seconds." I think you should tell me how long you think it's going to take.

Well, that's like transactions, but with decent time bounds built in.

Transactions have never been successful for distributed systems, and now everything is a distributed system, including what's going on inside the chip, so I have my doubts about them. At a low level an actor gets a message and tries to process it, but even then anything between any pair of instructions can be blown away. The way you scale is that transactions are basically serialized through an actor, and then you have to make sure you've got the fault tolerance around it in case you lose that actor.

That's right, because actors don't do replay; that's done in a different layer, so you're hiding the complexity away from the programmer.

Even good clocks: you can probably get synchronization down to about 100 nanoseconds, and across a chip that's good enough, but across the world? Whatever the granularity, can you actually trust global time synchronization? It's very difficult. Google has been pursuing it, and now it's causing them tremendous amounts of problems: they thought they could rely on global time synchronization, and they find that they can't. There's a tail, and the synchronization error coming off
that tail was causing a reliability problem. So now they're going back to what Tony was talking about, namely the causal model, because message passing, like the semicolon, also moves events irreversibly forward in time: it creates a chain of messages from here to there that is irreversible.

I used to work in astronomy, and the astronomers can get clock synchronization down to about a nanosecond; if you could propagate that out, that would be something.

Or the best you can do is just accept that you have to live at different levels of granularity, and you don't want to import all the problems of the lower levels every time you write a higher-level thing. Higher-level things tend to be slower, because they're implemented in terms of lower-level things, but the high levels, which is where the real application-oriented actions happen, are relatively not so sensitive to overhead as the levels below.

Now that we have these IoT devices, an actor is going to have a distributed implementation. For a group of IoT devices, like the IoT devices in your house, you need to group them as a new unit of abstraction: you and your IoT devices. It's now a citadel. Currently we have firewalls, which are just terrible, so we need a new level of security: these citadels, which protect a unit of IoT devices and people from the bandits on the internet. The devices have to be grouped together, and within the citadel they use cryptographic protocols between the IoT devices, because you really can't trust what's happening outside.
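The causal model referred to above, ordering events by chains of messages rather than by synchronized wall clocks, is classically captured by Lamport's logical clocks (Lamport, 1978). A minimal sketch:

```python
class LamportClock:
    """Logical clock sketch: each process keeps a counter instead of
    trusting a synchronized wall clock. Local events tick it; every
    message carries a timestamp, and receiving one advances the
    receiver past it, so a message moves time irreversibly forward."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # A local event: time moves forward by one step.
        self.time += 1
        return self.time

    def send(self):
        # Stamp an outgoing message with the sender's current time.
        return self.tick()

    def receive(self, message_time):
        # Receipt forces us past both our own past and the sender's,
        # which is exactly the irreversible causal chain above.
        self.time = max(self.time, message_time) + 1
        return self.time
```

If event A's timestamp is not less than event B's, A cannot have caused B; that weaker guarantee is often all a distributed system actually needs, and it costs nothing in clock hardware.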
What do you think about distributed protocols where you deliberately slow everything down? For example, Bitcoin's fundamental design says: we've got to propagate this to the entire world, which will take ten seconds, and therefore we have to slow every computation down so that it takes ten seconds; if someone gets faster, we make the computation ten times more difficult.

If your business model is to make things slower, your competitor is going to beat you every time.

That brings me to the next question. We've been talking a lot about developments over the last twenty years, but at times it feels like the industry takes two steps forward and one step back. What is the development that has most depressed you in the last ten to fifteen years, made you sad, made you angry?

Proof-of-work?

No, it's the mass surveillance that Snowden revealed. Surveillance is being done on a totally amazing level, and the amount of information that the companies and the intelligence agencies are collecting on us is just astounding. And the question is whether they will get everything, because in ten years or so we're all going to be wearing holo glasses as the replacement for our cell phones, and if there's a backdoor in the holo glasses, they see and hear everything that you see and hear and do. It's an absolutely terrifying prospect, but you can't resist it: you'll have to use them in your job. Right now, if I gave this phone up I could no longer function in my life; I couldn't coordinate with people, I couldn't get my job done. The same will be true of the holo glasses once they're lightweight and comfortable and don't make you look like a bug-eyed monster, as the current entertainment ones do. And that's happening, because the companies in Silicon Valley have the prototypes, and big companies will be shipping them in just a couple of years.
At some level, the interference in elections and referenda is even more horrific, because it really is very easy now to buy votes. When that happened in the Roman Republic, when people got rich enough to buy votes, the Republic fell. And Sweden is a democracy. I think the political implications are frightful.

Will we be seeing concurrency-oriented programming becoming mainstream?

Oh, it already has. If anything, the default application, the default system, is going to become an intelligent system, because now we're going to have the capability to build them, and just getting the response times down demands it. If you've got a server on the internet, you think you're doing pretty well giving a hundred-millisecond response time; the holo-glasses people laugh at 100 milliseconds, they talk about ten. That puts an enormous demand on how well the thing has to perform, and the only way to meet it is with concurrency.

I think we're going to move to the sort of structure the brain has. When I was at Ericsson, if you looked at how mobile phones are made, they've got a video codec and an audio codec, and the brain has its visual and audio parts with specialized hardware for them. If you look at the sort of chips we build: first there's a lot of confusion, a lot of different video codecs, and then somebody says, this is the best video codec, we'll build it in hardware, and this is the best audio codec, and then speech recognition; these become standard components. You bake them into a tiny little chip, wire it up with a lot of memory and very fast communications, and then development stops until we get new versions of the chips. So we'll have neural-network chips that are very, very fast, and that will change how we program.
We've been talking about centralized hierarchies; let me take you to decentralized models. What are your views on blockchain, on Solid, and on Sir Tim Berners-Lee's decentralized web? What role do you see concurrency playing?

Blockchains are very slow, and they're easily captured: in Bitcoin, for example, the big Chinese miners can outvote anybody else. So that won't work. The other thing we've learned is that performance is enormously important; you have competition, and you have to have a business model to have any effect on the world. Unless Solid can compete on business model and on performance, it won't matter, even if it has nice ideas. I disagree with Joe there: we once thought back-links were a great idea, but back-links completely don't scale. What was absolutely necessary in order to have a scalable web was one-way links. Actor addresses, for example, don't have back pointers, because that would completely kill performance: there might be some popular actor whose address is held by millions of other actors that could send it a message, but that one actor can't be held responsible for knowing everybody who has its address. Scalability has become a crucial issue, and it's a driving force for concurrency, because concurrency is the only way to get the scale and the performance.

I think deployment is the problem, because even if somebody made an open-source decentralized application, it needs fifty million users to take off, and Apple and Google and the rest have dominated this way of deploying something to hundreds of millions of people. That's very difficult to break; basically it's the first one to get a hundred million users. So it's very difficult: you have to have a business model.
And for the citadels, the business model again is going to be advertising, because how else do you compete with free? There's a business to be had between your citadel and the merchants that want to sell to you, matching you up with them; that's basically part of the advertising business. If somebody built a citadel on that basis, they could fund the whole thing out of advertising, as Google currently does with a centralized model. The problem we have is how to bootstrap it: how do you get a big player to make the conversion? It completely scares them, because it runs contrary to their current business model.

What I don't like is the asymmetry in knowledge: Google knows everything about us, and we know nothing about Google. I think when people start to realize that that asymmetry can be used for political and economic purposes, they will demand change. Maybe something like the AT&T break-up: why isn't Google being split up? Why doesn't the European Union have something like Google of its own?

Exactly. They have toxic knowledge. Having access to our sensitive information in their servers is actually going to be very bad for them, because once the people in England realize that the Americans have all this intimate knowledge of British citizens in their data centers, they'll realize that's a national security risk. That's, for example, why Uber was pushed out of China: the Chinese government didn't want a foreign company knowing the travel habits of the citizens of Beijing, so they bought them out. Storing this sensitive information is actually toxic to these companies; they just don't realize it yet.
Now they're being forced to store all the information in each country: you have to store Chinese citizens' information in China, and then you have to be domiciled in China, which means you've just been broken up; you can't be an international company. And not only that: if you've got the sensitive information in your data centers, then all of a sudden the security service of your country comes and says, we want it. Then they discover that they don't just want the bits, they want your tool chain, because if you're Google or Microsoft, the only way they can use the data is through your tool chain. So then they have their own little building inside your company, and that's a pain in the tail; they have all these companies they have to get bits from, so they want you to standardize your stack. And the company, because it holds this sensitive information, becomes a prisoner of the government, because now the government wants the information.

So we've gone from concurrency to resilience to scale to social and political questions.

They're all linked together, that's right; there's no doubt about it.

Closing remarks. Maybe you start, Joe: how would you sum up the future in one sentence?

I don't know. I imagine a historian, two or three hundred years from now, writing the history of this period, and it will just be like the Dark Ages, an age of confusion. Will it end with computer failures that kill millions of people, or will it transition into something that is for the benefit of mankind? We don't know at the moment, and I don't know how long it will be before we do; maybe in twenty years' time, or fifty. At the moment it's very confused.

I don't really have anything to say about the distant future, but I would like to go back and make a suggestion about security enforced by runtime checks. The way that security is enforced
at the moment is by sandboxes. If we extend the idea of abstraction downwards, we get the idea that you can specify a security protocol and enforce it by interrupting the progress of the higher-level users and checking, in real time, all the time, that they have conformed to the protocol. Conceptually we're reusing the same concept of layering, so you can then have multiple dungeons of security, where you're digging underneath the program to check that it satisfies protocols which are believed, by the people who prove things, to implement your desires about what can and cannot happen.

We're now embarked on one of the most complex engineering projects we have ever attempted: to build the technology stack for these scalable intelligent systems. The Chinese ministry of science has said they think they can do it by 2025, and the only way to build these systems is to use massive concurrency; that is what gives you the performance, the modularity, the reliability and the security that you need. The big question is what they will be used for. We want to use them for things like pain management, which is a huge problem in the US: pain management without opioid addiction, and our solution is to use these scalable intelligent systems. But they could be used for other things; they could actually become the basis of universal mass surveillance. So we are at a turning point.

Why can't we use things that don't scale? Scaling seems very hard.

The economics demand it. Nobody forbids you from using non-scalable techniques, but the requirement is that ordinary people, who work at no more than two levels of abstraction and scale at a time, can use the same concepts all the way up, and concepts that break down at the highest levels are inappropriate.

Well, this technology stack for these things: as you say, there are all these levels, all these different abstractions, and so on. These are complex beasts.
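The runtime-check idea above, a lower layer that interrupts each action of a higher-level user and verifies it against an allowed protocol, can be sketched as a monitor over a CSP-style state machine of permitted message sequences. The states, events, and the login protocol below are my own illustration, not from the talk.

```python
class ProtocolMonitor:
    """Enforcement-by-runtime-check sketch: every action the higher
    layer attempts is checked against an allowed state machine before
    it proceeds; anything outside the protocol raises immediately."""
    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start

    def check(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise PermissionError(
                f"protocol violation: {event!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]

# Illustrative protocol: a session must log in before reading,
# and may not read after logging out.
session = ProtocolMonitor(
    transitions={
        ("idle", "login"): "authed",
        ("authed", "read"): "authed",
        ("authed", "logout"): "idle",
    },
    start="idle",
)
```

Because the monitor sits in a layer below the program, the program itself needs no trust: any sequence of actions it attempts is either a path through the state machine or an exception.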
So this leaves us some food for thought. Thank you so much for being part of this. Thank you, thank you.
Info
Channel: Erlang Solutions
Views: 8,238
Rating: 4.9816513 out of 5
Keywords: #talkconcurrency, Actor models, Concurrency, Functional programming, Carl Hewitt, Joe Armstrong, Sir Tony Hoare, Erlang, Elixir
Id: 37wFVVVZlVU
Length: 66min 44sec (4004 seconds)
Published: Tue Feb 19 2019