Concurrency vs Parallelism | C# Interview Questions | Csharp Interview Questions and Answers

Video Statistics and Information

Captions
Hello and welcome to the Questpond YouTube channel. Today's topic is around four vocabularies which are very confusing for developers: the first one is concurrency, the second one is parallelism, the third one is asynchronous, and the last one is threads. All four of these words look very similar, and many times developers think they are synonyms, and that ends in a bad design. This is approximately one hour of video, and I have divided it into two parts: in the first part I will discuss concurrency versus parallelism, and in the second part I will talk about asynchronous versus threads. Both of these videos I am uploading on this channel itself. And if you get a chance, please go and visit my questpond.com website, where I explain complex IT things in a simplified way.

A lot of developers have confusion around concurrency and parallelism, and many think they mean one and the same thing: executing multiple tasks at the same time. Now, this definition is 50 percent right, but we need to add more to it. Concurrency means executing multiple tasks on the same core, while parallelism means executing multiple tasks on multiple pieces of hardware — these can be multiple cores, or multiple machines. So in concurrency you have two tasks, but they execute on the same core. For example, in this image we have one core and two tasks, T1 and T2. What the single core does is give some time to T1, then context switch — that means it switches to T2 — then execute some part of T2, and then context switch back to T1. When you talk about parallelism, the tasks are executing on different hardware, which can be separate cores or separate machines, so they just run in parallel and execute as fast as possible. We can also conclude that concurrency is a feel of parallelism, while parallelism is real parallelism. In concurrency we just have time slicing — some time is given to T1, then a switch, then some time is given to T2 — so there are context switches, while in parallelism there are no context switches.

Now, the next question is: do we really need to understand these terms separately? Because if it were just about executing on a single core versus multiple cores, we would always want to execute on multiple cores — if we execute in parallel we are utilizing our cores, utilizing our resources properly, and getting good performance. So why worry about whether we are running on a single core or on multiple cores? We should be aware that the goals of concurrency and parallelism are very different, and if you mix up these goals you will end up with a bad design. The goal of concurrency is to have a non-blocking, usable application: your application should not hang. If something is running in the background, it should not affect how the end user is using your application. It is about making your application usable, not about performance — you do not intend to make the application faster, you just want to make it usable. The goal of parallelism is performance: you want to complete the task as fast as possible, you want to utilize your hardware, you want to go rocket fast. So one is all about making your application usable and non-blocking, the other is all about performance, and if you mix both of them you get a bad design.

So let me explain with a simple C# example. This is a console application which is doing a couple of things. The first line of code here is orchestrating — I am using that word very purposely — the download of file 1; I have used a Task.Delay to make the application wait for 10 seconds. Then there is a second file which is getting downloaded, again orchestrated by a delay. And finally the end user can input some data into the application. So there are three things happening: downloading of file 1, downloading of file 2, and then entering of data. If I run this application, the first thing that happens is the download of file 1 starts, there is a 10-second delay, then the download of file 2 runs and finishes, and only afterwards does the end user get the data entry screen. Now tell me, how good is this? The end user has to wait for about 20 seconds to get the data entry screen, which makes the application not user friendly — it makes your application blocking. So now you want to make your application more usable: you want to break those file downloads into separate computation units and run them in the background. But at this moment you do not have performance in mind; you are not thinking that your application has to perform better. What you are thinking is how to make it user friendly, how to ensure that the end user gets the data entry screen as soon as possible rather than waiting for the files to download.
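The transcript doesn't show the exact code on screen, so this is a minimal sketch of the blocking version being described, with Task.Delay standing in for the real downloads (the messages and variable names are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Orchestrate the download of file 1 (simulated with a 10-second delay)
        Console.WriteLine("Downloading file 1...");
        Task.Delay(10000).Wait();
        Console.WriteLine("File 1 downloaded");

        // Orchestrate the download of file 2 (another 10-second delay)
        Console.WriteLine("Downloading file 2...");
        Task.Delay(10000).Wait();
        Console.WriteLine("File 2 downloaded");

        // Only after roughly 20 seconds does the user get the data entry screen
        Console.WriteLine("Enter your name:");
        string name = Console.ReadLine();
        Console.WriteLine("Hello " + name);
    }
}
```

Calling `.Wait()` on the delay blocks the main thread, which is exactly why the data entry screen appears only after both simulated downloads finish.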
At this moment it is not that I want to run things in parallel on different hardware, different machines, different cores; it is just to ensure that I can run all three of these tasks concurrently. If somewhere in your mind performance is not the criteria, and it is more about usability, more about not making your application blocking, then it is concurrency you have in your mind. So what you will do here is put these three things into separate units. One of the basic things about concurrency is that you need to design how you think your concurrency will work: you need to divide your application into individual components, individual computation units, which you want to run concurrently. At this moment I can think of three units: I would like to run the download of file 1 concurrently in the background, I would like to run the download of file 2 concurrently, and then the entering of data can happen concurrently. But I do not have the thought that I want to run this on a separate processor or a separate computer; what I have in mind is that I want to make this application usable. So let us go ahead and put each of these into its own method. I am going to quickly refactor this: extract the first download into one method, and extract the second download into a separate method as well. You can see I have created two methods here, NewMethod and NewMethod1, and now what I would like to do is run NewMethod concurrently, run NewMethod1 concurrently, and run the data entry concurrently.

So what I have done is some small software design: I have divided my application into three individual computation units, and I want to run them so that the end user can use my application properly. But even though we have divided the application into individual logical units, they are still running synchronously — first NewMethod runs, then NewMethod1 runs. In other words, if I run my application at this moment, it still shows the same behavior: it first waits for ten seconds, then again waits for ten seconds. We have achieved the first goal of recognizing those individual computation units, but we are still not running them concurrently. Now, in C#, if you want to run something concurrently — or I'll say asynchronously; I will come to the word "asynchronous" later, because I don't want to bombard you with too many things — we have the async keyword. So I can mark these methods async and await the tasks. Notice that I am not trying to create threads here: I have not said `new Thread`, because I am not in a mood to run in parallel. Performance is not my goal; my goal is usability. See how carefully I am not using the `Thread x = new Thread(...)` syntax and not using the TPL, but using async and await — because my intention is to run things in the background without creating a lot of threads. In case you do not have an idea of async and await, I would suggest you go and see my video where I explain exactly what the async and await keywords do.
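A sketch of the refactored, concurrent version — the method names NewMethod and NewMethod1 come from the video's extract-method refactoring, but the exact bodies are assumptions:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Computation unit 1: the download of file 1, simulated with a delay.
    // Awaiting Task.Delay does not block a thread while the delay runs.
    static async Task NewMethod()
    {
        await Task.Delay(10000);
        Console.WriteLine("File 1 downloaded");
    }

    // Computation unit 2: the download of file 2
    static async Task NewMethod1()
    {
        await Task.Delay(10000);
        Console.WriteLine("File 2 downloaded");
    }

    static async Task Main()
    {
        // Start both downloads; note that no new threads are created here --
        // the awaits simply free the main thread while the delays are pending
        Task download1 = NewMethod();
        Task download2 = NewMethod1();

        // Computation unit 3: the data entry screen, available immediately
        Console.WriteLine("Enter your name:");
        Console.WriteLine("Hello " + Console.ReadLine());

        // Keep the process alive until both downloads complete
        await Task.WhenAll(download1, download2);
    }
}
```

The key design choice is that we started the tasks without awaiting them right away, so the data entry prompt runs while the downloads are still pending.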
So let me go and run this now. You can see the first thing that happens is that the end-user screen pops up saying "enter your name", so I can enter my name here, and in the background the downloads of both files are happening. This is a more usable application now. I did not run on multiple processors, but I made my application more usable. And if you look internally — if I run this, go to Debug Windows, and look at the threads running in the background — you will see that there is only one thread running: the main thread. It has not created lots of threads; it is just one thread doing time slicing, giving some time to this task and some time to that one. It gives the end user the feeling of concurrency, the feeling of parallelism — it plays to the psychology of the end user — but at the same time I am not stressing my resources, not stressing my cores. So ask yourself one question: do you intend performance, or do you intend usability and non-blocking? If you are intending non-blocking, it is concurrency; if you are intending performance, it is parallelism.

Now, because you have done a good deed here — you have divided your application into individual computation units — you get a bonus: you can take those same individual computation units and run them in parallel. In other words, I can take NewMethod and run it on a separate thread, or on a separate task. In .NET we have something called the Task Parallel Library, which helps you take a piece of logic and run it on parallel processors. So now I am running these individual units in parallel. At this moment I should have three threads: the main thread, another thread which runs NewMethod, and another thread which runs NewMethod1. If I debug this, you should see two or three more worker threads, because at this moment I am thinking about parallelism. The bonus is that because you have divided the application into individual units, you get the option of running them in parallel. But please note that when you started designing for concurrency, you never thought about this, so it is also possible that these individual computation units are a bad choice to run in parallel. In concurrency, these units normally talk with each other directly: they share common data, they throw events to each other, there is communication happening. If those tasks communicate too much with each other — if they are chatty — then it is possible you won't achieve the performance of parallelism. In parallelism, the tasks should not talk with each other; they should just run like race horses, without communicating directly. So not every concurrent application is a good choice to make parallel; for a parallel application you need to think differently, you need to think isolated. This is just a bonus — it is not the reason you made your application concurrent in the first place. So please note: it is possible to take a concurrent application and run it in parallel, but there is an "if and but" here — if those tasks are very chatty with each other, then the parallelism will not translate into performance.
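The transcript doesn't name the exact API used to put each unit on a worker thread, so this sketch uses Task.Run from the Task Parallel Library as one way to do it (the method bodies are the same assumed simulated downloads):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void DownloadFile1()
    {
        Task.Delay(10000).Wait();   // simulated download of file 1
        Console.WriteLine("File 1 downloaded");
    }

    static void DownloadFile2()
    {
        Task.Delay(10000).Wait();   // simulated download of file 2
        Console.WriteLine("File 2 downloaded");
    }

    static void Main()
    {
        // Task.Run schedules each unit on a thread-pool worker thread,
        // which is why the Debug > Windows > Threads view now shows
        // extra worker threads besides the main thread
        Task t1 = Task.Run(() => DownloadFile1());
        Task t2 = Task.Run(() => DownloadFile2());

        Console.WriteLine("Enter your name:");
        Console.WriteLine("Hello " + Console.ReadLine());

        Task.WaitAll(t1, t2);
    }
}
```

Compare this with the async/await version: here real worker threads are created, which is appropriate only when performance, not just responsiveness, is the goal.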
Also, one more important part about concurrency is that it is non-deterministic. In other words, when you run a concurrent application the first time you can get one sequence of output, and if you run the same concurrent application again, it is probable that you will get a different sequence of output. Why? Because your core is doing context switching: it gives some time to one task and some time to the other, and depending on the "mood" of the core — by which I mean depending on whatever the situation is — it can decide to give more time to one task and less time to another, so you can get a different sequence of output. For example, in this code I am first calling NewMethod, so logically file 1 should start downloading first; then I am calling NewMethod1, so file 2 should follow. But it is possible — let me run this; I have just increased the delay to 20 seconds because I want to play around with the core and simulate that situation — that file 2 finishes first even though the download of file 1 started first. Let us wait and see what happens. There it is: you can see that file 2 has downloaded first and file 1 finished later. So concurrent applications are non-deterministic. Please note that the final output is still proper — you get the result you expect — but the sequence of work can be different. That is one of the biggest characteristics of a concurrent application. When you talk about a parallel application, it is deterministic. Why? Because each task is sitting on its own core, using the hardware exclusively, so you can determine what the output will be at the end.
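A sketch of the reordering demo just described — the transcript only says one delay was raised to 20 seconds, so the shorter 5-second delay for file 2 is an assumption made to force the effect:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Download(string file, int ms)
    {
        await Task.Delay(ms);
        Console.WriteLine(file + " downloaded");
    }

    static async Task Main()
    {
        // File 1 is started first but takes longer, so file 2
        // finishes first: start order does not fix finish order
        Task t1 = Download("File 1", 20000);
        Task t2 = Download("File 2", 5000);
        await Task.WhenAll(t1, t2);
    }
}
```

The completed work is the same either way; only the ordering of the intermediate output changes, which is exactly the non-determinism being demonstrated.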
You can get the same output again and again, because each task is not chatting with lots of other tasks. But in concurrency, because the hardware is shared, it is possible that you will get a non-deterministic output sequence, so please keep that in mind as well.

With all the demos and all the theory we have discussed, let us put up a conclusion sheet. On the basic definition, both of them mean executing multiple tasks, but concurrency means executing multiple tasks on the same core, while parallelism means executing multiple tasks on different hardware, which can be different cores or different machines. In other words, concurrency is a feel of parallelism: it uses the fundamental of time slicing, and it does context switches. The goal of concurrency is making a program usable, making a program non-blocking, so that the end user feels good about your program; the goal of parallelism is performance — you want to increase performance X times. In concurrency, the most important part is how you divide your application into individual computation units and how they talk with each other; in parallelism, it is about having parallel units and executing them on different hardware. So the perspective of concurrency is more about software design — it is about composing those individual units: one will stop, one will send an event, the main thread will stay non-blocking, they will communicate with each other — while parallelism is all about executing on X pieces of hardware: you have four cores, you have four tasks, you execute them in parallel. The perspective there is more about hardware, and because we are talking about more metal, it is heavy, while concurrency is lightweight because it runs on a single core. And another thing, which I have already discussed earlier in this video,
is that the design you make for concurrency is not necessarily best suited for parallelism. It can only become a bonus — you can take the composed individual units of a concurrent design and run them in parallel, but in the first place they were never intended to run in parallel. For parallelism, you need to design your tasks completely decoupled and independent. I would also like to make one statement here — maybe you won't agree with it — but I personally feel that parallelism is a subset of concurrency: concurrency is the bigger picture, and parallelism is actually a subset of it.

I would like to end this discussion of concurrency and parallelism with the slide here. On the right-hand side is concurrency: only one person is juggling the balls, giving some time to one unit and some time to another, but he is not appointing another person to help. On the left-hand side is parallelism: a lot of people are working, with dedicated desks, dedicated machines, dedicated units — they are all different individuals working towards growing the company and performing better. On the left-hand side the goal is performance, to do things faster; on the right-hand side the goal is more about making you feel as if things are working in parallel. Remember that if you start thinking of both of them as one thing, you will end up with a bad design. These two things have to be thought of from different perspectives; if you don't think from a different perspective, your code will look very weird. Now, however much I say, talk, and demonstrate here, at the end of the day there will be a certain group of developers who will not agree with this, and I do understand,
because these words look so synonymous that people won't agree. But at the end of the day, we as a community have to come to common ground around some vocabularies, or else there will be a lot of confusion. So here are two more links you can have a look at: the first link points towards Rob Pike's talk, where he discusses how concurrency is all about design and parallelism is all about hardware; the second link is a Stack Overflow question which shows how the community looks at both of these terms — there is a very nice debate happening there, and reading those comments can probably enlighten you further.

This brings us to the end of this part 1 video. In the part 2 video we will talk about one more controversial statement: that the asynchronous concept uses threads in the background. That is a very misunderstood concept. Why did I create part 1 and part 2? Because concurrency is very much connected with asynchrony, which is why I have kept both of these videos one after another. So let us start with part 2: does async use threads in the background? When you create code which is asynchronous in nature, does it mean that it has to create threads in the background? Thank you very much.
Info
Channel: Questpond
Views: 50,949
Rating: 4.9047618 out of 5
Keywords: concurrency vs parallelism, concurrency, parallelism, parallelism in c#, concurrency in c#, c# concurrency tutorial, difference between concurrency and parallelism, parallelism vs concurrency, csharp concepts, c# basics
Id: 8Je1W82vwYM
Length: 22min 12sec (1332 seconds)
Published: Thu Jan 24 2019