Complete Redis, WebSockets, Pub/Sub and Message Queues Bootcamp

Captions
yeah seeing today's lecture is going to be enjoyable and easy as we transition to 1 to 100 you will see this very often in fact you might have seen this in the cicd class as well okay you know this is cicd like it's so simple and um most you know as you move to system design As you move to senior roles the things that you are learning are harder conventionally you know it feels very jarring but once once you understand them they are so easy and they give you so much power this was it and you can you know negotiate better salaries or you know answer questions and senior engineer interviews so on and so forth and they're also more interesting they are one step ahead of everything we've done today is one such class um before we start the class I will share you know some where we are at what we are doing what we are going to do I've written a few points this is purely based on talking to a few people who are now being referred and you know I'm seeing some inconsistency some things that you should try to fix if you are going to if your goal with the cohort was getting a job um then number one if you weren't able to afford much you got 0 to1 0 to one people can stay till the end of the month learn some SC stuff for the next one month um and 0 to1 is almost complete there are some very off topic random things like open API spec that we need to cover C bot I'll add them as offline videos but I think it's time to move on to you know some some solid scor engine and stuff um start contributing to these repositories you have enough knowledge a lot of your peers are already doing it um and there are decent bunch of issues there's like a lot of things to improve one small thing I can point out is you know since soam has lectures now or you know we'll probably have DSA lectures a very good thing to add might be okay if this is a DSA track and you know there's a problem the problem statement comes right here it's not very difficult to add there are some parts that are difficult but UI is easy simple back end is easy um there are harder stuff which you know we'll cover in 1200 but simple things you can start to pick up there are a lot of issues there I would start contributing only then do you know okay do you really understand it can you understand other people's code so people who are in 0 to one try this out there are a lot of people contributing I'm not being able to keep up with PRS but I will like I'll be spending a decent junk of time here um please focus on English for remote jobs unfortunately you know the first filtering criteria is everyone wants to know can they talk do they have decent communication skills this was a great thing about abishek this is a great thing about har PR as well sorry I saying is her PR right yeah they're both pretty good in English even though they're good in Hindi they're good in English as well speak slowly get a mic have a decent background you know simple stuff if you're going for an interview as an intern I think this is decent is to ask kirat you to talk about 4cr yeah as an intern even I did not make this much in college like this was my this little bit more than this was my Goldman offer so I think as a college intern this is a decent task as a fresher fulltime this is a decent task if you have more experience frankly pick and choose but I think this is a decent task that's all I wanted to share before we start we're very close to I'm officially concluding 0 to one but there will still be more classes that will be beginner friendly sort of and you know a bunch 
of offline videos that will come throughout this month 0 to one people can stay this month with all of that context let's get right into today's session today's session is on Advanced backend communication this is the first topic of 1200 where we'll understand can we we were able to understand HTTP until now what else exists how can backend systems talk to other backend systems that is the context of today's class the most important implementation we will do today will be of websockets which is this thing you see right here but there are many ways for backends to talk to other backends we'll discuss a bunch of these ways today we'll discuss the implementation of websocket today tomorrow probably we'll go through either psub or messaging cues that's the goal for this week okay are very jarring but actually fairly simple backend variable to cover let's get right into today's session this is a brief graph of how backends communicate but before that you might ask kirat backends communicating why does one nodejs backend need to talk to another nextjs back end or goang backend why do does this need to hit this server usually communication happens between a client there is a browser over here that hits a backend system why is one backend system system talking to another backend system and the answer is as your application grows um you don't want to keep everything in a single server that is exposed over the internet you are literally saying by this backend server people can hit from their browsers and I will put everything here including sensitive things that you know probably you probably don't want exposed here also there is something called asynchron processes asynchronous I understand a little bit what is an asynchronous process a very good example is a notification that you might get on your phone when you transfer via Google pay when you transfer someone 100 rupees via Google pay the actual transfer minus 100 on your bank plus 100 on their Bank needs to happen on this layer most probably most probably the primary back end needs to take care of it but the notification the email eventually going up to them the push notification coming out the SMS going out can happen in its own time even if it happens after 10 seconds it's fine the primary backend server that's handling the core of your business should probably not be worried about this another good example is analytics if you have a lot of people coming to your website your primary backend server your Express server shouldn't be tracking oh a million requests are coming let me put them in the database it should be offloading all of this to someone else even if the final response you know comes after some time another good example of this is if youve if you go to any exchange or backpack. 
exchange for example the trading bit if I actually trade something needs to happen instantly but if you look at the rewards which you know let's say show you how much volume have you done um which you know as it says here $166,000 or whatever this even if it happens after one day it is fine this is not a necessary thing for the person to see right now but trading trading needs to happen instantly these are examples of core system backend handling the you know requests of the user and core thing that they need instantly and then a bunch of backend processes handling asynchronous requests after a day we will calculate how much money have they made another good example of this is if you go to Duan if you have heard of Duan Shopify if you go to my du .c and sell some t-shirt you will see all of your orders where is Duan I mean I won't open the dashboard but you get the idea if I log in over here I will see a dashboard like this the total sales are usually you know they come at end of day at end of day you see AA today you made 54,000 rupees sale but every order you're seeing immediately so this is an asynchronous process that will calculate how much money you made throughout the day everything else is a synchronous process these asynchronous processes usually run on different servers and your primary server needs to talk to them it needs to tell them okay by please after one day calculate the total money this person has made today on Duan what is Duan it's a place it's like Shopify an e-commerce website where you can sell products to other people and then you have a dashboard where you can track how much money they have how much sales they have done and you know their total sales of the day so on and so forth so this is how General backends are architected you don't put everything in a single place that's the worst idea ever this is also called micro Services if you have heard of it you have various small small Services talking to each other you don't have one monolith you don't have one very big server that is why backends need to talk to each other the question is how do backends talk does this backend server send an HTTP request to other backend server let's take an example let's say you are on Duan and someone made a sale as in your some end user bought your product a request will come here okay the product has been bought you will update the database the product has been bought should you tell the aggregator service what is the aggregator service the service that is aggregating your sales for the day via an HTTP call K by please add 200 Rupees to their total sales of today or should you send it to a queue where you can you know put a bunch of items in the queue okay herir sold one t-shirt for 200 rupees herir sold another t-shirt for 500 rupees so on and so forth and then another process picks it up in its own time 200 rupes I will pick I will add to the database 400 rupes I will pick I will add to the database or should they communicate via something called websockets we'll discuss what these are today or should they discuss talk to each other via Pub Subs systems what are Pub subsystems we'll discuss tomorrow most probably but as the name suggests they're called publisher subscriber systems your backend one can publish an event and that says someone has bought a T-shirt and then this backend can pick up and send them an email this back endend can pick up send them a text message this back end can pick up and aggregate their sales this number in the database so there are you know various 
ways for backends to talk to each other we'll discuss one of them today I hope it it's clear okay why do backends need to talk to each other cool given that context let me show share some examples of you know when backend systems would talk to each other and why when you would have such decoupled backend systems good example is for PTM when you say I want to transfer 100 rupees to someone it will immediately talk to the database immediately do a plus 100 minus 100 it might actually do this also VI an asynchronous proc process but CH let's assume this happens instantly but it will push onto a queue can send them a notification send them a SMS on their phone number send them an email harat why why can't it do it over here what if the email service is down what if whoever you are using for phone number is down will you you know make the plus 100 minus 100 in the database and keep waiting okay by phone number okay it hasn't happened yet retry try do you want your end user to wait no their thing is done they wanted to swap balance the balance has been swapped everything else can happen in its own time which is why you push these onto Q's what are Q's well as the name suggests it's a q but like har show me a real implementation of a queue we will see this tomorrow most probably but like a good example here is you know rabbit mq if you've heard of it or redis also has a concept of qes I will push this onto a que and then the push notification service if it is down it's fine it will still remain in the queue whenever it comes back up it'll start to pick up messages and send people push notifications her then the push notification might take a lot of time it does when you send someone money on Google pay when your swiggy order is delivered you don't instantly see a notification you see it after 20 seconds sometimes after 100 seconds so these systems can be highly asynchronous another good example is lead code which is something like we're trying to build on projects steps.com people come here and are able to you know submit coding problems when someone comes to your platform and submits a coding problem should you on this back end itself evaluate that you shouldn't why herat because the C++ program that they are submitting might have an infinite for Loop that will take up your core of this main backend system for running that V Loop infinitely so you definitely don't want to run other people's Cod directly over here har where should we run it then you should run it on you know some separate service some separate server you should not run it on the primary back end so what can you do you can take the user submission and push it onto a queue most probably lead code and even us you know when we build this we have two cues a premium user queue K Whoever has paid their submissions will reach here and then a free user queue okay someone has not paid their submission will reach here this is being handled by a small machine or you know just a single machine or two machines premium user cues is being handled by a bunch of big machines okay why this needs to there shouldn't be any record here for more than 3 seconds because these people have paid they need to have a good experience so this is another use case where you know your backend would delegate a task to other backends and then eventually whenever the submission is complete this big machine can tell the database okay you know accepted Reg rejected tle and then you know the end user can just pull the back end and get their status that's exactly how 
lead code does it we'll see this very soon but that's a brief of backend systems talking to each other how you build bigger backends um as you have complex applications what is a complex application great example of a complex application is lead code where this is the complex part in swiggy delivery tracking is a concept part the driver keeps moving again and again that system needs to you know track them and notify the user they are close enough um in case of PTM as I said notifications is it's not the difficult part but it's the part that needs to be delicated in case of an exchange as I said the order filling needs to happen immediately when you go here and buy or if you go here and sell then the buy or sell needs to happen immediately but reward Point calculation or you know um notifying the user your order was filled can happen in its own time another good example is if I withdraw my crypto if I say I want to withdraw you know whatever I have 36 usdc I want to withdraw 30 usdc click on continue this will also happen in its own time because this is dependent on the blockchain whenever something is dependent on another service for example you never buy phone numbers to SMS other people you usually depend on another service twio is a good example for this if you've heard of twio twio lets you send smss to other people similarly send grid is an example example of a company that lets you send emails these are highly unreliable apis phone numbers can go up and down SMS might not be received whenever you're dependent on something like this you maintain a queue why hirat because whenever this service pulls the message if it fails it can just push it back to the queue and then retry after some time that is why a queue makes a lot of sense for systems where you're dependent on an external service what is a good example blockchain is a good example if you're ever doing withdrawals on an exchange or um you know this is also a good example uh sending sms sending emails so on and so forth that is a decent context I I'm I'm hoping this is straightforward uh let's do a quick poll but then let's proceed to the next section are we good to go guys yes or no all right so next up let's understand the types of communication this is common interview question whenever you in a SE interview they will ask you if your last company was there backend Communications how many did you have microservices or did you have monoliths you will say sir we had microservices that's a more you know authentic answer to give then they will ask you how did your systems communicate synchronously or asynchronously now this is jargon I did not know what s I knew what http means I knew what websocket means I knew messaging cues Pub Subs everything I knew when this was asked to me in an interview I did not know what is synchronous communication what is a asynchronous communication but if you think about it it's pretty obvious if one system is talking to another system directly as in either via HTTP ignore this you don't know web socet right now so feel free to ignore this this is also debatable if it's synchronous or asynchronous but HTTP is a good example of okay by one system is directly pinging the other system this is one system this is another system it's directly telling the other system k bro please send message via let's say an http call this is a synchronous call do it right now and give me a response I will wait for your response when you are waiting for a response from another service what is that that is synchronous what 
is asynchronous by the same you know argument when you don't wait when you just push it to a queue and you're like someone else will handle as long as it was pushed to the queue you are good you don't have to wait for acknowledgement from the other service okay H I picked it up you push it to a queue you move on you do your own thing what is a good example messaging cues when this back end is putting something in a messaging queue it is putting it there and it's like okay someone else will pick this up and send the final email my job is done that is what asynchronous communication is messaging cues is a good example Pub subsystems are a good example server side events debatable but huh take CH I'll put I did put it here and websockets as I said is debatable why is websockets debatable herat if you know websockets I'll share quickly why it's debatable because when you have a websocket connection you send the other side data you don't wait back for acknowledgement that doesn't happen in HTTP when you send some HTTP data you wait for a response when you send a webet data over a websocket wire you don't really wait back for response which is why it's very debatable whether it's synchronous or asynchronous I would say it's synchronous because you are immediately sending the other service K by do this versus in these asynchronous systems you are publishing to something a messaging queue a pubsub and then the other services can pick them in their own time you're not telling the other service directly okay by please do this so that's a brief about an interview question what are s how do communic how did your backend systems talk in your last company whenever you're giving an interview sir they talked asynchronously we had messaging cues between our main primary server and our email server the primary server would push things onto a messaging cue the email server would pull it from it and you know send it out to the end users it's a brief of the types of communication this is just sort of an interview question I had so I thought I'll share websockets is the next next one hirat why are we learning websockets is this also used for backend systems to talk to each other not really websockets is rarely used for backend systems to talk to each other it is usually usually used for a browser to talk to a server har browser to talk to a server browsers to talk to a server via HTTP we have done this so many times you create an Express server why are you introducing a new protocol called websockets for browsers to talk to a server good question the answer is websits provide you something extra there is actually a small in efficiency in HTTP not inefficiency it's just something that's missing that web sockets provide you okay harir what is that inefficiency before that let's understand what are websockets the official definition websockets provide a way to establish a persistent full duplex Communication channel over a single TCP connection between a client typically a web browser can be a backend system as well but typically a web browser and the server something like this it creates a persistent connection harat how is it different from HTTP in HTTP you don't create a persistent connection if this is your browser and this is your server you send a request to the server you wait for the response as soon as you get the response you close the connection it is not persistent send get back response good to go what are websockets websocket say yeah a lot of communication needs to happen between the client and the 
server let's create one single connection the client or the browser can send events to the server the server can send events to the browser hence the word full duplex communication what does full duplex communication mean it means duplex both the sides browser can send events to the server server can also send events to the client can you ever do that in HTTP you can never do that in HTTP HTTP me server can't send events to a browser you cannot go to the server and be like fetch HTTP col sluser domain.com SL something the server can't ever find the browser the browser can find the server by hitting a certain endo and the server can respond back the server can never push events on its own websocket server can do that that is why you might want to use a websocket server for use cases like these given I've given the sort of definition what is a use case a practical use case of websockets when do you want server to push events to the client without the client even asking for it if the client need need something it can just ask for it again and again right why can't why does a server need to push usually a good example here is again exchanges or you know 10 different applications but exchange is a good example because you'll see a lot of activity here can you see these numbers going up and down can you see these trades so many trades are happening people are buying people are selling if you use HTTP to get you know what sale has happened what sale has happened can you give me the next order can you give me the next order what is happening here some event is happening some every time you see an order here someone is going and you know buying some Solana someone clicked on buy over here you see that order over here all of these coming back if you do this via HTTP you will send a thousand requests to the server and then every second and then it'll respond back every time you send a request you do what's called a handshake I don't know how many of you have gone through computer network classes but whenever you're sending an HTTP request whenever you're doing a fetch for example you know to api. backpack. exchange slet order something like that whenever you're sending this fetch request you are sort of doing a very big overhead of sending some headers and then create getting back an acknowledgement and then again sending back another acknowledgement a three-way handshake and then you actually send the data so there is a lot of overhead you have to send with every request if you have to send a, requests a second then you have to send this thousand times not good which is why you want to use websocket servers okay by the handshake happens only once handshake happens once and then everything after that is you send data you send data you send data you send data and in this case the data is you know the orders that have been placed harir can we see this in action yes if you right click click on inspect and click on the network tab over here and refresh you will see a websocket connection which is very hard for me to find because my search bar went away this happens every time but let me search for it there you go ws. backpack. 
exchange if I click on this very similar to an HTTP request I see a websocket connection and if I click on messages I see a bunch of messages the server is sending me if you look at the body of these messages you will see it is sending you back you know okay the depth change over here the depth change over here what is depth don't worry about it but you know any realtime events that you're seeing all the trades that are happening this order book being updated is coming via a websocket connection this is a primary use case of where you want to use a websocket connection why herat why can I not keep sending okay give me the current order book give me the current order book overhead you'll be sending 20 30 requests a second just create a single connection tell the server K by please keep sending me messages and then the server will just keep sending you the current state of the Autobook the current recent trades that are happening so on and so forth that is what websocket sort of connections let you do use cases for sockets realtime applications what is a real-time application a chat application is a good example even you know this everything you see over here is a good example all these events are happening in real time you don't want to refresh the page you want these come to you quickly that is why that is one thing place where you want to use websockets live feeds this actually is a better place to put you know a crypto exchange Financial tickers the graph going up and down even if you open binance you will see you know a very similar order book over here that is getting updated this price graph will go up and down this price will keep changing as orders get matched as you can see it became something else then something else this is a realtime application a ticker ticker being you know the current price is a good example of where you want to use realtime communication Interactive Services yeah Google Docs or you know whatever replit with two people typing at the same place is also a place where you want to use web soins why not use HTTP as I said you will keep sending requests again and again and again and you don't want that you don't want to send there's a lot of overhead that comes with every request what is overhead herat your you're sending a lot of extra data along with please give me soul price you are doing a three-way Network handshake which is you know very computer networks concept but if you've read through it you're sending a request the server is responding back I got you and then you respond back with I am I also got your acknowledgement you do a three-way handshake and you are doing it again and again for every HTTP request and you can still do this companies still do this this is even though it's an ugly approach Company still do this in fact lead code does this when you go to lead code.com and try to you know create a submission let me quickly do that my God my eyes white mode okay there you go dark mode if I do a submission over here if I go to to some problem open the network tab click on network when you click on submit as I said the application might take 1 second 10 seconds doesn't matter eventually it'll do your submission and it needs to push an event it needs to tell you okay your submission succeeded or failed so websockets is a good use case here because the server needs to push an event whenever you're submitting to lead code if this is lead code server you tell them I have created a problem or you know submission it will tell some random it will push this 
onto a queue some other service will pick this from a queue and then actually do the submission and then whenever the submission is succeeded it will put this in the database okay ha by successful or failure ideally when this event happens it should also tell a websocket server the submission failed or succeeded and then this backend websocket server should respond back to you you know you got a wrong answer that's how I would expect to build this system but how does lead code do it lead code uses something called pulling what they do is okay by you submit the problem it is still doing a very asynchronous operation you know request going to a queue getting picked from a queue getting solved and then the final submission being stored in the database but the user doesn't connect to the lead code server via a websocket connection I repeat the user doesn't connect to the lead code server via websocket connection it just keeps asking every 1 second is my submission done did I get a tle did I get a wrong answer it keeps sending that request again and again it keeps polling the lead code server did it happen did it happen this is another way to do realtime communication you can always poll a server okay by did my thing happen you wanted to send me something can you please send me rather than the server sending you something you can just keep asking again and again do you have my data do you have my data do you have my data and that is how lead code does it if you click on run over here you will see a bunch of Slash Che requests are going out the first request does the submission this one most probably inter interpret solution we look at the payload I am sending my code over here and then a request goes out to slash check that responds with status pending another request goes out to/ check status pending another request goes out I am polling do you have my submission do you have my submission and then eventually you know one when the when this whole process succeeds and the database has your final solution when you ask for the data you get back you know okay H your submission was accepted rejected whatever run code error so yeah long polling is another way to do real time stuff should you do it probably not depends in this case Works does it work in case of you know a place where so much data is incoming so fast no for an exchange does not matter does not work for a lead code like solution might just work in fact it does work they're such a big company they still do this cool that's how that's what are websockets um next up is just a bunch of implementation how do you create a websocket server how do you scale a websocket server um so writing a bunch of code fairly straightforward code um let me do a quick P are we good to go guys all right so websockets in nodejs let's understand how can you create a server a websocket server how can you send data from a server to a client client to a server there are many implementations of websocket in node js just just like you can create you know um if you want to create an HTTP server in nodejs you can use express or if you remember you can use I don't know if I introduced Kaa but that's another HTTP framework or you can introduce uh use you know hono these are all implementations of the HTTP protocol these are all implementations of the HTTP protocols these are code that a bunch of people have written and then you just use their code and create your HTTP server you don't worry about how the HTTP protocol actually works these developers have worried about 
that you just use their code similarly websockets is also a standard you can read about how should a websocket server be on you know [Music] ietf this RFC and you know it'll tell you the websocket protocol does this and then you know you can read about how a websocket server should be built when you're in a Trading Company If you ever join Jan Street if you know the what's his name called the YouTuber called jch he used to work at I don't know some Trading Company he built like he literally was his task was can go to this read about the protocol and implement it from scratch and see a lot of times companies that really care about latency of systems talking to each other will build a websocket server from scratch in you know some low-level language but does it matter for us no we will highly depend on you know someone else who has written all of this code in JavaScript cc++ doesn't matter um just because you're using a library like this that says you know that's on npm doesn't mean it's written in JavaScript it actually can be written in C++ as well and have an API on top that's in JavaScript so I I don't know how this websocket client is written but it could have just been written in C or C++ either way these are three popular implementations the first two are pure websocket implementations the third one called socket.io is something you might have heard of might have seen this is the first beginner friendly way to you know do full duplex communication for backends to talk to front ends and front ends to talk to backends in a persistent way you probably don't want to use socket IO there are a few problems with it the biggest one being it's a even though it's written on top of websockets and you can sort of convert a socket iio server into a websocket server it's harder to support multiple platforms if you create if you create a socket IO server your website will be able to talk to it but if you're creating an exchange you also have to create an app what if you create an app that's in Android then you need a socket IO implementation in Android there is one that exists but what if you want to create you know a rust server that needs to talk to your socket IO server then you need socket IO implementation in Rust and all of these exist like there is a socket IO Android client there is a socket iio rust client but you know these are not maintained as much websockets is a very Global protocol so websocket clients generally exist very frequently there's a Android websocket connection websites to support uh websockets by default so you know in the browser fetch just like there's a browser fetch API if I right click click on inspect just how fetch just exists as an API to send a backend request websocket also exists as a construct on the client that you can just use though browsers to support them from scratch rust also has a websocket client implementation though even though you can use socket IO probably shouldn't a lot of companies started here because it's very easy to use it uh it gives you some constructs like rooms that let you do you know very simple that let you send data to clients very easily but you should avoid it um and you know just stick with the core websocket protocol now let's get to some coding um fairly straightforward code if you've written an Express server this should be very easy to write um let's write websockets in nodejs but you know the protocol remains very similar you can write a noj server sorry a websocket server in Rust in goang it doesn't matter everywhere it's 
you know the protocol core Remains the Same there is a persistent connection between a client and a server if you want to feel free to follow along I'm going to write this code even though it's fairly straightforward code uh but yeah let's write it from scratch um the first thing you have to do is create an empty folder in your laptop so let me do that let's call it week 19 websockets and then let's initialize an empty nodejs repository here npm init dasy to create an empty package.json I repeat create the repository sorry create the folder go into the folder and then do a npm init Dy to initialize a package.json then initializer TS config.js by running npx TSC Dash Das init now open it in Visual Studio code all the steps are here I'm assuming it's pretty straightforward by now so not going to wait a lot but I'll still do a poll week- 19 all right I have the project open here in Visual Studio code if you look at the files simple package package.json simple TS config.js create a new folder call it Source create a new file call it index.ts I will take a pause here and you know do a poll U feel free to take your time if you are lagging behind I know a lot of people like to code as well can I proceed guys can I proceed yes or no and select no if you want to wait me to wait for 5 minutes can I proceed no okay that is fine I will wait no rush let's take a you know small mini 5 minute break let me answer a few questions in Q&A until then all the steps are here try to create a websocket server by our own can't we use HTTP pipelining for making HTTP persistent first time I'm hearing this HTTP pipelining and the small answer is probably no U multiple HTTP request to send oh okay I have heard of this it lets you send multiple HTTP requests with response does it let the server respond back without you sending a client request maybe it does um there are a lot of sort of ways for a front end to talk to a back end and exchange data two more protocols that you know happy to talk about here are quick qic and web RTC if you use use both of them then also a client can talk to you know another client Cent or another server persistently there are various ways for you know you to do two-way communication between two clients um but yeah web soet is the most popular one for real-time communication won't the server get overloaded with so many multiple connections it will so more horizontal scaling in case of Ares that is correct we will get to that in the last slide is that why lead code is ditching for cost savings I think it's just okay it's just complex to maintain a web stock why do you want to create a fresh server and then you know make your user do two connections one an HTTP connection every time and then another lead code server I think just to keep things easy they're doing long P I think money saving is something in their head um red is not open source going to ignore the yeah will you cover rust as well in web3 cohort something I'm thinking it'll be very long if I do uh but we won't be able to do Solana contracts very well if you don't know us we'll see um what is polling long versus short polling polling is what lead code is doing every second bro do you have something no I don't B do you have something I don't I have something I don't long polling is okay bro whenever you have something respond back take your time I know you have something uh your you know submission will eventually succeed whenever you get the response maybe it's in 10 seconds respond back to me that is long polling okay by I'll 
wait I'll wait I'll wait keep keep my connection open when you have the response send it back to me what is polling okay by do you have the response yes or no no I don't do you have the response yes or no no I don't that is polling what is long polling bro whenever you have the response send it back by this HTTP L reshare screen okay we'll just take a proper break only then um and you know 7 more minutes we'll start at 7:50 ishan Sharma just ping uh rookie we need t for web3 cohort it'll be a lot of questions because you not a lot of content on it um please explain why not to use socket IO yes the reason I said was okay if you created a socket IO server very good you created a website very easily you can you know connect from this website to socket IO but now you have an Android app now you have to include a new dependency in your code code Base called you know io. soet iio soet IO client 2.0.0 every platform thankfully so socket iio has good support it's not like it will not work in you know a Java program or a r program they have created clients but it's not the core protocol that ietf created ietf Created websocket every language has a websocket support Android may you can simply I don't know you probably have to have a dependency here but you know website may you simply have websocket as a construct even in Android a lot of support for websocket clients what is a client something that can connect to a websocket server socket IO all clients are Community made okay by socket IO came to we will have to create a client in Rust in go so on and so forth so in C so for that reason you you will have a lot of dependencies that's the only reason um yeah do it next year we will eventually like today polling causes a lot of database calls it does you are pulling it not a lot like you you can handle this scale how many people are coming to lead code at a certain point 20,000 how much how much are they pulling once every 2 seconds your database can handle 5,000 requests a second what if adding to the Q and Q is down does not accept it exactly L that is a beautiful question the question ISAT in fact if you go to this video of mine arat Singh road map full stack somewhere I say if your Q is down you are screwed this one go through this video pretty much covers what we're covering today complete full stack road map is be I abuse a little bit when I say if you're if you're Q is down your screw Q you know if this is your server and you know your end user is like okay bro I want to send my friend 200 rupees and the queue where you are storing you know all the events is it itself down then what will you do you can do a few things either you can revert the transaction can n are sorry you will not get 200 rupees our notification system is down probably shouldn't do it if a notification gets missed it's fine what you can do is you know send data to another backup storage for now whenever you know the queue is down send the data to a backup storage and then whenever the que comes back up seed the data from the backup storage what if the backup is down yes what if the server is down of course if things are down your system your people will see downtime the question is what do you consider as you know an important event if you consider push notifications as an important event then even if the database call succeeds you will revert the database call and tell the user sorry our systems are down right now please don't we won't accept your request um what if back in one goes down then your website is down if if back 
end one goes down then your end users cannot communicate to your system on PTM if you send someone money it'll say request failed it'll show you an alert your primary back end is down your primary back end is down doesn't happen very often when it happens your systems are down you can't do anything if your auxiliary services are down then you're still fine if your auxiliary services this is down this is down this is down it's still fine okay you know main backend server is up database calls are happening if it is a PTM app application the person's balances are being updated they are not getting notified it's fine but you know the primary thing is happening but if this itself is down you're not you can't send other people money as simple as that all right I think we have two more minutes backpack websocket layer is rust simple rust websockets nothing new where do you implement Q in the backend itself or another server it is you know a q is just a server that is running rabbit mq there is a server like let's say whatever an ec2 server that is running either rabbit mq or running redis whatever is your que implementation it's just another machine it's nothing special it's a machine that is running a protocol or you know a project that supports cues that lets people push to a queue that lets people pop from a que having error while doing npx ts-- init just copy a random TS config from a project here go to 100x devs code to open a random project copy over the TS config from there it just creates a TS config so nothing new goang is not in web3 we would have done goang if you were doing Cosmos we're not doing Cosmos most probably so one more minute guys then we'll start since cloudfare workers don't use node as the runtime then how do you use websockets there I'm sure there's a websocket implementation there web oh are you asking about websocket clients are you talking about websocket servers you cannot create cloudflare workers expose HTTP end points they don't let you do serverless websockets in fact serverless websockets are very hard so if this is the you know your Cloud flare Fleet it only exposes things by an HTTP layer if your question is K can my this thing Cloud flare worker create a persistent connection to another websocket server that's a normal websocket server probably can it even though it's not in you know I'm sure the cloud flare runtime you know supports websocket clients but this is serverless you probably never want a serverless server to have a websocket connection what is the point of a persistent connection this goes up and down very quickly so yeah serverless is a big no no for websockets all right guys 750 let's proceed um we will be understanding a web stocket implementation in node just just writing some code you can literally copy this code and ask chat GPT can you give me the same code in Rust it will can you give me same code in websocket sorry goang it will so you know the code pretty much Remains the Same the the main goal is the idea how do you create realtime connections how do you scale realtime connections does not matter what stack your websocket connection is written in let's kick things off off by writing it in nodejs hopefully by now few of you have followed these steps um let me quickly follow them as well I'll do a poll after I finish as well I created a TS config.js I'm going to change the root D to be/ SRC I repeat the root D to be source so that all my source code is picked up from there out D to be do/ dist by the final output should be here then I 
will add the ws library as a dependency. As I said, there are various libraries: just like there is Express, there is Koa, there is, whatever it's called, Hono, similarly there are various websocket implementations, and we are using this one in the middle right here, the ws package, so I will add that as a dependency. And then here is some code — we can rewrite it, we can copy it and go through it, it doesn't matter either way, it's less than 8–10 lines of code. One thing that you should know about creating websocket connections: whenever you're creating a websocket server, you are actually creating an HTTP server. If you go through the websocket protocol and how it is built, the first connection the browser makes, the first request that the browser sends, is actually an HTTP request; it gets upgraded to a websocket connection on the server, but the first request that goes out is an HTTP request. Which is why, whenever you are creating a websocket server, there is actually an HTTP server running under the hood, still exposed on a certain port, which in this case is 8080. So you are actually creating an HTTP server, and whenever that server gets a websocket request, it upgrades the request to a websocket, full-duplex connection. So I have two implementations of a websocket server: one using the http library, which is a library we haven't ever used — this is the native HTTP library in Node.js — and then there is Express, which we have used many times, so feel free to choose which code you would like to write. In the end there will be an HTTP server that will be handling your requests. The very first thing I do here is create an HTTP server by calling http.createServer and saying any time a request comes, just send them back "hi there" if it is an HTTP request — very similar to me putting an app.get('*') and sending the user some response. If you don't understand this, just go through the Express code; this is how you would write HTTP servers natively in Node.js. What does natively mean? It means without depending on any external library. You're saying we don't need Express? Yes, you don't really need Express, http is good enough. Then why do we use Express? Express gives you very clean routing, it gives you a very easy way to create middlewares, it has a very nice ecosystem of middlewares that you can use — that is why you use Express, but you can still create an HTTP server natively in Node.js. What does natively mean again? It means you don't have to do npm install http, it comes bundled in Node.js itself. Create the server, and then we will use the websocket library, the ws library, over here: const wss = new WebSocketServer(...), and give it the HTTP server as an argument. You can also do this without passing the HTTP server — you can do noServer: true; you still need an HTTP server underneath, but a lot of times you want multiple websocket servers: whenever a user connects to, say, api.binance.com/ws/one-thing they go to one websocket server, and then if they connect to, whatever, I don't know, /ws/user-data, they connect to another websocket server. You can read through the docs of the ws library — they have a bunch of examples on their website, let me quickly show you that — which let you do very fine-grained configuration when creating a websocket server. As you can see, it lets you do things related to performance, how much you should allow the user to send, sending and receiving data, for example sending binary data over the wire, so on and so forth; they have a bunch of examples. We are choosing a very simple example: the client can send some data, the server can respond back with some data. So: create an HTTP server, create a websocket server instance with new WebSocketServer, and then — this might seem a little jarring, so let's come back to it — lastly, server.listen. You can remove all the websocket logic, comment it all out, and this will just be a simple HTTP server handling requests; when you add the websocket logic is when it becomes a websocket server. This websocket logic will remain the same irrespective of whether you are using Express or http; what changes is which HTTP server you are using, which right now is the native HTTP server. Now let's come to the most jarring part of this specific library, because it uses callbacks — I hope you are comfortable with callbacks by now; if not, let's go through this code. It says websocketServer.on('connection'), which is pretty reasonable, right: any time there is a connection — if this is a client over here and this is a server, any time someone initiates a connection — control will reach this specific function, and you will get access to something you can call whatever you want, but it is a websocket instance, a socket instance. And on this socket instance you can say: any time there is a message... ws.on, not wss.on — or, to make it even cleaner, let's call this socket: socket.on('error'), any time there's an error, just log the error (if this feels jarring, it is similar to taking the error and doing console.log(error)). And any time there is a message, then to every client that is connected — there might be multiple clients, there might be this guy, there might be a second guy, so on and so forth — to every client that is currently connected on the websocket server, wss.clients, send that data.
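Putting those pieces together, here is a minimal sketch of the server being described — my reconstruction of the code on screen, not a verbatim copy, assuming the ws package (npm i ws, npm i -D @types/ws @types/node) and Node's built-in http module:

```ts
// index.ts — minimal sketch of the native http + ws server described above.
import http from "http";
import { WebSocketServer, WebSocket } from "ws";

// Plain HTTP server: any normal HTTP request just gets "hi there" back.
const server = http.createServer((req, res) => {
  res.end("hi there");
});

// Attach a websocket server to the same HTTP server. The browser's first
// request is an HTTP request that gets upgraded to a persistent,
// full-duplex websocket connection.
const wss = new WebSocketServer({ server });

wss.on("connection", (socket) => {
  // Register handlers for this one client.
  socket.on("error", console.error);

  socket.on("message", (data, isBinary) => {
    // Broadcast whatever one client sent to every connected client.
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) {
        client.send(data, { binary: isBinary });
      }
    });
  });

  // Server-initiated push: send a message as soon as the client connects.
  socket.send("Hello! Message from server");
});

server.listen(8080);
```

Passing { server } means the plain HTTP route and the websocket upgrade share the same port, 8080, which is exactly what the lecture's setup does.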
For every client, if the socket connection to them is open — because a lot of times it takes time for the socket connection to open, or they might drop off, and then this would throw an error, so you should make sure the connection is actually open — send them the data. If the incoming data was binary, send them binary data; if the incoming data was ASCII, or you know, simple UTF-8 text, send them the text. If this feels jarring, remove the isBinary part as well; we are anyway not going to use any binary data today. So this might feel a little weird, too many things have happened — also, this should be socket — but these lines are just registering a few events: this one is "if an error happens, do this", this one is "if a message comes, do this", and this one is "as soon as the user connects, send them a hello", tell them hello, you have connected to the server — that is a simple send. The other two are event registrations: we have registered that whenever there's an error on this socket, log the error, and whenever there is an incoming message, for every client that is currently connected, send them the same data that was sent by the end user. Very similar to a chat application, right? If you are in a chat application and one user sends a message, you broadcast it to everyone: you send them "this person said hi", you send them hi, send them hi, send them hi. In a multiplayer game, if I shoot you or if I move one step ahead — this is like another company I used to work at, that was a real-time game, very similar to this, where people can come talk to each other and move around — if I move, everyone in the room needs to know I have moved to, you know, (x, y), which is why I send a message here and it gets broadcast to everyone. It probably does not need to come back to me, but even if it does it's fine; it needs to be broadcast to everyone. That is exactly what our server is doing: whenever there is a message, broadcast it to every client, because this is the most common use case. You might have some logic here, you know, if the client's
room equal to harat room only then forward them the message or something like that but you know let's say everyone is part of the same room in you know a game like this everyone is part of the same room if I move a bit everyone's machine needs to see harat has moved to X comma Y and we can you know ignore this part I think that was a decent explanation but if not we will do a poll let's quickly run it and see it in action let's see if whenever you create a connection do I get back a message from the server and whenever you send a message does it broadcast the message to everyone that is currently connected let's see how do you create a connection to This Server I have written some server code I have written this thing over here how can I write the client code how can I make this client connect to this server first let's run the the server TS c-b to run the server compile the typescript code into JavaScript code node dis index.js to run the server this will start your HTTP server on Port 880 or you know whatever websocket server whatever you want to call it given that this server is running how can I connect to it you need clients either a browser or something like Postman Maybe Postman let if it lets you send HTTP requests maybe it lets you send websocket connections as well or a Mobile app all of these can be various clients to an HTTP server let's use something called postwoman which was an open source project created someone created a few years ago when I was in college eventually got acquired renamed I don't know but it's called hopscotch.in it looks very similar to postman it lets you send HTTP requests it also has a realtime layer I think someone told me Postman also supports this now I could be wrong but this definitely supports this I've used it many times so what you can do is go to hop scotch.io h o let me just post paste this in chat and go to the real time section what is the real time section y say you can send HTTP requests from here you can create web socket connections so let me try to create a web socket connection to Local Host colon 8080 the way to do that is click on connect let me inspect go to the network Tab and let's try to click on connect see what happens I'll wait for 15 seconds what have I done I followed all the steps here to create an HTTP server then I have run the HTTP server locally by first compiling the code and then running the code it is running on Port 880 if I go to Local Host colon 80 I do see high there so if I send an HTTP request Then This Server does respond back with high there if I create a web socket connection let's see what happens again if I go to hopscotch.in URL let me expand the network URL click on connect it's says connected over here and as you can see there is a web socket Connection in the network logs very similar to the one I showed you on the backpack exchange or on binance everywhere whenever you create a websocket connection you can track it in the developer tools thankfully you know Chrome supports this now and if you go to messages you will see Hello message from the server where does this come from the server sent me a message it comes from this line right here what was this line it said whenever a connection is created send back to the user Hello message from the server if I put a log here user connected and then you know maybe maintain a user count let user count equal to zero connected user count Plus+ user count I can see as users connect and you know I can log that in my terminal if I restart it and you know try to 
connect one more time now I see user connected one let me open another Tab and connect from here now I see user connected two let me send a message from here hello from client 2 send I get back some object there's a bug somewhere we can fix it very soon but I do get back a message from the server I also get back a message in my other where did it go oh there it is in my other one as well it says object blob because I removed all the is binary logic let me add that back comma [Music] binary data is binary was this the logic let's look at the slides binary is binary all right binary is binary all you can just I'm sure default it to false and it will still work because we're not sending binary data we're sending textual data let me restart I just reverted the code to what it was here and then if I connect from here and then connect from here it says two clients are connected because one is connected here in browser one another is connected here in browser Tab 2 if I send a message from the first one hi from client one send I get it back and then the other user also gets it back we have created a realtime chat this guy can send high from client to it comes back to me I also send it I also get it back other guy also get it this is how you can do two-way communication I am sending messages to the server server is responding back to me server is also sending the same message to another client very common way to build a multiplayer game a multiplayer chat system so on and so forth is this clear enough because if this is then we'll be moving to something more interesting are we good to go guys do we understand this oh not a lot okay we will take a pause let me answer question I'm sure there's something obvious I'm missing what's up I'm looking at Q&A not working not working not working not working not working in Express not able to do this in Express export is not working recap all right let's first try this in Express let's go back to this thing right here let's copy the code I did not test this oh no I did not test this but I think it should work let's see let's copy the express code let's paste it here let's run a npm install Express and add types SL Express let me run TSC DB to compile the code let me run node dis / index.js to run the code it's a long running process it is running on port 8080 it might not work I did not test this there's a good chance it doesn't work if I connect it does say connected for me if I go to the other one connect it does say connected for me I get back a Hello message from server let me send oh it says disconnected as soon as I send a message an exception comes web socket is not defined let's see there you go it does not work because there is a bug what is a bug web socket is not defined can you import it yes s we can should it work now probably should this was never defined which is why you know you did not sort of this this line sort of crashed I just imported it from WS I think this is where it comes from let's try that one more time connect connect hi from client one I get back high from client one other person also gets back high from client one so what was the bug when I was copying code around I forgot to import this from the Ws Library other than that does everything remain the same I import Express from Express I create an HTTP server forget all the other logic I have written just focus on these four lines app.get slash rest. 
A very simple Express application: remove all the WebSocket logic for a second. Import express from "express", `const app = express()`, if there's a GET request on "/" send back "Hello World", and then `app.listen(8080)`; don't even worry about assigning it to a variable yet. Simple stuff, we've done this many times. If I start this, what happens? If I go to localhost:8080 I get back "Hello World"; if I go to a random route it says "Cannot GET /asd"; back on "/" it says "Hello World". A very simple HTTP server using Express. Now what do I say? I say I will also create a WebSocket server: `const wss = new WebSocketServer({ server })`. Which server? This HTTP server over here. Given that we have this HTTP server, you write this boilerplate once. I know it can feel a little jarring, but eventually you'll see that when we build a big application, say a game, you just forward each incoming message to a request handler; your boilerplate stays this small and your request handler has the meat of your logic. If the message is "move", if this is a real-time game like the one we looked at, the client will send you something like "I moved up" or "I moved down", and your request handler has all the logic to handle it. Your index.ts file won't look this big and ugly. But since right now we're not writing any application logic, we're creating a simple chat app, and this is what we've written: whenever there is a connection, if there's an error, log the error; whenever there is a connection, control reaches here. What does "control reaches here" mean? It means your JavaScript thread reaches this callback and has access to a ws variable. Any time there's a message from that client, control reaches the message handler. You don't have to do anything complicated: just log "received a message from the client" on the server, and don't send it back to anyone. If the user sends the server a message, the server logs it, that's all. Let's also remove the greeting and make it even simpler: imagine something like LeetCode where, rather than sending the submission via HTTP, the user sends it over this message channel, in fewer than 19 lines of code. Why would LeetCode do this? Theoretically they wouldn't; they should probably send an HTTP request. But let's say you want it so that when the person clicks Submit or Run, the data comes in over a WebSocket connection. We're just logging it for now; eventually you would write it to a database and do a bunch of things. So this is very simple WebSocket logic: create a WebSocket server; whenever there is a connection, for that specific connection, whenever there is a message, log it on the server. Let me compile and run this. Try to go through and marinate this code, specifically these six lines, for about 15 seconds; then we'll try it and do another poll.
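Those "six lines" are roughly the following; this is a sketch in the same spirit, not the slide code verbatim:

```typescript
// index.ts: receive-and-log only; nothing is sent back to any client
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
  ws.on("error", console.error);

  // e.g. a LeetCode-style submission sent over the socket instead of HTTP;
  // for now we just log it, eventually you'd persist it or queue it.
  ws.on("message", (data) => {
    console.log("received: %s", data);
  });
});
```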
This doesn't really use the power of WebSockets; it's not doing server-side events, it's not pushing data from the server. You can always write that logic later, a ws.send wherever you need it, but we're not doing that right now. Simple and straightforward: receive messages from the client and log them on the server. If I run this code, put the server on the left and the client on the right, connect, then any time I send a message it gets logged over here on the server. I send messages, the server logs them. Very similar to HTTP, except I'm not sending a new HTTP request every time: I've created a single WebSocket connection. If I disconnect, reconnect, and go to the Network tab, there's a single WebSocket connection, and as I send a few messages they all go over that same connection. I'm not doing a fresh handshake every time, not sending multiple requests; it's a single connection over which I keep sending data again and again. And what is the server doing with this data? Right now it's just logging it. Can it do more? Yes: it can respond back to the user, it can send messages to the other WebSocket clients that are connected. It's not doing any of that right now, but do we understand WebSockets and the implementation in Node.js? 85%, 83%... good, not great. As always, 15 to 20 more minutes of class and then questions at the end, so let's try to wrap up; there isn't much left after this, I think. Give me some time to wrap this up and then I'll take questions; and if you go through the recordings you'll be fine eventually. Let's go through the client-side code next, questions from 8:40 onwards, and I think we'll be done in the next 20 minutes or so. Cool, let's proceed to the client-side code. Given you have a server, as I said, it can be connected to from a browser (a React app or a Next app), from Postman, from the thing we just connected from, Hoppscotch, or from a mobile app. The question is: how does a React application connect to the server? How can I create a game very similar to gather.town,
or a chat application very similar to any chat application you might have seen in the past? The answer: when you created an HTTP server, how did your browser talk to it, how did your React code or plain HTML talk to it? "Harkirat, I used the fetch API." The browser provides the fetch API (or you use a library like axios) for HTTP connections. Similarly, if you have a WebSocket server, you can use the WebSocket API. If you search for WebSocket on the browser API docs, it already exists and gives you very nice constructs to connect to a WebSocket server. Let's see how that happens in a React application. For a React application, let's quickly initialize one. First make sure you have the Node.js / Express server running locally, preferably with the same code as over here; if you're using the Express version, make sure you import WebSocket from the ws library, or use the plain version. Copy the code, paste it, restart the WebSocket layer, and create a React project, maybe right next to it. Right next to it is fine; it doesn't really matter where you create it. The reason I don't want to create it somewhere separate is that if you close this window, that one closes too. You can choose your own folder management; I'm just going to create it inside the backend project, which is something you shouldn't really do: you should have a root folder with the backend code and the frontend code, monorepo style. But anyway. Let's start the WebSocket server in a separate terminal so we can open the React project here. I'm already in this week's ws folder, so I'll start the WebSocket server here and let it run: backend is running. Now the frontend: create a fresh React project. `npx create-v...` no, I could be wrong, it's something else. `npm create vite@latest`, my bad. `npm create vite@latest` will create a fresh React project; call it react-ws (I already have that, so let me name it react-ws-week-19), select React as the framework and TypeScript as the language. Go into the folder and open it in Visual Studio Code. I would urge you to do the same: create a fresh React project, open it in VS Code, then run `npm install` to make sure all the dependencies are installed and you don't see the red squigglies that I see over here. We'll wait for two minutes before I proceed. Alright, dependencies installed for me, but I'll take a break and look at the Q&A while you initialize the React project. "How do you persist chat data in the DB, for example persisting group chats?" How do you think? Wherever you have the request handler, whenever you're getting the data, just do a Prisma call there. You can optimize it: you probably don't want a DB call for every message;
you'd rather send it to a queue and have the database be eventually populated, because a lot of people will send chat messages and you don't want a flood of DB calls; you probably want to batch them. You can do a bunch of optimizations, but the simplest approach is a db.store right there in the handler. "How are we differentiating every user?" You don't care about the platform they connect from: every client has a client ID, a way to identify them. They might send a first message that shares their cookie or their token so you can identify them on the server, and from then on, for every other message, you know who it is. We will not be covering NestJS. "How does Hoppscotch know to subscribe to messages?" I think you're coming from the socket.io world. With raw WebSockets there's only one kind of event: you send a message, you get back a message. You can't define named events like event one and event two the way socket.io lets you; I think that's why you're confused. The callback says "any message that comes in lands here", and whenever you send, you send a message. There are no different typed channels, unlike socket.io where you can do socket.on("a"), socket.on("b"). "How can I send a message to a specific user, like WhatsApp?" You iterate over all the clients, find that specific client, and do a client.send for that user. This will come up in some project eventually; when we build the Zerodha-like thing you'll get this answered there. Alright, cool, can I proceed? Do you have the backend running locally, do you have a simple React project started locally, are we good to go? Okay. So whenever you have a React project, what do we do? Go to App.tsx and remove everything; we don't want any random logic here. Normally you'd put routing in, react-router-dom, all that jazz; we're not doing any of that today. We'll put everything in App.tsx.
Ideally you should have a Chat component if you're creating a chat application, and that's where you'd create the socket connection; I'm just putting everything in App.tsx. Create a state variable called socket: `const [socket, setSocket] = useState(null)`. It's null initially. Why null? We'll see very soon. You can also say: if there's no socket, return a loading indicator, so that while the socket connection is being created the user sees a loader. Then create an effect at the top, a useEffect that runs whenever the component mounts, i.e. the first time the application mounts, to create the actual WebSocket connection. `const socket = new WebSocket(...)`; the AI autocomplete is helping me out a little too much here, but let me type it out and go over it. Where does WebSocket come from? You didn't import it; it's natively present in the browser APIs, just like you never have to import fetch when sending a fetch request. "Harkirat, I had to import axios." Yes, axios is an external library you import, but fetch you don't, and WebSocket you don't. You create the WebSocket connection, and then socket.onopen: when the connection completes, you have an open connection. We're just logging "connected", but ideally this is where you should do setSocket(socket), set the state variable here, so the loader goes away only once you're actually connected. If you call setSocket outside the onopen handler, the user would see the final page even while the socket isn't open yet; that shouldn't happen. So you can show something like "Connecting to socket server..." until then, and when the connection opens you actually set the socket variable. You'll see a type error here: "Argument of type 'WebSocket' is not assignable to parameter of type 'SetStateAction<null>'". It's basically saying: you told me the type of this state is null, how are you trying to set a WebSocket instance on it? The answer is that when you create the state you pass the possible types as a generic: it can be null or it can be a WebSocket. That's how the red squiggly goes away; whenever you initialize useState you can pass the types it can hold. It's initialized to null but eventually gets set to the socket when the connection opens. You log "connected", and any time you receive a message from the server you log "message received". "Harkirat, you've written the logic for when the server responds; how can I show that thing?" You can pull it into a state variable: `const [messages, setMessages] = useState([])`, and whenever you receive a message do `setMessages(m => [...m, message.data])`, which means append to the messages array. If you don't understand that, you can also just do `const [latestMessage, setLatestMessage] = useState("")`, and any time a message comes from the server, `setLatestMessage(message.data)`:
whatever data came back, I set it in this state variable, and I render that state variable in the JSX as well; latestMessage can be rendered over here. Initially it will be empty, but it updates as the server responds with more messages. When does this run, when does control reach the onmessage handler? When the server sends you a message, very similar to what we saw on Hoppscotch: when we were there we saw a bunch of messages appear. The server is down, so `node dist/index.js` to restart it; if I connect, I get "Hello message from the server". Whenever I receive these messages from the server, control reaches the onmessage handler on the client. Hoppscotch is also a client, and what does it do when it receives a message? It appends it to its list. What does our browser application do? Whenever we receive a message we set it into a state variable, which eventually gets rendered. "Harkirat, you've written the logic to receive messages; how can I send messages?" You have access to the socket variable, so you can add an input box and a button that says Send, and onClick do socket.send with whatever message you want, whatever is in the input, or just "hello world". That is how you send data from the front end, and how you receive data on the front end. So, looking at the whole code one more time from the top: I initialize a socket state variable, with some extra types to avoid red squigglies (if you're using JavaScript you don't need them, and you can always use the any type if you don't understand this). It's initialized to null, and while it's null the browser shows "Connecting to socket server...", so the user sees a loader: we're still connecting, wait. Then we have a useEffect that says: any time the component mounts, create a fresh WebSocket connection, and only when it opens do you actually set the variable. Okay, the socket connection has opened, now we can get rid of the loader and show the user the input box and button so they can start sending messages. If you don't do this, the user might try to send messages before the connection is open and it will throw an exception: "I haven't connected yet, the connection isn't open." So we make sure we only set the socket variable once the connection is open (and I can remove this stray line). Then whenever we receive a message, we currently just log it and set it into a state variable, which gets rendered, so as the server responds my user sees the latest message. Let's see it in action: `npm run dev`. If I open it, it shows me "hello message from the server". It didn't show me the loader because the socket connection was created immediately; if I added some artificial delay (setTimeout... well, you can't really wait there, but you get the idea), if onopen took a really long time, you would see the loader. Since the WebSocket server is on my local machine it happens so quickly that you never see it. I can send messages by clicking Send, and whatever I send, the server responds back, which is why I see it over here. Where is that logic? Look at the server code: we said whenever
there is a message, client.send the data back; send the same data back to the user. And what is the user sending? Currently just "hello world". How should you update the application? Add a third state variable, `const [message, setMessage] = useState("")`, then on the input do `onChange={(e) => setMessage(e.target.value)}`, and make socket.send send `message`, whatever is in the input box. The server will respond back with the same thing and you'll see it in the state variable. So if I save this and refresh... my server died, let me restart it. Refresh: I see one client has connected. Let me open this in another tab: two clients connected. Let me send "hi there from client 1"; I get it back here, and the other person also gets "hi there from client 1". The message is being broadcast to everyone. Have I created a sort of chatting system? Yes: send a message, it comes back here and it comes back there as well. That is how you write the client-side logic in React; you can write it very similarly in a plain HTML/CSS app, and in Next.js as well. The one extra thing you should do is clean up in the effect: return a function that does socket.close(), because when you're running React in strict mode the effect runs twice, and what if the component gets unmounted, what if the user navigates to a different, non-chat page? The WebSocket connection should close then, so that's why you should do this cleanup as well. But that's pretty much it. I'll pause here: do we understand how to add it in React? Because if you understand this, adding it in Next.js is a piece of cake.
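To recap, the whole component roughly looks like this; a sketch of the App.tsx described above, assuming the WebSocket server from earlier is running on localhost:8080:

```typescript
// App.tsx: connect on mount, show a loader until open, render the latest message
import { useEffect, useState } from "react";

function App() {
  // Typed as WebSocket | null so setting the instance later doesn't error.
  const [socket, setSocket] = useState<WebSocket | null>(null);
  const [latestMessage, setLatestMessage] = useState("");
  const [message, setMessage] = useState("");

  useEffect(() => {
    const ws = new WebSocket("ws://localhost:8080");

    ws.onopen = () => {
      // Only set the state variable once the connection is actually open.
      setSocket(ws);
    };

    ws.onmessage = (event) => {
      setLatestMessage(event.data);
    };

    // Cleanup: close the connection when the component unmounts.
    return () => ws.close();
  }, []);

  if (!socket) {
    return <div>Connecting to socket server...</div>;
  }

  return (
    <div>
      <div>{latestMessage}</div>
      <input value={message} onChange={(e) => setMessage(e.target.value)} />
      <button onClick={() => socket.send(message)}>Send</button>
    </div>
  );
}

export default App;
```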
We do understand, beautiful. So, we're at 8:33; we could actually go through Next.js, but we can also answer questions, because Next.js is fairly simple. Let me take you through it really quickly. In Next.js, just create a fresh project and update page.tsx to be a client component. The only thing you have to make sure of is that if you're using WebSocket logic, it lives in a client component. Why, Harkirat? Because if you don't, then this is your Next server and this is your end user's browser: the client asks "can you please give me my landing page", and if the component that creates the WebSocket connection isn't a client component, your Next server might be the one creating the connection to the WebSocket server. We don't want that; we want the end user's browser to create the connection to the WebSocket layer. So the one thing to keep in mind, especially with client and server components, is: make it a client component wherever you're creating a WebSocket connection. I did not try this without a client component; if I had to guess, one of two things happens: either Next.js complains that WebSocket isn't supported on the server, or it creates a connection between the Next.js server and the WebSocket server. If someone has tried it, let me know what happens. Other than that, everything is exactly the same: your page.tsx has logic that looks like the App.tsx above. One more callout: you can convert this into a custom hook called useSocket, something like a `function useSocket()`, to make your logic slightly cleaner. The only question there is how you set the state variable; you can basically move the useEffect and the socket initialization into the hook, have it return the socket, and then in the component simply do `const socket = useSocket()`, and add an effect that does socket.onmessage and whatever you want. But I'll pause here.
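A minimal sketch of that custom hook; the hook name and URL parameter are illustrative, assuming the same browser WebSocket API:

```typescript
// useSocket.ts: own the connection lifecycle, hand the open socket to the caller
import { useEffect, useState } from "react";

export function useSocket(url: string) {
  const [socket, setSocket] = useState<WebSocket | null>(null);

  useEffect(() => {
    const ws = new WebSocket(url);
    ws.onopen = () => setSocket(ws); // only expose it once it's open
    return () => ws.close();         // cleanup on unmount
  }, [url]);

  return socket;
}

// Usage in a component:
//   const socket = useSocket("ws://localhost:8080");
//   useEffect(() => {
//     if (!socket) return;
//     socket.onmessage = (e) => console.log(e.data);
//   }, [socket]);
```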
The only thing left after this is scaling WebSocket servers, and it's fairly straightforward; whatever you'd conclude from first principles is basically the right way to do it. This is how it was done at Gather Town, at Backpack when we had chat, and at the Backpack exchange: you scale them horizontally. The thing about WebSockets is this: if you have a million users, how many HTTP servers do you need to support them? Not a lot. Why? Because not all one million of them send requests at the same time; maybe 5,000 are sending requests concurrently. If there are a million users currently active on leetcode.com, not everyone is sending a request; I sent one initially and now I'm just scrolling the page, and I only send more when I change pages. So scaling HTTP servers is much easier. But if you have a million users on a real-time application over WebSockets, your server fleet is persistently connected to a million clients. A single server won't handle that. There are benchmarks for how much a socket.io server or a Node.js WebSocket server can support, and it's roughly 10 to 20 thousand connections; you can't go much beyond that, especially on Node.js where you're most probably on a single core. With Rust on a 32-core machine you might support more, but a single machine running Node can only hold around 10 to 20 thousand users, so for a million users you need somewhere around 50 to 100 servers. That's how you scale WebSocket servers. The problem comes when sessions are sticky. Say the first 10,000 users are connected to WS1, the second 10,000 to WS2, the third to WS3, and one user sends a chat message, "hi there", but a peer in the same room is connected to a different server. How does the message travel from this WebSocket server to that one and back to that user? What are sticky connections? There might be 50 people in the same room, and sticky connections mean that even if one is in India, one in the US and one in the UK, if they're in the same room they all connect to the same server. You can go to gather.town and create a room for your team or your company, and all those people are part of the same game; I hope you understand what a room is. It's like a chat room, a place where messages need to reach everyone in it. For example, this Zoom room has seven people (one of whom is, aww, so cute). All seven can be connected to the same WebSocket server, and believe it or not, that is how Gather did it while I was there: whenever a new room was created, all its users were connected to the same WebSocket server. It was known that a single WebSocket server couldn't support more than about 200 users, so they made the connections sticky: doesn't matter if you're from India, the US or the UK, you all connect to, say, the server in the UK. This is one way to scale WebSocket servers, by sharding the users: based on your room you connect to the same server, and you don't horizontally scale within a room because that would complicate things. It's one "ugly" way to scale: people in the same room connect to the same server. What's a better way? It doesn't matter what room you're in: if you're in the UK you connect to the UK server, if you're in the US you connect to the US server, if you're in India you connect to the India server, even if you're all in the same room. This is a better way to scale. What's the problem with it? If I move on the Gather Town page, that event has to go from me in India, to my server in India, to a server in the US, to my peer in the same room who's connected in the US; it has to hop from one WebSocket server to another, which is why it's harder to build. But it's better, because I'm close to my India server, the US person is close to their US server, the UK person is close to theirs, and you can scale beyond 10,000 users. With sticky connections, where everyone in a room is on the same server, a room is capped at roughly 10,000 people; if you want a very big application with a million users in the same room, the non-sticky architecture works better. How would it work? There are multiple ways; the easiest is a pub/sub. Whenever a person from India connects and says "I'm part of room 1", the India WebSocket server tells the pub/sub (publisher-subscriber, which we'll probably discuss today): "I'm subscribed to room 1; if someone sends a message on room 1, tell me so I can forward it to Harkirat in India." Then say Alex connects from the US (the most American name I could think of) and says "I'm also in room 1." Now any time Alex sends a message to the second WebSocket server, that server publishes it to the pub/sub, and the pub/sub asks: who else needs messages from room 1? The India server does, so Alex's "hi" reaches the India server, which sends it back to Harkirat. That is how you scale horizontally, the better way, without sticky sessions.
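To make the pub/sub fan-out concrete, here's a rough sketch of what each WebSocket server might do, using Redis pub/sub (which we'll set up in the next class). The channel names and the { type, roomId, text } message shape are assumptions for illustration:

```typescript
// ws-server.ts: each server subscribes to rooms its local clients joined,
// and publishes messages from its own clients to the shared pub/sub.
import { createClient } from "redis";
import { WebSocketServer, WebSocket } from "ws";

const publisher = createClient();
const subscriber = createClient();
const wss = new WebSocketServer({ port: 8080 });

// roomId -> sockets connected to *this* server that are in that room
const localRooms = new Map<string, Set<WebSocket>>();

async function main() {
  await publisher.connect();
  await subscriber.connect();

  wss.on("connection", (ws) => {
    ws.on("message", async (data) => {
      const { type, roomId, text } = JSON.parse(data.toString());

      if (type === "join") {
        if (!localRooms.has(roomId)) {
          localRooms.set(roomId, new Set());
          // First local member of this room: start listening on its channel.
          await subscriber.subscribe(roomId, (message) => {
            localRooms.get(roomId)?.forEach((client) => client.send(message));
          });
        }
        localRooms.get(roomId)!.add(ws);
      }

      if (type === "chat") {
        // Every server with members of this room (including this one)
        // receives the publish and fans it out to its local clients.
        await publisher.publish(roomId, text);
      }
    });
  });
}

main();
```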
Both are fine approaches, though. Gather was valued at a billion dollars and still used the sticky approach; it worked for them because there was a hard restriction of about 200 people per room (beyond that it's very hard to support audio, and the app supports audio, people talk to each other), so for them it made sense. But if your application needs to grow beyond 10,000 users in a room, like Discord, then you probably need a distributed, non-sticky system. With that, I'll do a poll and then answer some questions; that was probably the class for today. Tomorrow let's do message queues and pub/subs and maybe try to create a system like this. Alright, 92% is pretty great. Let's look at the Q&A; actually let me just open chat, it's easier to track questions there. "Please share the code of the React WebSocket client." It's right here; I can push it to GitHub if you want, but just create a fresh app and replace App.tsx with this code and you're good to go. Any questions, guys? Lots coming in. "Mine is disconnecting; I'm trying to use HTTP only on Hoppscotch." Do you see any server logs? Is your port correct? How else can you debug? Look at the server logs; most probably your server is crashing. "I missed some classes; when does the Web3 cohort start?" Not right now. "Why are you dancing so much today?" I don't know, am I? "How does it work with multiple users in a group?" What does that mean, can you elaborate? "Should we learn socket.io, or is it not worth learning?" There's nothing much to learn in socket.io: if you know WebSockets you'll understand socket.io code, and vice versa, so either is fine. "How do I create rooms and one-on-one connections between two clients?" As I said, socket.io lets you create rooms very easily; raw WebSockets do not. If you look at the server code here, there's no built-in way to create rooms: you have all the connected clients in the wss.clients array, but you have to tag them yourself, maintain a global variable that stores "this client is in this room, that client is in that room", and then broadcast the messages accordingly. If you want to look at a repository, we actually did this in a YouTube live: in the real-time chat app repo, index.ts has a UserManager class. That's a little over-engineered; you can just create a global array or object that stores which user is in which room. It will be hard to understand if this is the first time you're learning it, but it has a lot of the logic you're looking for: how to create rooms, how to do one-on-one messages.
"How do I make projects with React and Node.js? How do I find ad hoc startups?" Ad hoc startups usually show up on Twitter. As for projects, I don't want to name just one, but Dub is a great one to look at; the guy behind it is pursuing it full time now, I don't know if he's hiring, but that codebase is prime. Also, a lot of the code we write here is the same kind of code written at companies that use this stack. "Your Lua config for Neovim?" I made zero changes from LazyVim, it's exactly the same. "I'm at about 20% of your coding speed, how do I get faster?" It comes from a lot of practice; write a lot of code and it comes naturally. I've never used TypeRacer except with friends; there's no hack, it just happens over time. "A WhatsApp clone?" Probably one of the best clones for learning WebSockets. "WebSockets versus WebRTC?" Depends on the application. In Gather Town people move around constantly; does it matter if a few movement events get missed? This person might move all over the place, and if a few events in the middle get dropped it's fine, as long as their eventual location is transmitted. For an application like that, WebRTC can work, because WebRTC can run over UDP (it can run over TCP too, but UDP is an option). The benefit of UDP is that it's faster; the downside is that some packets get dropped, and that's fine for a movement update or the chat bubble popping up above someone's head. So it's a choice: use WebSockets when you need TCP guarantees, when the event must reach the other side, like a chat system where you can't drop a message. Can you drop a movement message? Yes. A common pattern: while the person is moving quickly, send the data over UDP / WebRTC, and when they stop for more than a couple of seconds, send the final position over TCP / WebSockets. That's how you decide between WebRTC and WebSockets. More questions; I'm looking at the Notion doc. The rooms question is answered. "Can we spoof a message event sent to the client?" No. And on securing it: just as you put an HTTPS layer on top of HTTP on an EC2 machine on AWS, by running an nginx server that holds your certificate and serves HTTPS, you can convert an HTTP WebSocket server to a secure one; the steps are exactly the same. I haven't made that video yet. "How does a game like Counter-Strike, an FPS, work?" Great question. There is one extra thing that happens in these games; everything else is the same. If there are 100 people currently playing Counter-Strike, firstly, they have sticky sessions from what I know. Why do I know this? A few friends used to play in college and they'd say "I've made a server", so there is the concept of a dedicated, sticky server: everyone in a match is connected to the same real-time server (maybe they don't use WebSockets, maybe something else, but everyone is connected to the same place).
The extra thing that happens is that there is game logic on the server itself. The server stores state: when you send it "I'm moving up, I'm moving forward", the server maintains that Harkirat is at (2, 3); someone else does the same and it maintains that Aman is at (2, 4), Raman is at (2, 4), and so on. Why do you need to do this? Because what if a person tells you "no no, I'm at (200, 200)"? Clients can spoof events, so everything needs to be maintained on the server. And especially when you shoot someone, you send the server a message that says "I shot from position (2, 3) with this rifle", and the server calculates whether that means the other person dies or not. The server is the source of truth, because you might be on a very bad connection or a very good one: what if you said "shoot" but your opponent moved in the meantime? Who decides? Everyone needs to end up with the same state. If there's latency between two players and the shot event reaches the other client two seconds late, the target would have moved by then; one player would feel they didn't die, the other would feel they got the kill, and so on. So the extra thing is that the server is the source of truth for a game like Counter-Strike: when you shoot someone, the server decides, based on the positions it has stored, whether that person is dead, and it tells everyone "this person is dead". It doesn't just relay "shot fired from (2, 3) with a rifle" and let every client decide the outcome on its own; the logic for whether someone died or whether you picked up the bomb has to run on the server. That's what happens in real-time games: the server maintains a lot of logic, it's the source of truth for who won, who lost, who shot and who got shot, and the clients never decide that. Good question; you should read about multiplayer games, most of them are built like this.
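Here is a tiny sketch of that "server is the source of truth" idea. The positions, hit radius and message names are made up for illustration; clients would only send intents ("move", "shoot") over the socket, and the server applies them to its own state:

```typescript
// Server-side game state: the server, not the client, decides outcomes.
type Player = { id: string; x: number; y: number; alive: boolean };

const players = new Map<string, Player>();

function handleMove(id: string, x: number, y: number) {
  const p = players.get(id);
  if (!p || !p.alive) return;
  // The server can validate the move (speed limits, walls, ...) before accepting it.
  p.x = x;
  p.y = y;
}

function handleShoot(shooterId: string, targetId: string) {
  const shooter = players.get(shooterId);
  const target = players.get(targetId);
  if (!shooter || !target || !target.alive) return;

  // Hit detection uses the positions the *server* has stored, not what the
  // shooting client claims; a spoofed "I'm at (200, 200)" changes nothing here.
  const dist = Math.hypot(shooter.x - target.x, shooter.y - target.y);
  if (dist < 50) {
    target.alive = false;
    // ...then broadcast { type: "player_died", id: targetId } to everyone.
  }
}
```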
Next question: the difference between socket.send and client.send in the server code. Firstly, it doesn't matter whether you call the variable socket or client; it's written client here, but you can call it socket. Let me make it even easier to understand. If you look at the WebSocket server and ignore all the other logic for now, the ws variable is per connection. If this is your WebSocket server and this is a client, a browser that connected to you, then as soon as the connection is open, control reaches the connection callback and you have access to a ws variable, a socket variable, call it whatever you want. As that user sends you messages ("hi", "hello", "move up", "move down" if this is a Counter-Strike-style game), control reaches the message handler, and that's where you should write your logic: write to the DB that the user has moved if it's a game, or that a message has been added if it's a chat. What else should you do? Tell everyone else: "Harkirat has moved up a bit" or "Harkirat has sent a message". So you iterate over all the clients, because other people are connected too; you're not the only person connected to the game, your friend might be in the same room, another person might be too. You iterate over wss.clients, or you can maintain your own global array: `const connectedClients = []`, and whenever a connection is made, `connectedClients.push(ws)`. You can also tag users with the room they're in; the type of the array's elements can be { ws: WebSocket, room: string }, so as people connect you store their socket connection as well as the room they're in, for example "counter-strike room 1". Then when someone sends you a message, "I want to send this data to this room", you get all the users in that room: filter connectedClients by room, and for each matching client do client.ws.send("Harkirat has moved") or whatever, only if client.room equals the sender's room, something like that. So client.send, or more specifically ws.send, just means "send this user back some data"; whenever someone sends you data, you broadcast it to everyone that needs it. That's what this logic does. Where does connectedClients get populated from? From the global array created in the connection handler. I've given it a type, which might make it look difficult; you can give it any, it doesn't matter, it's a very simple array. Whenever someone connects, you add them to a specific room based on whatever logic you like.
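A sketch of that global-array approach, keeping the { type, room, text } message shape as an assumption for illustration:

```typescript
// Tracking rooms with a plain global array and broadcasting per room.
import { WebSocketServer, WebSocket } from "ws";

type ConnectedClient = { ws: WebSocket; room: string };
const connectedClients: ConnectedClient[] = [];

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
  ws.on("message", (data) => {
    const msg = JSON.parse(data.toString());

    if (msg.type === "join") {
      // The client tells us which room it wants; you'd also authenticate here.
      connectedClients.push({ ws, room: msg.room });
    }

    if (msg.type === "chat") {
      // Broadcast only to clients tagged with the same room as the sender.
      const sender = connectedClients.find((c) => c.ws === ws);
      connectedClients
        .filter((c) => c.room === sender?.room)
        .forEach((c) => c.ws.send(msg.text));
    }
  });
});
```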
For example, the route they connect on, or the first message they send: "I want to connect to room 1" or "room 2", along with their cookies, so you can do all the authentication logic there. Then any time there's a message, you broadcast it to everyone in that room. "Does WhatsApp use WebSockets?" I think so; I've read that somewhere, most probably they do, 99%. "How can we do something similar in Postman?" I don't know if Postman supports real time; someone told me recently that it does. This is how I'd figure it out: just open Postman and look... I don't see anything here, post, get... there might be a way, maybe after you sign up; just Google it, and if there's a way you'll find it, otherwise just use Hoppscotch. "Is there an open-source repo where we can look at standard WebSocket code structure in Node.js?" Depends on your definition of an open-source repo. I showed you the real-time chat server, which has a pretty decent architecture; a lot of this was in Backpack too (Backpack had chat at some point), but it got removed eventually, so it won't be there now. Is there another repository with WebSocket logic? I can find one, but the one I showed is already a lot of code if your goal is to learn: go through the UserManager and similar classes. There is a message handler: whenever data comes in on a connection, the on-message callback hands it to a handler that checks, is the type of message "join room", "send message", "upvote message"? What we were building there was a chat system where people post doubts, others upvote them, and the most upvoted doubt surfaces in a section the educator can look at. You can read through that code. "Say an Among Us game picks a server in Asia, the US or the UK and everyone joins the room there; does that mean most games use sticky connections?" Yes, I'd assume so. If a lobby has five users, there's no point distributing it; that's over-engineering. If a company worth a billion dollars isn't doing it, why would you? I'm not saying the sticky approach is optimal; the optimal thing is a distributed, non-sticky WebSocket layer. But do you need it for five people? No. Same with WebRTC: if you're creating a call between two people, something like Omegle, should you use peer-to-peer WebRTC? Yes. Can you build a distributed SFU architecture? Yes. Should you? No; peer-to-peer is good enough and gives you lower latency, because the two peers connect directly and share video with each other. If you're creating something like Gather Town, where a single room has 100 people, sometimes a thousand people,
all talking to each other and sharing video, then you should go for a distributed SFU architecture. So it's about the choice you make. With that... oh shoot, I closed all the Notion docs, let me open them again; Zoom is so hard to navigate. Okay, there you go. "REST versus GraphQL?" They're just different ways of talking to a backend. How does REST work? You send, say, a POST request with a body and some headers to the server. GraphQL gives you a different, specified way to send data to the backend. Why is it beneficial? Because if you're using GraphQL and you have a lot of relationships in your backend, you don't have to do round trips. A good example: if you're creating Facebook, or even better, Swiggy, then when the user first lands on swiggy.com you usually ask the server for the user's details, get them back, then ask for the restaurant details, get them back, then ask for the order history, get it back; you do a lot of round-tripping. What if instead the client could describe what it wants: "I'm the browser client, please give me just the name of the user, the order details (the order type and the restaurant name), and the current restaurants with their name and rating." And the mobile app might want a different shape: "I'm the mobile app, I don't show order details, so skip those." What if your GraphQL server had a single endpoint that could serve both of these, rather than you creating two endpoints, /api/v1/web and /api/v1/mobile, or one endpoint that returns everything (in which case the mobile app downloads order details it never shows)? This became a problem at Facebook: they're a very big company and realized every client needs its own shape of data, more of this, less of that, so they created GraphQL. It lets the client specify "I want data in this shape", the server responds with exactly that, and you don't have to maintain ten different endpoints for web, mobile and so on.
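As a hedged illustration of that idea, here's what such a query might look like from the browser client; the endpoint and field names are made up, this is not Swiggy's actual API:

```typescript
// One endpoint, one round trip: the client describes the exact shape it wants.
const homePageQuery = `
  query HomePage {
    user {
      name
      orders { type restaurantName }
    }
    restaurants { name rating }
  }
`;

async function loadHomePage() {
  // A mobile client could send a smaller query (e.g. without "orders")
  // against the same /graphql endpoint instead of hitting a separate route.
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: homePageQuery }),
  });
  return res.json();
}
```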
"Can you explain this line?" It's `ws.on("error", ...)`: if there's an error, call this function. You can either write your own callback that takes the error and logs it, or you can pass console.error directly, so that whenever an error comes in, console.error gets called with it. Look at what the callback does: if it's just going to receive an argument and pass that same argument to another function, there's no point creating a fresh function; just give the function name there and it will be called with the error. "WebSocket connection error... oh, there you go, disconnected." Sir, you're passing the server there: that WSS needs to be WS. "Built an application called hello world", very nice. "Can you give a high-level overview of integrating TimescaleDB into a graph, with fancy loading animations, in our Paytm application dashboard?" Have you built this? Very cool; or are you just asking? Either way, the high-level overview: TimescaleDB is a place you feed time-series data ("at this point in time your portfolio value was X, Y, Z"), and you can also tell it to maintain a 24-hour chart, a 3-hour chart, a 1-hour chart, a 7-day chart; it caches all of those aggregates for you in separate tables, so there's a 24-hour series for SOL, a 1-minute series for SOL, and so on. Then when the user comes to your website (a better example is CoinMarketCap) and asks for the 1-day data, the 7-day data, the 1-month, the 1-year, you don't want a single table with every raw entry; you want the 1-year, 1-minute and 7-day series pre-computed and cached so you just read them. That's what Timescale and other time-series databases help you with. Alright, more questions and then... oh, it's 9:07, I have to rush. Notion question one, two, three. "Apart from early-stage startups, is TypeScript used in the industry, in large orgs?" Depends on your definition of large. It's definitely used on the front end everywhere: if you join Google or Amazon and write front-end code, you're using TypeScript. Backend code is trickier; in enterprise companies probably not much (we used a little bit of it at Goldman), so it's very nuanced which company uses it and which doesn't. Java is more popular in enterprise companies, but big companies, well, it depends on your definition of big: Gather Town is big, a billion dollars, and it does use Node.js, and a lot of very valuable companies use Rust, so it really depends on the founding team, usually. "Is jumping between technologies doable?" Yeah, I don't know why people worry about this so much; it's very easy to pick up a new stack if you're very good in one, you'll move from one to the other easily. "Do most organizations..." Most startups do; "organizations" is a big word. "How do you build your connections if you can't go to conferences?" I never go to conferences, just in case. I go through a lot of open-source code and talk to people there. It's very hard to reach a Bollywood celebrity; it's very easy to reach a coding celebrity. If you contribute to their codebase, you'll probably be able to reach the maintainer; if you contribute to Ethereum, you'll probably be able to talk to Vitalik, so
it's actually that's the best part right of tech okay the most the biggest celebrities are actually very achievable provide you code and you know I I hate going to conferences can you please give an overview of the graph section is built in a the graph um most apps like these use something called trading view SO trading view provides an API for something like this you know no one's building this in house it's very hard to build this in house um so yeah and this has a very clean easy API they have a premium version a free version to you know do this so yeah graph is actually for a trading app like this dedicated to Trad everyone I think including maybe not Zera but like most of them use this how do you learn a new technology new tutorial coures available yeah yeah so this advice remains very same very very concrete from my side okay you learn a new technology by building something big in it if you lucky you will find a company that's building something big in it for example if you want to build learn AI you join if you're lucky you'll join a company that's early with very smart people who are building it and you'll everyone's figuring it out you know it's not like there's a playbook for anything you know how do you build AI agent or you know whatever co-pilot thingy there's standard guidelines how you would build in from first principles you will learn it very well if you're working at a company that's doing it and you know it has Founders that have background in it so that's probably the best answer I can give that's the best way to learn uh but you know I get that not everyone can have that opportunity so then the second best thing is just looking at open source projects and you know trying to solve issues there what between web soet server and web soet client what is the difference between express Express and fetch Express is a library you import on a server fetch is a request you send from a client so what is it difference between a websocket client it gets initialized in the browser what is the differ what is the websocket server it gets initialized on the server what is sticky connection I said sticky connection is if you have a fleet of servers if you have multiple websocket servers and you have a restriction K are users in room one need to go to websocket server one then your connection is sticky you need to make sure okay you know if all of this is in front of a load balancer I repeat a lot of times you might have a load balancer you don't give the end user you know three URLs ws1 do something WS2 do something you give them a load balancer URL WS do whatever you know random website.com your load balancer needs to make sure AA request is coming for room one I need to send it here if someone is connected to Room 2 then where are all the room 2 connections they on webs three I need to send it here so on and so forth that's what a sticky connection means um in the real life you said you would message would be broadcasted to everyone I'm confused because message should be transferred to recipient right no if you have a chat application if this is my chat interface and I type hi there over here this message should reach a websocket server and it should get broadcasted to everyone who part of the same room right if someone else has a mobile app they need to see your high their message over here and then if someone else has it they also need to see it so it needs to be broadcasted to everyone what makes you think it does not can you answer that same question someone else answered asked um 
oh I have to think about this now okay cool chat um okay I'll open it one more time guys I have a hard stop in maybe 5 10 minutes but happy to answer answer quickly a few more questions what is the difference between polling and dos attack in terms of handshake pattern polling is when there is no nothing malicious in polling if this is a websocket server one of your browser is sending request again and again this isn't malicious this gu just like if you're on lead code you're just asking is my submission done is my sub Mission done there is nothing distributed about it DS May what is the biggest word first D distributed denial of service why why is D very important because if the same person sends a lot of requests you can rate limit them you can be like sorry you've sent too many requests when there is a distributed attack when you know the requests are coming from various IPS all around the world together is when it becomes a distributed denial of service attack here you maliciously send requests again again and again that is what DS is what is polling yeah can can I get some data okay I cannot can I get some data okay cannot you're not Distributing and you know targeting a server you're just trying to access their servers you are being a little impatient and sending a lot of requests but that's fine you can be always be rate limited when you do a distributed attack it's very hard to rate limit because messages are coming from so many IPS so you know that's why DS is different from this one oh shoot I lost my chat AL I'll stop sharing um can we please Skip One Week class if everyone yeah I think I'll do a poll I think most probably people will select let's skip it um yeah before 1200 we we'll do it even though this is like sort of the beginning of 1200 but yeah I think next week will be a holiday most probably we'll do a poll uh there you go post man supports websockets now so you can read at that blog post uh honest feedback please consider let's consider static generation ding routes yes I this someone posted this which leads to this to cash Dynamic routes during build added this does not cash for McQ or code type questions problems only notion blogs that makes sense oh you added a code can you just create a PR actually um H and also like can it not do it for McQ questions can you not have a part of the website statically generated the answer could be no but I think you can I could be wrong okay you know if this is an McQ question that I open also just create a PR uh and you know happy to create a bounty for this if I create open this page and youit has an McQ and you the right side is dynamic the left side is not Dynamic can you statically the left side the answer could be no but I think you can let me know either way feedback the key to learning is feedback public feedback decide to move forward but sometimes it creates ambiguity regarding DSA polls online glasses recording of the Creator but her chose bad quality Zoom recordings are the wor I know I don't think I'm doing this this is the first time I will not follow polls um and you know for two reasons number one yes very hard to zoom recordings are a pain especially if it's recording of a live class to understand I don't think anyone will go through this um and uh yeah exactly exactly I get your point and it's not happening second thing is it's very expensive it's almost seven eight times expensive to get a Cort from someone uh versus you know just getting an instructor and I'd rather spend that money on bounties I think this is a 
web dev cohort — if we are going to burn money, rather burn it on something that helps everyone learn, because that is the goal of the cohort. DSA is something extra; a small population might benefit from it, especially college students looking for C++ classes. But yeah, I get your idea. What happened to the merchant app in the Paytm project — if you're completing it, can you please give some guidance? Yeah — the merchant app needs websockets, which is why we've done websockets today. Probably in an offline video; a lot of offline videos have piled up, so let me complete those first. As I said, 0-to-1 people can still stay until the end of the month, so if you push a little bit more and tag me on Discord, there's a good chance I make a video on the merchant app. It'll be difficult — it'll be like a 3-4 hour video. Share some Rust resources? The Rust book — best one. Am I excited? I just like this, man — it's easier, it's more productive, it's something new you get to learn; everything else is available everywhere, which is why I'm a little excited. It's also easier, as I said, because it's simple stuff, it's not a lot of code — and when we reach Kubernetes you will actually see this: run five commands and you have something running on the cloud. Things like that. Cool, I think that's it, we'll call it here guys — I have to meet someone — but this was great, thank you for joining. Let's do some more pub/subs and messaging queues tomorrow. Alright, see you guys, bye-bye.

I am recording — cool. Guys, I'm a little — not too much, but a little — under the weather, so we'll see how long the class goes. Today's class: if you have not gone through yesterday's class, it'll be slightly difficult — I wouldn't say too difficult, because we'll be reintroducing everything. Yesterday's class was on advanced backend communication and websockets; today's class — we need better search over here — today's class is on Redis, pub/subs and messaging queues. Let me share the link to the slides. If you did not go through yesterday's class, today's class might feel overwhelming at some points, but I'll try to take everything from the basics. Just to give a brief of yesterday's class: yesterday we understood how you do advanced backend communication — more specifically, how one backend system talks to another backend system. I sort of introduced pub/subs as well as messaging queues yesterday; today we're diving deeper into pub/subs and messaging queues. What did we discuss yesterday? HTTP we already know. Websockets we discussed yesterday — even though websockets aren't really used for backend-to-backend communication, they are usually used for frontend-to-backend communication. Pub/subs and message queues are almost always — maybe not even almost always, always — used for backend communication, for two different backends to talk to each other. You will never have a browser publishing directly to a pub/sub, and you will never have a browser — most probably, unless I'm missing some very weird use case — publishing directly to a message queue. It is for communication between one backend and another backend. Harkirat, can you give me an example? We gave two examples yesterday, a Paytm example and a LeetCode example; today we will dive deeper into the
LeetCode example. What are we learning today? We are learning queues, pub/subs and Redis. If you want to code along, you will need Docker on your machine, because we'll be using Docker to start Redis. If you don't have Docker, you might not be able to code along — you can still figure it out by getting a Redis instance online on Aiven or some other provider — but I would just say use Docker, because even with a hosted instance you will still not have a few things locally that you might need. So please make sure you have Docker installed. What are we learning? Queues, pub/subs and Redis — more specifically, how you would build a system like LeetCode: how backend communication happens in a system like LeetCode, where a queue comes into the picture, and where a publisher-subscriber comes into the picture. Harkirat, then why are we learning about Redis? Because Redis gives you a bunch of things, two of those things being pub/subs and queues. Harkirat, can I do pub/subs without Redis? Yes, you can — there are many projects, libraries and implementations that let you do pub/subs (I'm forgetting the name of one). Queues also have many implementations: RabbitMQ is a good example of a queue. What is a good example of a pub/sub? Is SNS a good example? Probably — Simple Notification Service: someone can notify, other people can listen for notifications. Maybe not technically true — SNS is a service by AWS — but if not SNS, there are various pub/sub implementations, and Redis is one of them; Redis lets you do pub/subs. What pub/subs are, we will get to very soon.

Let's understand where, in a LeetCode-like system, you need queues and where you need publisher-subscribers. The final architecture of LeetCode is sort of complex, so before we reach the final architecture that has both the pub/sub and the queue, let's look at a simpler example with just the queue. The architecture says: any time a user on your platform — for example, the user is on leetcode.com and opens the two-sum problem — tries to submit a problem, a request should go to your primary backend. Harkirat, what is the primary backend? Your Express server — a simple Express server that receives a request from the browser: okay, for problem ID 1 (or in this case the problem ID is "two-sum"), the code is some code snippet, and the language, in this case, is Python. Whatever the input might be, it reaches the primary backend, and the primary backend pushes it onto a queue. Why does it push it onto a queue? We have the user's code — why not just execute it here and tell them the response: there you go, your code succeeded or failed? The reason you need to delegate this out is: what if the user sent you code that's malicious? What if they sent you something like while(1) — or whatever the way to write an infinite loop in Python or C++ is? You would take their code and run it on your primary backend, and the CPU of the primary backend — which is supposed to serve other users — is now being used to run that user's code. Which is why you delegate the user's code someplace else: okay, your code will run somewhere else, let me take your code and send it to that worker. That's why it says W1 here — it's a worker node. Harkirat, why is it called a worker? Because its job is to work on something: pick a thing, do its thing, respond back, then pick the next thing, and so on — which is why these are usually called workers.
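To make the first hop concrete, here's a minimal sketch of what the primary backend's submit route could look like. This is not code from the class — the route name, queue name, payload shape and the use of the ioredis client are all assumptions for illustration:

```typescript
// Primary backend: accept a submission and push it onto a Redis list used as a queue.
// Hypothetical sketch — route name, queue name and payload shape are assumptions.
import express from "express";
import Redis from "ioredis";

const app = express();
app.use(express.json());
const redis = new Redis(); // defaults to localhost:6379

app.post("/submit", async (req, res) => {
  const { problemId, code, language, userId } = req.body;
  // LPUSH adds the submission to the left of the list; workers will BRPOP from the right.
  await redis.lpush(
    "submissions",
    JSON.stringify({ problemId, code, language, userId })
  );
  // The backend does NOT run the code itself — it only queues it.
  res.json({ status: "queued" });
});

app.listen(3000);
```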
The primary backend puts your problem onto a queue, and your workers pick problems from the queue. Why do you need a queue? Why can't my primary backend simply tell worker one: please execute this code? That way I've delegated it, it isn't running on my machine, and there's no security issue. The reason is: what if 100 people send you requests? If 100 people send you their code, you don't want to send it all to the same worker that only has 4 GB of memory. LeetCode guarantees that your code runs on, say, a two-core machine with this much CPU, and you have to make sure those guarantees are met — you have to make sure every worker runs a single submission at a time — and queues are a very popular way to do that. Yes, if you have only two workers and 20 people submit problems, you will have a long queue, workers will pick items up slowly, and a lot of people will be waiting for a response — but you can guarantee that worker one only picks one item at a time, runs it, stores the final response somewhere, and then picks the next thing, and worker two does the same. So queues let you pick things up one by one, and if everyone tries to do a very expensive operation at the same time, you can smooth that out: push it onto a queue and slowly work through it. That's why submitting on LeetCode takes some time to show test results — there's an asynchronous backend process that's processing your code. That is where you need queues. Another benefit of queues: even if you have zero workers, people's submissions at least remain in the queue, and whenever you bring a worker up they start getting picked up. Another benefit: you can autoscale your workers — if the queue length becomes 100, I should probably start 20 workers, but if the queue length becomes three, I should probably stop most of them, one is fine. This becomes super important because on LeetCode, if there are 100 people on your platform, you don't really want to keep 100 workers alive at the same time — these are very expensive machines — so you have to autoscale them very aggressively, and queues are a good fit for that. So that is one place you want queues: generally, whenever you have an expensive operation that needs to run on a single machine. Another good example is video transcoding. Harkirat, what is video transcoding? If I go to YouTube and upload a new video, it takes some time to process: I upload a single 1080p video and it gets converted into four or five different qualities — that's transcoding, and it's also a very expensive operation. Here also, whenever a user uploads a video to YouTube it goes into a queue, and workers eventually pick it up and transcode the video. So that's another use case where a queue makes a lot of sense. Good examples of queues are RabbitMQ, SQS (Simple Queue Service in AWS), and Redis also lets you do queues.
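And the other side of that queue — the worker — could look roughly like this. Again, a sketch under assumptions: the queue name matches the producer above, and runUserCode is a hypothetical placeholder for the actual sandboxed execution:

```typescript
// Worker: block until a submission is available, process it, then pick the next one.
// Sketch only — runUserCode() is a placeholder for real, isolated, time-limited execution.
import Redis from "ioredis";

const redis = new Redis();

async function runUserCode(submission: { code: string; language: string }) {
  // In reality this would run the code in an isolated container with a time limit (TLE).
  return { status: "AC" as const };
}

async function main() {
  while (true) {
    // BRPOP blocks until an item exists, so each worker handles exactly one job at a time.
    const item = await redis.brpop("submissions", 0);
    if (!item) continue;
    const submission = JSON.parse(item[1]);
    const result = await runUserCode(submission);
    // Persist the verdict (DB write omitted) and/or publish it — that part comes next.
    console.log(submission.problemId, result.status);
  }
}

main();
```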
The second thing we have to learn is pub/subs. Quickly, to revise what we've learned so far: whenever you have users who want to do an expensive, long-running operation on your machines — be it submitting a LeetCode question or uploading a video for transcoding into 720p and 360p — you probably want an architecture like this, and you probably want to upscale and downscale workers based on the length of the queue. That is where you need queues. Where do you need pub/subs, or publisher-subscribers? Even before that, what is a publisher-subscriber? The name is sort of self-explanatory. Let's look back at our example: the user submits some code, it goes to the primary backend, onto the queue, and gets picked up — and once it is picked up, the worker needs to tell the browser, right? Okay, your submission was accepted, or it was rejected — it needs to send the final result back to the browser. If you remember from yesterday, how does LeetCode do this? LeetCode uses polling. If I right-click over here, open the network tab and click submit, you will see that after the submit request goes through, it starts sending a /check request again and again and again — it starts polling the backend: is it done? is the submission done? Initially it gets back "pending, pending, pending", and eventually it gets the final answer: okay, it failed with this specific error. So LeetCode uses polling — not long polling, plain polling.
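Just to make that polling pattern concrete, this is roughly what it looks like from the browser side. This isn't LeetCode's actual code — the /check endpoint name, query parameter and response shape are assumptions:

```typescript
// Browser-side polling sketch: keep asking the backend whether the submission is done.
// The /check endpoint and the response shape are assumptions for illustration.
async function pollSubmission(submissionId: string) {
  while (true) {
    const res = await fetch(`/check?submissionId=${submissionId}`);
    const data = await res.json();
    if (data.status !== "PENDING") {
      return data.status; // e.g. "AC" or "TLE"
    }
    // Wait a second before asking again — these are the repeated /check
    // requests you see in the network tab.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
```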
But can you do something better? Can you use websockets, the thing we learned yesterday? What is the benefit of using websockets here? If you use websockets, the browser won't constantly poll the backend — bro, has it happened, has it happened — and it won't overwhelm your primary backend, which in turn would overwhelm your database. Because at every step here we're also writing to a database — I've not kept it in the diagram for ease of understanding, but whenever you make a submission, an entry goes into the database that says this problem is processing, and when the worker is done with it, the worker updates the entry: this problem has been accepted or rejected. There is a database here; we're just not drawing it. When the user is polling, they're overwhelming the backend again and again — please give me the result, give me the result — asking maybe ten times. What if instead the server could push: okay bro, I'm done, your submission was a TLE, time limit exceeded — and tell the browser, so the browser can show it on the frontend? For pushing events from a server to a client, what can you use? Something we discussed yesterday: websockets. So what can happen is: after the worker is done processing your code, getting the response and checking whether it's correct, it can send the result to a websocket server which is connected to the browser. Whenever I go to leetcode.com, what if there is a persistent connection to a websocket server — alongside the HTTP requests I send when I click things — and any time I click run and the worker finally processes it, the worker can signal to the websocket layer: please tell the user with ID 1, if they're connected to you, that the status of their recent submission was time limit exceeded. That is how a worker can talk to a websocket server and publish an event to the browser.

Harkirat, why did you not use the websocket layer — sorry, why did you not use messaging queues here? Why can't the worker directly talk to the browser? Why are you complicating it with so many services? Good questions — let's answer them one by one. A worker can never directly connect to a browser. Workers are very transitory: they come up and go down, and they should never be exposed over the internet. The worker's job is: bro, give me the code, I will run it, I will put the entry in the database, and, let's say, I will publish to a pub/sub — that is all, I will not do anything more. The worker's job isn't to hold connections to end users, and since workers go up and down quickly, you don't want your end user connected to a worker. So you have a fresh service — what do I mean by service? Another Node.js application — that the browser connects to: the websocket layer. Whenever the worker completes the submission, it can tell a pub/sub: whoever has user ID 1, please send them this status for their submission.

Why do you need a pub/sub? Why can't the worker directly tell the websocket layer — this guy can talk to that guy over HTTP — okay, his problem is done, tell the browser? This is a beautiful question, and the answer is something we discussed towards the end of yesterday's class. In the real world you have a fleet of websocket servers, not one — one websocket server can only support, say, 10,000 users — so you have multiple websocket servers, and your user could be connected to any of them; maybe they have a persistent connection to WS3. So when the worker is done, it doesn't know: should I send this information to this server, or this one, or that one? What it can do instead is publish an event to a pub/sub, and whenever a user connects to the websocket layer — whenever there is a websocket connection between the user with ID 1 and some websocket server — that server subscribes to an event (a channel) called, say, "user ID 1". The worker knows the submission was for user ID 1, so it publishes to the pub/sub: user ID 1 has this submission result. That way the worker can reach the right websocket server, instead of asking each one "are you the person connected to user 1? No? Are you?" — it just says: I will publish to the pub/sub; whoever wants it, please subscribe.

It might also happen that you have two browser tabs open, or a mobile app connected to websocket server 1; or you share a LeetCode account — you might have a friend in Maharashtra while you're in Chandigarh, both sharing the account. He submits; you should still receive the notification that the recent submission was a success or a failure. So multiple websocket servers can subscribe to the same event, and a single published event goes to the person in Maharashtra as well as the person in Chandigarh. Why would both of them need it? Assume there's a use case where, whenever a submission is made, a notification appears saying "Harkirat, you have a successful submission", irrespective of whether you have that page open or not. So if your friend in Maharashtra is on LeetCode and you make a successful submission here, they should also see a notification there — as long as it's the same account you're sharing — that says something has happened.
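Here's a minimal sketch of that pub/sub hop, assuming the ioredis and ws packages. The "user:&lt;id&gt;" channel naming, the query-parameter user ID and the message shape are all assumptions — not how LeetCode actually wires it up:

```typescript
// WebSocket layer: when a user connects, this server subscribes to that user's channel
// and forwards anything a worker publishes there. Channel names are an assumption.
import Redis from "ioredis";
import { WebSocketServer } from "ws";

const sub = new Redis(); // dedicated subscriber connection
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws, req) => {
  const userId = new URL(req.url ?? "", "http://localhost").searchParams.get("userId");
  sub.subscribe(`user:${userId}`);
  // A real implementation would keep one shared listener and a map of userId -> sockets;
  // this per-connection listener keeps the sketch short.
  sub.on("message", (channel, message) => {
    if (channel === `user:${userId}`) ws.send(message);
  });
});

// Worker side (separate process): publish the verdict to that user's channel.
// const pub = new Redis();
// await pub.publish(`user:${userId}`, JSON.stringify({ submissionId, status: "TLE" }));
```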
Now I'll take a pause — I realize I'm connected to a VPN, which I shouldn't be — but let me take you through the final architecture one more time, and then I'll pause. This is the brief of a publisher-subscriber system: in this case, a worker publishes to a pub/sub, and whoever wants to can subscribe to it. If we had a single websocket server, the worker could talk to it directly; but since we might have a fleet of them, you probably want to go through a pub/sub. Cool — this is what the final architecture looks like. There is a browser over here that sends the request; it reaches the primary backend, goes onto a messaging queue, and whenever a worker has time it does its thing. Say there are only two workers: one picks the first item, the other picks the next one, and they sequentially pick everything up. Whenever a worker picks a problem and finds the final verdict — whether it's an AC or a TLE — it tells the pub/sub: user ID 1 has a result. Whichever websocket server user ID 1 is connected to receives this via the pub/sub and sends it back to the browser. That is the brief of today's architecture — that is where you need messaging queues and pub/subs.

Now let's do a poll: did we understand this, yes or no? Only 77% understood, which is fine, we'll do a recap very soon. Second question: is my internet good? Yes, it's good. Alright, let me look at the Q&A really quickly and spend maybe six more minutes on the architecture before we proceed to coding — after this it's just a bunch of coding: understanding Redis, understanding how you create a pub/sub and a queue. Any specific question — could you repeat the architecture? Let me recap it. Let's build the ugliest version of LeetCode — the one you might have seen on some other YouTuber's website that got hacked by someone (I don't know whom — not me). The ugliest version: this is my EC2 machine running a Node.js process, and some user comes and sends a request: I want to submit this C++ code, I'm user ID 1. The ugly version says: the Node.js process will spawn another C++ process and run the code. What is the problem here, why is this approach ugly? Because the C++ code could look like while(1) — do something forever — and now your server is choked; you've let a malicious user choke your server. What else could the user write in C++? Something like system("ls"). The user came to your website, selected C++ as the language, and wrote code that runs the command ls on the machine it's executing on. If you're running it natively on your EC2 machine, the output of that C++ process gets returned to the user in the browser, and they can explore your file system — your .env files, your MongoDB URL, your Postgres URL, and so on. This is exactly what happened with one of the websites I was exploring — I was like, hm, I can find a MongoDB URL here. So what is the learning? That is the ugly approach. Harkirat, what is the better approach? The better approach is: this machine doesn't run anything; it's your primary backend, and you don't want to run any external code here.
You want it to run on a different EC2 machine. But the problem is: how do you know how many EC2 machines you need? Is one enough? What if 20 people submit at the same time — how will a single EC2 machine handle 20 submissions? It cannot, which is why you maintain a queue. Okay, I've just started LeetCode, I'm self-funded, I'm bootstrapped, I don't have funding, I only have a single EC2 machine: if 20 people come to my website and submit, they will slowly see the response — the first person gets it immediately, the second in a while, the 20th very late — because the submissions are picked up one by one by my worker, and that worker is on a separate machine. If the user's code is fine, great — it's running on an empty machine; that machine doesn't have any of your code. If the user's code has a while(1), that machine gets choked for a second, and of course we can always have a timeout: if the user's code takes more than one second, you kill it and say the verdict was TLE, time limit exceeded. One by one everything gets picked up. That is why you need messaging queues: so that even if 20 people come, your expensive operation can still be handled by a single machine — and of course, if the queue becomes really long, you should ideally scale up the number of workers running the end-user code. That is the first part: messaging queues.

What is the second part? Once this worker is done running the end user's code — say it was while(1), it reached this worker, the worker evaluated it — it needs to tell the user: you have a TLE, your code was while(1), time limit exceeded. How can this EC2 worker talk to the user? There is no easy way, so you do one of two things. Option one: the worker puts an entry in your database — submission ID 1, status TLE — and the user polls: is my submission done? is my submission done? — and the Node.js process hits the database and returns the status. This is called polling; this is what LeetCode actually does, we just saw it in action — it was polling the server again and again. This approach is easy, and it does not use pub/subs.

Why do you need pub/subs? If you're using websocket servers — if you have a fleet, meaning multiple websocket servers that you can use to push events to the client — your user will randomly connect to one of them, and when they do, that server can tell a pub/sub system, a publisher-subscriber system: I have user ID 1; if you have any information related to user ID 1, tell me so I can forward it to them. That server is very interested in that user's feed — any real-time event, a submission accepted, a TLE — tell me and I'll forward it. The EC2 worker will of course still put the entry in the database, but it will also tell the pub/sub: user ID 1 just got a submission result, it was a time limit exceeded, please let them know — and that event reaches the right websocket server.

Another reason for having a pub/sub: what if LeetCode has another feature — whoever is live on LeetCode will randomly see a bounty here, "claim $5" or "claim premium", and you can click on it.
Say LeetCode introduces this: if you are very active on LeetCode, randomly throughout the day there will be a lottery, and it appears for whoever is currently active. If they want to add a feature like this, all they have to do is again publish to this pub/sub — okay, irrespective of user ID, for "user: any", please show the lottery button — so that whoever is currently on the website, with a persistent connection to the websocket layer (this server, that server, doesn't matter), receives it. Websocket 1, 2 and 3 will all subscribe to the "user: any" event; if an event comes for "user: any" it reaches all of them, and they forward it to everyone connected. This might have gone over your head, which is fine — if you did not understand the lottery use case, ignore it; I just wanted to show that you can have multiple publishers and also multiple subscribers. For now let's keep it simple and forget the lottery. The simple use case is: the worker tells the pub/sub "this person has a TLE", and if that person is currently active on the website, the specific websocket server they're connected to is subscribed to that channel. If there is no connected user with that ID, the worker still publishes, nobody picks it up, and that's fine — if the user isn't active on the website you don't have to send them the event; whenever they come back, they'll refetch their submission directly from the database.

That is how you do message queues and pub/subs in LeetCode. The thing to understand is the architecture: the first part remains the same if you're building a YouTube transcoding system as well, and the second part stays relevant whenever you want a pub/sub-like setup — there are other use cases; if you remember from yesterday, I told you that if you ever want to scale chat, a pub/sub is a decent way to do it, but you can go to yesterday's video and look at that. Now let me do another poll and see if this revision made it a bit clearer, because if you understand the architecture, writing the code is not too difficult — it's actually fairly straightforward. 90% — good, not great, but I think I should proceed now; for the people who did not understand, try to understand pub/subs, implementations of pub/subs, and queues, and you can worry about this architecture later.

Let's learn Redis. This slide has nothing to do with the last slide, so if you did not understand the last slide — specifically the architecture — it is fine. What is Redis? You might have heard of it as a very big buzzword: Redis is an open-source, in-memory data structure store, used as a database, as a cache, and as a message broker. Harkirat, used as a database — is it a SQL database, a NoSQL database? Why have we never used it? Why did you never introduce Redis as a database to us — why did we only worry about MongoDB and Postgres? The answer: even though Redis lets you store data, you should never use it as your primary database. Why not? There's a video at the end of the slides — well, some slide, whenever it comes — that explains this in very good detail; it's a 20-minute video.
It's called "I've been using Redis wrong this whole time." It will show you that if you really want to use Redis as your primary data store, you can, but it will make your life very complicated. Harkirat, if you cannot use it as a database, why did they even introduce Redis then — what is the use case? The use case: Redis is used aggressively for caching data. What does caching data mean? Let's take an example of a website — say the website we're on, daily code, projects.100xdevs.com. It has, let's say, 20 tracks like these. Assume this is a simple React + Node.js + Postgres application, nothing too complicated. Whenever you come to the landing page, your browser sends a request to the backend — okay, GET request to fetch all the tracks — the Node.js backend forwards the request to your Postgres database, gets back the response, and returns it to the user. What if 100 people come to your website together? Then you are making 100 database calls. What if a thousand people come together? Then you're making a thousand database calls. How often do you think this page changes? A track gets added on Saturday, one on Sunday, and now one on Tuesday and one on Thursday for the DSA classes — that is it. If this changes so rarely, do you really need to hit the database every time? Harkirat, yes, I know the answer: static site generation. No — static site generation is a Next.js thing; this is a simple Node.js backend. The answer here is caching. You want to cache the data. What does caching mean? It could mean many things. You could keep a variable called tracks in memory: only the first time a user asks on this endpoint do you hit the database, and then you cache the result in memory. What does "in memory" mean? It means you put it in a variable in your Node.js process: okay, I got the values from Postgres — however you'd get them, with Prisma something like const tracks = await prisma.track.findMany() (or findFirst, whatever your query is) — and then you just do globalThis.tracks = tracks, i.e. you cache that response in the Node.js process's memory.
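A minimal sketch of that in-process caching idea, assuming Prisma and a hypothetical track model, with a made-up 10-minute TTL:

```typescript
// Caching in the Node.js process itself: a module-level variable plus a timestamp.
// prisma.track.findMany() and the 10-minute TTL are assumptions for illustration.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
let cachedTracks: unknown[] | null = null;
let cachedAt = 0;
const TTL_MS = 10 * 60 * 1000;

export async function getTracks() {
  if (cachedTracks && Date.now() - cachedAt < TTL_MS) {
    return cachedTracks; // served from memory, no database call
  }
  cachedTracks = await prisma.track.findMany(); // hits Postgres only on a cache miss
  cachedAt = Date.now();
  return cachedTracks;
}
```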
Then all future requests for the next 10 minutes, say, can be served directly from your Node.js backend — you don't have to hit the Postgres database anymore; you cache it for 10 minutes. That is one thing you can do. Or you can cache it in Redis. Harkirat, why would I cache it in Redis? I'll just cache it in memory. There are a few reasons. Firstly, what even is Redis — what does "cache it in Redis" mean? As I said, Redis also lets you store data, right? So why not, on the first request, hit the database, get back the response, and cache it into Redis — and on all subsequent requests just get it from Redis? Harkirat, if 100 people come I'm still hitting Redis 100 times — what's the benefit, I'll hit Postgres versus I'll hit Redis? Redis provides in-memory storage: it's a process, very similar to a Node.js process, where your data is stored in memory, so retrieval is very fast compared to your primary store, Postgres or MongoDB. That's why you'll very often see Redis as a caching layer — rather than caching in process memory, you cache in Redis. Another benefit of caching in Redis: in a real application you'll have more than one server, because you'll have a lot of traffic, so you'll have a bunch of backend servers — and with Redis you can do distributed caching. What does distributed caching mean? If one backend server got the data from Postgres and put it in Redis, and now another browser hits the second server, that second server can also just check Redis and respond — whereas if the cache were in process memory, the first server's memory would have it but the second server's wouldn't, and you couldn't do distributed caching. Now you can, because there's a separate Redis instance running, with in-memory data storage, where you cache your data. That is the most popular use case of Redis — I mean, this is debatable, but the way I've most commonly seen Redis used in companies big and small is to cache data that comes from the database, to minimize the number of requests going to the database. This diagram is a decent example of how Redis runs on your machine — as I said, it's an in-memory data store.
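And the Redis version of the same cache looks roughly like this — a cache-aside sketch, assuming ioredis and Prisma; the key name and TTL are assumptions:

```typescript
// Cache-aside with Redis: check Redis first, fall back to Postgres, then cache the result.
// Key name, TTL and the Prisma model are assumptions for illustration.
import Redis from "ioredis";
import { PrismaClient } from "@prisma/client";

const redis = new Redis();
const prisma = new PrismaClient();

export async function getTracks() {
  const cached = await redis.get("tracks");
  if (cached) {
    // Any backend server behind the load balancer can get this same cached value.
    return JSON.parse(cached);
  }
  const tracks = await prisma.track.findMany();
  // EX 600 = expire after 10 minutes, so stale data eventually falls out on its own.
  await redis.set("tracks", JSON.stringify(tracks), "EX", 600);
  return tracks;
}
```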
You should ask a very good question at this point: Harkirat, you're saying it's an in-memory data store. This Redis server, this Redis process, is running on an EC2 machine, and you're saying it's in memory — similar to a Node.js process running there. What if the process dies? The data was in memory, so the data got lost. The same thing would happen without Redis: forget Redis and forget multiple servers — if you have even a single server and you cache the data in a variable called tracks, and your backend goes down, that variable is lost; when you restart the process you have to hit the database at least once to re-cache it. In-memory data gets lost; "in memory" means it lives in the running process, not on disk. So, Harkirat, what if Redis goes down — then all your data is lost? Firstly, even if that happens and the data is lost, it doesn't matter: it was cached data, not your primary store — your primary store is still the database. Redis comes back up as a fresh instance, we re-cache the data, and we're good to go; you had to hit the database a little bit more, but that's fine. So even if Redis did not back the data up to the file system somehow, your life wouldn't have been that sad — but Redis does back the data up to the file system. It still stores the data in memory — if you ask for it, it immediately responds from memory, it doesn't need to hit the file system — but in case it goes down, it can recover its state. It does this in one of two ways, and these are two very commonly used pieces of jargon in systems where you need something to be in memory but you also need to be able to recover it. I repeat: you need something in memory, but if it goes down, you want to be able to recreate the state.

Let me give another example of this. Whenever you go to backpack.exchange and place an order — you click buy or sell, anyone clicks buy or sell — the request goes to a queue, it gets picked up from the queue, and it's stored in an in-memory order book. What is an order book? It's this thing you see over here: whenever traders place limit orders (what is a limit order — forget about it for now) — for example, someone has placed an order to sell one USDT for 1.3, and someone else to sell one for 0.0004 — all of those orders are in memory, in a Rust process. They are not stored in a database; there is literally a variable called orderBook holding this thing, because order placement needs to be very fast. So here too: what if the Rust process goes down? Your order book was in memory, and all the data got lost. So a similar architecture to Redis is used to make sure that even though you're storing it in memory, if it goes down you can recover the exact order book state from before. How can you do it? There are two ways. Way number one: you maintain a queue — a log — of every event since the beginning: someone placed an order to buy at this much, someone removed an order, and so on. If it ever goes down, you just replay all of those events and recover the state. Redis lets you do the same thing with an append-only file (AOF): it keeps appending to a very long file — you added this data, you removed this data — and whenever the Redis process goes down and has to come back up, it takes that file, runs all the events from top to bottom, and recovers the state. So you literally have a record of every event since the beginning: if your exchange has been up for one year, you have a record of every order that was ever placed, and you can replay that queue of messages and recover your in-memory order book.
Can you guess the problem with this approach? The log becomes very long. If this has been running for a year, then whenever the order book goes down it will take 20 minutes to come back up, because it has to replay the millions or billions of events in there. How can you do better — how can you recover the order book faster? The answer is something called snapshotting. You can snapshot the current order book every hour: in hour one of the exchange (or website) being up, the state looked like this array; on 22nd Feb at 2 p.m. it looked like that. So you have snapshots every hour, and you have the whole event queue, and you can take the latest snapshot plus only the events that came after it and recover the order book. That is the other way to recover an in-memory variable, and Redis does the same thing: the Redis Database (RDB) file performs point-in-time snapshots. You can tell the Redis configuration something like "save 900 1": save the dataset every 900 seconds if at least one key has changed. What does "one key has changed" mean? It means some single thing changed — someone put some data into Redis or removed some data. If at least one key changed in the last 900 seconds, take a snapshot. So you have snapshots of Redis at every point and you can recover the data from there.

If you don't understand this, it's mostly fine — it might be asked in an interview, and it's good to know if you ever want to build something like this yourself: an in-memory data structure. Why would you want one, and what do I mean by an in-memory data structure? Something stored in a variable in a Node.js or Rust process — you do it because you want the system to be really fast. And what is the downside? If the backend goes down — if the Node.js process ever dies — your in-memory variable dies with it, so you need a way to recover: either you have a list of all events since the beginning, or you have snapshots every hour plus the events since the latest snapshot, and using the latest snapshot plus everything that came after, you can get back to your order-book state in memory. That is how Redis persists data, and how an in-memory data structure can be persisted even when it isn't Redis — these architectures are very similar; in fact, the backpack architecture was inspired by Redis.
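This is not how Redis is implemented internally — just a toy TypeScript illustration of the snapshot-plus-replay idea, with made-up order and event types:

```typescript
// Toy illustration of snapshot + replay (the idea behind RDB snapshots and the AOF),
// not Redis internals. Types and shapes are assumptions.
type Order = { id: string; price: number };
type Event = { type: "add" | "remove"; order: Order };

function recover(snapshot: Order[], eventsSinceSnapshot: Event[]): Order[] {
  // Start from the last snapshot...
  const book = [...snapshot];
  // ...and replay only the events that happened after it was taken.
  for (const e of eventsSinceSnapshot) {
    if (e.type === "add") {
      book.push(e.order);
    } else {
      const i = book.findIndex((o) => o.id === e.order.id);
      if (i !== -1) book.splice(i, 1);
    }
  }
  return book; // in-memory state is back without replaying a year of events
}
```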
Do we understand how Redis persists data? 80% — even if you don't understand this, as I said, it's fine; no one really needs to know the internals of Redis, and I'm happy to take doubts at the end. In fact — oh, it's 7:48, so let's take a break. Wow, we've not done any implementation in the first 48 minutes; this feels like a design cohort now. Let's take a 12-minute break; I'll answer some questions, and until then please run this locally and make sure it works for you, because we'll start coding and you need Docker — you have 12 minutes, install Docker. Let me look at the questions.

What if the queue dies before a snapshot was taken — do you mean the queue is unstable and data gets missed in the queue? Yeah, then you're screwed, you lose that data. But generally — for example, in backpack — the request first reaches the queue, and only then does the Rust process pick events from the queue, so it can never happen that something reached the Rust process and did not reach the queue; everything first reaches the queue and then the Rust process picks it up from there.

Why don't you keep it in a hashmap rather than using Redis — I've seen you doing caching in the cms with a hashmap? Yeah, you can do that. As I said, you can't do distributed caching then — if I have two backend servers, you can't share an in-process hashmap between them; that's one of many reasons. Also, if you have a very big variable and you store it in memory, your EC2 machine might run out of memory. Those are reasons you might not want in-memory hashmaps — but it works, you can still do that.

Once the snapshot is taken, do we clear the log? That depends on you — you can clear the queue if you want; Redis, from what I know, does not clear it. If you enable the append-only log in Redis, it maintains the log forever, from what I know — I could be wrong, but I think so. The same happens with Postgres: Postgres also has the concept of a write-ahead log (WAL) — pretty much the same thing: every event, before it reaches the actual Postgres database, first gets written to a file.

What if the value in the database changes — will the Redis cache change? No — that's exactly the question, great point. You're caching data for 10 minutes; what if the database value changes? Then people see stale data for 10 minutes. So usually what happens is: whenever there is a write — say an admin comes and sends an "add track" request — you also clear Redis; you send a request to Redis and clear that specific key, so that the next time someone asks, they get fresh data from the database.

So here's another interview question, in case you're interested, now that you know the topic — GET requests you cache in Redis. Let me just make this cleaner; this was asked to me in an interview very early in my career. This is your architecture: there are two browsers. There is a user browser that is sending requests to get tracks, and there is also an admin browser — I repeat, an admin browser — that is adding new tracks. Whenever you see a new track appear here, some admin went to the admin section of projects.100xdevs.com,
clicked a few buttons, clicked submit, and a new track got added over here. Now the question is: Harkirat, you implemented caching — forget that the admin exists for a second — when a user sends a request, you cache the result in Redis, only one request goes to Postgres for the next 10 minutes, and after 10 minutes the Redis cache clears and you re-hit the database. If on the third minute — I repeat, on the third minute — the admin sends a request to add a new track, what should you do? You have three options. Option number one: clear Redis, then put the data in Postgres. Option number two: update the data in Redis, then update the data in Postgres. Option number three: update the data in Postgres, followed by updating the data in Redis. From first principles, try to answer this. When the admin adds a new track, you need to make sure users get the updated data, otherwise everyone keeps getting the stale tracks — say a new DSA 3 track was added; users won't see it for the next 7 minutes if this happened on the third minute. So you do one of these three things: either just clear Redis, so that whenever a user asks, they get fresh Postgres data with the DSA 3 track and it gets put back into Redis; or you update the data in Redis — okay, a new DSA 3 track was added, keep it here — and then do the same in Postgres; or vice versa, first send it to Postgres, then to Redis. What would you do — option one, two, or three? Think from first principles; there is a single right answer, and it should be fairly obvious. Let's launch the poll.

Wow, the majority got it wrong. Most people chose option number three. Choice number one is the right answer: 32% selected choice one, 42% selected choice three, 24% selected choice two. Choices two and three are wrong. Why? Because it might happen that the first request succeeds — "update data in Redis" succeeded — and then your backend server goes down; or, for option three, "update data in Postgres" succeeded and then your backend server went down. You cannot guarantee that both will happen; your backend server is very unpredictable. What if Redis isn't responding at that moment? The server going down is the best example: you updated the data in Redis, told the admin the update was done, but you never updated the data in Postgres — or the Postgres update failed for whatever reason — so now you have stale data in Redis, which will get cleared after 7 or 10 minutes, and the data never reached Postgres at all. The same applies to option three, just the other way around: one of them happens, the other doesn't. Which is why the cleanest option is: clear the data in Redis first; if the clear-Redis request fails, don't touch Postgres, just tell the admin sorry, the request failed; if the clear succeeds, then store the data in Postgres, and if that also succeeds, you tell the admin it worked. Harkirat, what if the first succeeds but the second doesn't? That is fine: clear-Redis succeeded, put-data-in-Postgres failed — you tell the admin sorry, your update failed. Of course, you cleared a cache key for nothing, but that's fine; it'll get refreshed on the next database call from a user. The problem only comes when you're able to update one store but the second one fails.
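A minimal sketch of the admin write path under option one, assuming ioredis and Prisma, with the same hypothetical "tracks" key and track model as before:

```typescript
// Admin write path: clear the Redis cache first, then write to Postgres.
// If the DEL throws, we never reach the DB write, so nothing is inconsistent;
// if the DB write then fails, only a cache key was cleared, which is harmless.
// Names are assumptions for illustration.
import Redis from "ioredis";
import { PrismaClient } from "@prisma/client";

const redis = new Redis();
const prisma = new PrismaClient();

export async function addTrack(track: { title: string }) {
  await redis.del("tracks");                   // option 1: invalidate the cached key first
  await prisma.track.create({ data: track });  // then persist the new track
  // The next GET /tracks misses the cache and repopulates it with fresh data.
}
```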
If one write fails after the other succeeded — say you updated Redis, okay, DSA 3 was added, and whoever sends a GET request is now reading it from Redis, but the Postgres update failed — then you'd have to revert the Redis write, otherwise you have inconsistent data; and the same logic applies to option three. That is why — I'm hoping this is straightforward.

Doesn't starting a new EC2 machine take time? Do we use AWS Lambda to mitigate this, or some other infrastructure? It does take time. A few options here: number one, you usually don't bring up EC2 machines — you have a Kubernetes cluster where you bring up more pods, and bringing up pods (Docker containers) is much faster than bringing up a VM. You have compute — say 100 machines running — and as people come to your website and submit, you start small Docker containers. We'll do this when we finish this LeetCode project and reach the Kubernetes part; if you're interested in learning about it right now, go to YouTube and search for my Replit video where I discussed this. Short answer: you don't start EC2 machines, you have an autoscaling cluster with, whatever, 50 to 100 machines based on current load, on Kubernetes, and you go from there.

How often would data be cached in Redis — is there a possibility of stale data, and if so, why not use a materialized view that is either updated via a cron job or whenever the underlying table is updated? Why introduce a new cog in the infrastructure, which is also expensive? What do you mean by expensive? Firstly, a materialized view sounds super expensive in this case: you're saying recreate the materialized view any time data is updated — and data updates very often — so if a row changes in this table you recreate the whole view, versus in Redis you're just changing a single key. And for the "cog in the infrastructure" part: by that logic you could also say, why do you need any of this, just do it in memory — you don't even need materialized views; this is your Node.js process, do it in memory. In fact that's exactly what we do on app.100xdevs.com: these tracks and contents that you see are cached in memory. So yes, you don't strictly need Redis; I'm saying it's the most popularly chosen option. Why is it so popular? God knows — I joined company one, they were doing it; I joined company two, they were also doing it; I realized it's just a pattern that happens.

How expensive is Redis? I could be wrong on this — let me Google it — but I think Redis is single-threaded, which means you can run it on a very cheap machine, and it still scales very well, to something like a million requests per second.

When you say workers, are these Linux machines? They're Linux machines, yes — they're VMs, or Docker containers; eventually, when we reach Kubernetes, if you're really building LeetCode you probably want them to be Docker containers, but they can also be EC2 machines.

Redis is no longer open source for free? Yeah, there's some controversy happening there; I haven't looked at it, but yeah, something is going on.
Make a PR for that — so how do I get assigned the issue? Just contribute, create a PR, dude — this is one project where you can. Don't spam with readme changes, but try it out; worst case I close it, don't feel bad.

How do two different backends access the same Redis if it's in memory? Oh — it's in the memory of a different machine. Good question. This is EC2 machine 1 running backend 1, and EC2 machine 2 running the same backend — you've basically autoscaled your backends — and then this is EC2 machine 3, where Redis is running in memory. It's not running in memory on machine 1 or machine 2; it's in memory on machine 3, and that's why both backends can reach it over the network.

Are you using this architecture only when building LeetCode on YouTube? We were never able to reach the LeetCode architecture on YouTube; we're building it here, on projects.100xdevs.com, and this is the architecture that will be followed.

Where is the queue stored? Again, on an EC2 machine — everything is on a VM. The queue, in the end, is just Redis running (which lets you do queues), or if you're using something else, like RabbitMQ — it's just a process running on a machine that maintains a queue for you. It's all running on an EC2 machine, usually a separate one: if this is your backend EC2 machine, you'll have RabbitMQ running on a different machine.

What does in-memory data mean, and how does Postgres save data? In-memory means: in the process that is running — if you're running a Node.js process and there's a variable a in it, that variable is in memory, which means it's currently in RAM, random access memory — very quickly and easily accessed, and very expensive. You only have 8 GB of RAM but 1 TB of SSD, because disk is cheap but harder to access: if you store data on disk it takes time to fetch it, whereas if data is in RAM you get it immediately. And where does Postgres store data? On disk — unless I'm very mistaken, it stores data on disk, and when a request comes it fetches that data not from RAM but from your SSD, from some folder. Remember how we started MongoDB? We ran Docker with a volume — /data/db mounted to something — and that /data/db is where Mongo dumps all of its data; it's stored on the file system. Whereas Redis stores it in memory. Okay, I think we're at 8:02 now, so I should start.

The snapshotting thing again — how is the data persisted? Just Google it, but the high level is: forget the append-only file and keep it very simple — you take a snapshot of the database every few minutes; you dump the data every 900 seconds, or every 60, or every 300: back up the data, back up the data, back up the data. If something happens within that window, you're out of luck — that data gets lost. If you don't have an append-only log, you just have a snapshot every 60 or 30 seconds, and if it breaks between two 60-second intervals, then that much data is lost — but that's fine, it's not a lot of data, and you still have a very recent snapshot.
But if you want literally no data to get missed, then you need the append-only file log — a log of every event that happened. Cool. Alright, let's do a poll again — I think I've answered a few questions, so hopefully things feel better now, but either way I think we'll proceed and write some code next, because it's 8 already and the code is fairly straightforward. 92% — that's great, let's move on to writing some code.

First thing you have to do: start Redis. This is similar to starting Postgres, similar to starting MongoDB on your machine. You can either go to the Redis documentation and install Redis — on macOS you can do a brew install redis, same idea for Windows or whatever — or, as we've learned many times now, you can run Redis using Docker. 6379 is the standard port Redis runs on, which is why the port mapping is from 6379 to 6379; run it in detached mode; you don't have to give it a name, but feel free to. What are we running? The redis image. Let me copy this, open a terminal, kill everything I currently have, and rerun Docker with this command. It will work for some of you and might not for others depending on your Docker configuration, and it might take a while if you're fetching the Redis image for the first time — either way, try to run it. For me, I have to change the name because a container with that name already exists: docker run, --name with some random name (or remove the name entirely), -d for detached mode, which means run it in the background, with this port mapping, running the redis image. If I do a docker ps now, it tells me: on your machine, port 6379 is forwarding to port 6379 on the Docker container. That is Redis started.

What does Redis let you do? If you remember from the beginning: Redis is an open-source, in-memory data structure store used as a database, cache, and message broker. We understood how it's used as a cache; it can be used as a database — I shared that there's a video you can go through if you want to understand using it as a primary database as well — but "message broker" is the interesting part. It means the two things we discussed initially: messaging queues and pub/subs. Redis lets you run both of them, and that's what we'll focus on today; we'll touch on using it as a database or cache as well, but we'll focus on messaging queues and pub/subs.

After you run this command, you have to run one more. See, we have started Redis: if this is your Mac machine, any request that comes to port 6379 is being forwarded to a Docker container that is running Redis, also on port 6379. The problem is: how do I talk to it? Let's say I had started Postgres the same way, with Docker — how would I talk to it, how would I send an insert request, a delete request? I would create a Node.js process, use the pg library or Prisma, give it the Postgres connection URL, and it would be able to send a create, a delete, an update, and so on — that's how I would do it with Postgres and Prisma.
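For reference — we'll do this properly later in the class, and first we'll use the CLI — the Redis equivalent of that Node.js approach looks roughly like this, assuming the ioredis package and the port we just mapped:

```typescript
// Talking to the Redis you just started from a Node.js process,
// the way pg/Prisma talks to Postgres. Assumes the ioredis package.
import Redis from "ioredis";

const redis = new Redis(6379); // the port we mapped in docker run

async function main() {
  console.log(await redis.ping());          // "PONG" if the container is reachable
  await redis.set("greeting", "hello");
  console.log(await redis.get("greeting")); // "hello"
  redis.disconnect();
}

main();
```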
and it would be able to send a create, a delete, an update, and so on. That is how I would do it with Prisma and Postgres. Do you remember psql? I shared that you can use the psql CLI, a command line interface, to send a query directly: insert into users ..., delete from users ..., and so on. What is a command line interface? Something you can run right here in your terminal. If I run psql here it complains that the connection is not correct, because I haven't given it the right username and password for a Postgres database, and I don't even have a Postgres database running. But it is nice to have a CLI that talks to Postgres, so I can run commands directly in the terminal without starting a Node.js process.

We will of course eventually start a Node.js process, but before we get there let's try to interact with Redis via a CLI, a command line interface. "Harkirat, so you're telling me I need to install something on my machine now? Some redis-cli, very similar to psql?" Ideally you would have had to install it, but thankfully, whenever you start Redis in a container, the container already has redis-cli inside it. You can just go inside the container, run redis-cli, and talk to Redis; you don't have to install it on your Mac. Can you install it on your machine? Yes, you can, but why should you when it's already in the container?

"Harkirat, how do I go inside the container?" If you remember, you go inside a container by running docker exec -it (which means interactively) followed by the container ID and /bin/bash. What does "going inside" mean? It just means I want to run bash commands on that Docker container, nothing else. So: first run docker ps to get the ID of your container, then run docker exec -it <container id> /bin/bash to go inside, and then run redis-cli inside the container. Reach this point and then we will proceed.

What have I done? I started the container with the docker run command, then used docker exec to go inside it, then ran redis-cli, which already exists in this container, and now I'm seeing a prompt like this. I can send it commands. If this were Postgres I could have written select * from something; this isn't Postgres, so of course SQL won't work here, something else will. Have you reached this point? Take two more minutes: run Redis locally, then connect to it locally using redis-cli.

If you want to go down the fancy route you can also install Redis directly on your machine: download it from the Redis site, unzip it, and build it from source. You'll find redis-server in there, but you have to run make and you need a C/C++ toolchain on your machine, which is why Docker is just easier. I'm happy to post the steps if you want to install it directly, but it's simpler to use a Docker container. How
many of you are able to run this much locally? How many of you are able to interact with the Redis instance you just started? Most of you are, great. It was 90, then 84-85; I think that's decent. Cool.

So now the question is: let's make Redis do some magic. What did Redis say it can do? It can store data and it can act as a message broker. Let's start by storing data. The way to store data in Redis is set: give it a key, for example user, and then give it a value, say harkirat. If I press enter, the command I ran is set user harkirat. It's storing a key-value pair; here the value of user is harkirat.

If I were using Redis as a cache, I would store something like set tracks "<whatever tracks exist on projects.100xdevs.com>". Unfortunately you can only store a string here, you cannot store an object, but you can always stringify an object and store that. So the value would be something like an array, the first entry of which is typescript: title typescript, description whatever the description is, and so on. This is how I would cache data; of course this set would happen via a Node.js process, not in a terminal, but this is how I would use Redis as a cache.

How can I get data back? get tracks, and it gives me back the current tracks; as you can see it was pretty snappy. I can also del tracks, and now if I try get tracks it says (nil), which means nothing got returned. That's the brief of setting data, probably the simplest thing in Redis: it lets you set, get and delete data, very similar to a database. Nothing new here, except this isn't SQL: there are no tables, there is no structure to your data. If you want, read through the video I mentioned; it's exactly what I'd call over-engineering. Just because Redis can be used as a database does not mean you should. I think that video is somewhat tongue-in-cheek too, from what I remember (I could be wrong), but the point is: if you really want to use Redis as a database you can, and you will see how complicated it gets. Feel free to look at that video.

One more thing I'll quickly introduce, and if you don't fully get it that's fine, sets and gets are good enough: hash set and hash get, for when you want to store multiple things under a single key. Say user:100, where name is John Doe, email is this, age is that. If you want a more SQL-like structure, where the user with ID 100 has these fields, and you want to associate multiple values with the same key, you use HSET: set, on the hash user:100, this name, this email, this age, and in the future maybe an address as well. A plain set can only store one thing per key; for tracks that's fine, because all we cache there is one long array of tracks for projects.100xdevs.com.
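The class does all of this through the CLI; purely for reference, here is a rough sketch of the same SET/GET/DEL and the hash commands from Node.js using the node-redis client. This is not code written in the lecture; the key names and the tracks payload are just placeholders.

import { createClient } from "redis";

async function main() {
  // connects to the local Redis we started in Docker (defaults to redis://localhost:6379)
  const client = createClient();
  await client.connect();

  // plain key-value, like `SET tracks "..."`, `GET tracks`, `DEL tracks` in the CLI
  await client.set("tracks", JSON.stringify([{ title: "typescript", description: "..." }]));
  console.log(await client.get("tracks")); // the stringified array
  await client.del("tracks");              // GET now returns null

  // hash commands, like `HSET user:100 name johndoe email john@example.com age 25`
  await client.hSet("user:100", { name: "johndoe", email: "john@example.com", age: "25" });
  console.log(await client.hGet("user:100", "name")); // "johndoe"
  console.log(await client.hGetAll("user:100"));      // the whole object

  await client.quit();
}

main().catch(console.error);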
So there we don't really need HSET, but if you want to use Redis more like a database, you will. If I copy the HSET command and paste it, it says (integer) 3: I've set three things, the name, the email and the age. Now if I do hget user:100 name it says johndoe; address doesn't exist yet; email works, and so on. Hash set basically means: create a hash map, an object, for this key, where name is johndoe, email is something, age is something. You can add more fields: hset user:100 city delhi, and now it has four keys, and I can do hget user:100 city, and so on. If you want to store complex data under one key you use HSET; for simple data you use SET.

Caching could also be per user. Say the website is built so that people who have paid see some tracks and people who have paid for DSA see the DSA tracks; then the cached data is user specific, and even then you don't need HSET: you can do set tracks:100 <data>, and whenever the user with ID 100 shows up you do get tracks:100. It might well happen that this website becomes user specific, where people who have paid see one thing and others see something else. So that's gets and sets. Not really needed today, but something you have to introduce when you're covering Redis. Do we understand gets and sets? We haven't done it from Node.js yet, but via the CLI we do. Very simple.

The next organic step would be doing it from Node.js, and it's very easy, but let's not do gets and sets from Node.js. Let's do queues, using both the CLI and Node.js, because if you understand how to use the queue from Node.js you can very easily figure out sets and gets from Node.js as well. So let's understand Redis as a queue. Let's build a system like this: a browser sends a request to a primary backend, the backend puts it in a queue, and workers pick it up from the queue. We have to write, number one, the primary backend in Express, and number two, the worker code. That's it; we can send the request from Postman, so we don't need any client-side code, but we do have to write two Node.js servers: a primary backend that pushes to a Redis queue, and a Node.js worker that pops from the Redis queue and processes the thing.

Redis has a queue: one process can push onto a queue in Redis and other processes can pop from it. That is exactly what we want, one process pushing and other processes popping. A good example is LeetCode submissions, which we need to process asynchronously. Let's first understand how you push and pop using the CLI; once that's clear, in the next slide we will actually build this and write the Express and worker code. The way to push to a queue in Redis is lpush: here the queue is called problems and I'm going to push, let's say, 1 onto it. Of course you wouldn't push just 1 in reality; when the user submits, you would put all of their data in the queue, because you want the final worker to know problem ID 1, this code, submitted by this user, and so on. So it won't be as simple as this, but just to start, let's understand what LPUSH means.
What does the L mean? It means push from the left side: as you push more and more things, they get added on the left. Should we use RPUSH instead, so the first thing lands first? It doesn't matter. The main point is: if you're pushing from one side, you need to pop from the other side. If you push from the left, make sure you pop from the right; otherwise it becomes a stack. So if you do an LPUSH, do an RPOP; if you do an LPUSH and then an LPOP, you have a stack, not a queue. If this is my queue and I push 1 from the left, then 2 from the left, then 3 from the left, where should I pop from? Not LPOP; RPOP, because I want 1 to be picked up first, then 2, then 3. This always gets confusing, so remember: to make a queue, push from the left and pop from the right (or push from the right and pop from the left); to make a stack, LPUSH and LPOP.

Let's see this in action. Forget the database part, forget setting data; that's separate and doesn't conflict with this. This is queues: lpush problems 1. Now, 1 is a very weird payload to put here. What should we ideally push? Something like a JSON object: problem ID 1, user ID 2, and so on, a whole bunch of data. That is what you should ideally push. Even though it looks like an object, it's actually a string; the Node.js process will be able to parse it as JSON later on, but Redis doesn't store JSON here, you have to store a string. So we push a JSON.stringify'd string.
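Since list entries are plain strings, the payload gets stringified on the way in and parsed on the way out. A tiny sketch of that round trip from Node (the field names are just examples; this assumes an async context and a connected node-redis client called client):

// producer side: stringify the submission before pushing it
const payload = JSON.stringify({ problemId: 1, userId: 2, code: "...", language: "cpp" });
await client.lPush("problems", payload);

// consumer side: RPOP gives the string back, JSON.parse recovers the object
const raw = await client.rPop("problems");
if (raw) {
  const submission = JSON.parse(raw);
  console.log(submission.problemId); // 1
}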
I have pushed twice. How can I pop now? In the real world the worker will pop, but just to try it out here: rpop problems (oh, sorry, I exited out of the CLI) rpop problems, and I get 1. One was the first thing I pushed, and one is the first thing I popped, very similar to what I showed in the Excalidraw diagram: if 1 was the first thing pushed, 1 should be the first thing popped. What was the second thing I pushed? That JSON payload, and if I do rpop problems again I see the problem ID and the rest of it. If I do it again, it says (nil): the queue is currently empty. That is how you push and pop. Do we understand push and pop via the CLI? If we do, we can move on to the Node.js code.

There's one more thing called BLPOP, sorry, BRPOP, a blocking pop, but only move on to it once the basic push and pop make sense. We do understand it, so the last thing left is the blocking pop. A lot of times you want to say: block this thread until something arrives on the problems queue. Let me open another terminal, run docker ps, then docker exec -it <container id> /bin/bash, and (what am I doing) redis-cli. I now have two Redis clients: I will produce from this one and consume on that one. On the consumer I do a brpop (I repeat, not LPOP; since we pushed from the left it has to be the blocking right pop) brpop problems, and from the other terminal I do lpush problems 1. Oh, I also have to give brpop a second argument; I'll explain what it is in a second, for now let's just pass 0. If I press enter in the left terminal, you'll see it is blocked. It doesn't let me do anything; it's saying "I am blocked until someone pushes onto this queue." And if I push onto the queue from the other side, you'll see the blocking goes away.

So what does brpop problems 0 do? The 0 means: I will stay blocked for an infinite amount of time, until someone comes and pushes something; until then I just wait. What if I make it 5? It means I'll wait for 5 seconds: 1, 2, 3, 4, 5, nothing came, it exits and says sorry, nothing arrived, 5 seconds passed. Usually you pass 0 here, and we will pass 0 in our final process: whenever something gets pushed, the blocking goes away and you get the item. That is what BRPOP is. Hoping it's straightforward; not going to do a poll now.

Let's move on to implementing the first part of LeetCode in Node.js. If we implement this I think we're good for today; I don't think we'll manage more. What's left is pub/subs; I'll cover pub/subs and the Node.js pub/sub code in an offline video. What we're covering in the live class is this first chunk; what we'll cover offline is the second part, and a lot of that second WebSocket part was left as an assignment anyway, so I had to do an offline video for it either way.

So let's talk to Redis from Node.js: the same interactions we just did via the CLI, but from code. Very simple, you use the redis library. What is the redis library? Well, what is the express library? It lets you create HTTP servers. What is the axios library? It lets you talk to an HTTP endpoint. What is the pg library? It lets you talk to a Postgres database. So
what is the redis library? It lets you talk to a Redis database. Let's write the code. I was going to write just the primary backend that pushes onto a queue and leave the worker code for an offline video, but let's just do both of them: the primary backend and the worker. We'll push to the queue, pop from the queue, and see whether it works.

Create an empty Node.js folder, and inside it create two more folders. I repeat: a single empty folder, inside which you create two folders, one called express-server and one called worker. Let me do a poll, because this is very simple code: I already have the code locally, should I just explain it, yes or no? If you select no, we will write the code together from scratch. Yes and no are in very close competition, it's almost 50/50, so we'll write it.

Create an empty folder, call it week-19-leetcode-clone or whatever you like, and inside it create two folders: call one worker and one express-backend. Open it in Visual Studio Code. You can name them something else; the names don't really matter. I'll give you maybe 15 seconds: two empty folders, nothing fancy, opened in Visual Studio Code. Ten more seconds, then I'll proceed.

Next, initialize an empty package.json and a tsconfig.json. We've done this many times, so I won't walk through it again. Go to the first folder, express-backend, and run npm init -y and npx tsc --init. Go to the other folder and run the same two commands. If npx tsc --init doesn't work for you, you can copy a tsconfig.json from a random TypeScript project on the 100xdevs GitHub. Then go to the tsconfig and change rootDir to ./src and outDir to ./dist. Done this many times, hoping it's straightforward; going to wait 10 more seconds. So: those two commands in both folders, both folders now have a package.json and a tsconfig.json, and I changed rootDir and outDir in both. What's the next step? Create src/index.ts
in both places: src/index.ts in the express-backend and src/index.ts in the worker. All the steps are on the slide as well; you can ignore the later ones for now, just create src/index.ts in both folders. Going to wait 20 seconds and then proceed. This is what my folder structure looks like right now; yours should look the same. If you want, feel free to install the dependencies as well, just make sure you're installing the right dependencies in the right folder: only the express-backend needs Express, the worker does not. Five more seconds.

Now let's install the dependencies. The worker needs a single dependency. What does the worker need to do? It just needs to pull something from Redis and do a fake processing step on it. We are not writing the code to actually run the end user's code; that will come once we reach Kubernetes. We're just writing the code to pick an item, run a fake one-second "processing" timer, and eventually send back "accepted" to the user. So the worker needs only one dependency, the Redis client: in the worker folder, npm install redis. The primary backend needs to do two things: expose an HTTP endpoint, which means it needs Express, and push to a queue, which means it needs Redis. So in the express-backend: npm install express @types/express redis.

Now comes the part where we write code. The backend needs a single endpoint. You can also copy this code, but I'm going to write it from scratch: import express from "express" and import { createClient } from "redis". createClient does, well, what the name suggests: const client = createClient(), and then on this client you can call client.lPush or client.rPop. It's pretty much that easy. There's one thing I'm missing: you also have to call client.connect(). Ideally you should await it, but you can't await at the top level here, which is why the final code in the slides is a little cleaner: it has a startServer function that first awaits client.connect(), makes sure the Redis client has actually connected, and only then listens on the Express port. That is the right way to start the Express server: first make sure you are actually connected to Redis, otherwise someone might send a request while you're not connected and that becomes a problem. For now I'm going to wing it and hope client.connect() finishes before a request arrives on the POST /submit route.

Let me write the rest: const app = express(), app.use(express.json()), then app.post("/submit", ...), and from req.body pull out problemId, userId, code, and maybe language, all the things the user will send. What do I do with it? Ideally I should also write it to a database, something like prisma.submission.create, but we don't have a Prisma schema yet, so I'm skipping that; in a real system you'd store it in the database as well. And what Redis call do we have to make? client.lPush("submissions", ...),
an lPush onto the submissions queue. What does the first argument mean? If you remember from a few minutes ago, when we pushed from the CLI I did lpush problems and then gave it some payload. So the first argument is the name of the queue, and the second argument is the payload: I've taken everything the user sent and dumped it in there as a string. Then res.json({ message: "Submission received" }). This guy is now publishing to the queue. We haven't written any worker code, nobody is consuming from the queue, but at least the publishing side is done, and as you can see it's less than 18 lines of code. The version in the slides is slightly longer because it has the startServer logic to make sure the client connects first; it looks fancier the way I've extracted things there, but it's pretty much the same code.

How can I confirm it works? I'll do a tsc -b to compile my TypeScript into JavaScript (you can also copy the slide code, paste it and compile it, both versions do the same thing) and then node dist/index.js to start the Express server. Wait, how would it even start, considering I haven't called app.listen? Let me add app.listen(3000); I forgot that. Compile again, and now I'm listening on port 3000. Let me send a POST to /submit via Postman: localhost:3000/submit with the body fields. If you copied the slide code the route is the same but the argument names are a little different, so make sure you send the right ones; they're actually pretty similar, user ID is the one that's missing here. Send, and it says "submission received", which means control reached the response line, which most probably means the push succeeded.

One catch: lPush is an asynchronous operation, so you should ideally await it. Let me check; yes, you can await it, I just couldn't because the handler wasn't async. Make the handler async and await the lPush. If you don't await it, it might happen that nothing reached the queue and you still told the user "submission received". Ideally you'd also wrap it in a try/catch: only if it succeeded do you send "submission received", otherwise you send "submission failed". You can make it as fancy as you want. Either way, if I restart it, compile, run node dist, go to Postman and send data, it says submission received.

The real question is: is it actually pushing something to the submissions queue? How can I confirm that every time Postman hits the endpoint, something reaches the queue? I'll go to my terminal, docker exec into the container, run redis-cli, and do rpop submissions (submissions is the key I was lPush-ing to, so that's where I pop from), and I see problemId 1, userId 1, code something, language something. Pop again and I see the next one. Now let me clear the queue and do a blocking pop: brpop submissions 0. It will remain blocked until something arrives, and it stays blocked until I send a
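Before moving on to the worker, here is roughly what the primary backend we just walked through looks like end to end. This is a sketch reconstructed from the walkthrough rather than the exact slide code: the awaited lPush and the try/catch are the improvements mentioned above, the Prisma write is still left out, and field names may differ slightly from the slides.

import express from "express";
import { createClient } from "redis";

const app = express();
app.use(express.json());

const client = createClient(); // defaults to redis://localhost:6379

app.post("/submit", async (req, res) => {
  const { problemId, userId, code, language } = req.body;
  try {
    // push the whole payload onto the "submissions" queue as a string
    await client.lPush("submissions", JSON.stringify({ problemId, userId, code, language }));
    res.json({ message: "Submission received" });
  } catch (e) {
    res.status(500).json({ message: "Submission failed" });
  }
});

async function startServer() {
  // make sure we are actually connected to Redis before accepting requests
  await client.connect();
  app.listen(3000, () => console.log("Primary backend listening on port 3000"));
}

startServer();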
request from Postman. It is blocked right now, as you can see; it has not exited. If I send the request from Postman, you'll see the submission show up there. So we are able to populate the Redis queue correctly; we've confirmed that. The worker logic is pending, but do we understand this part, sending data to the Redis queue? I'm assuming it's pretty straightforward. Great, so let's write the worker code.

The worker is as simple as: go to the worker folder, npm install redis, open the worker's index.ts. This guy also needs to create a client from redis; that's the first thing. But after it has created the client and connected, what does it have to do? It needs to keep doing something again and again, pretty much until infinity, until you stop the process: it keeps blocking and waiting. What do these workers do when you start them? They just poll the queue: do you have something, do you have something, you have something, give it to me, I'll run it and then do whatever comes afterwards, send the result to a pub/sub. They keep polling the queue forever, which is why you can put a while (true) here; an infinite for loop works too, it doesn't matter.

You probably also want to await the client.connect() here, because otherwise the while loop would immediately do a client.rPop on the submissions queue before you're connected, and you don't want that. Then const response = await client.rPop(...), and since I'm doing a bunch of awaits I need to wrap all of this in an async function, so let me do that and call it main. What have I done? Nothing too complicated: I call a main function which is asynchronous; inside it I first await client.connect(), and once the Redis client is connected I get the response in a loop.

This is where I would actually run the user's code. We don't have any of that logic yet and we're not writing it today, even though a janky version is not far off: you could do a docker exec here and just run the user's code in a Docker container. We're not going to do that. Instead we'll write await new Promise((resolve) => setTimeout(resolve, 1000)). What does this weird line do? It just waits for one second: await a new promise that will be resolved after one second. It's a slightly convoluted syntax; paste it into ChatGPT and it will explain it. It's simulating an expensive operation, awaiting something that takes more than a second without blocking the thread. Your thread isn't blocked; it could still do other things, there just isn't anything else in this application. You get the idea: we're simulating running the user's code. After that we should ideally publish the result to the pub/sub (we'll cover that in the offline video), and then log "processed user's submission". That should be pretty much it.

The final code in the slides: startWorker does a client.connect(), logs "worker connected to redis", and then in a while (true) loop pops from the submissions queue. Oh, I need to pass another argument here, the timeout, so it's the queue name and then zero: client.
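And the worker, reconstructed the same way: a sketch of what the walkthrough describes, not the exact slide code. The one-second timeout stands in for actually running the user's code, and it assumes node-redis's brPop resolves to a { key, element } pair (or null on timeout).

import { createClient } from "redis";

const client = createClient();

async function processSubmission(submission: string) {
  const { problemId, userId } = JSON.parse(submission);
  // simulate an expensive operation; actually running the code comes later, with Docker/Kubernetes
  await new Promise((resolve) => setTimeout(resolve, 1000));
  console.log(`Processed submission for problem ${problemId} from user ${userId}`);
}

async function startWorker() {
  await client.connect();
  console.log("Worker connected to Redis");
  while (true) {
    // block until something shows up on the "submissions" queue (0 = wait forever)
    const response = await client.brPop("submissions", 0);
    if (response) {
      console.log(response.element); // see exactly what was put on the queue
      await processSubmission(response.element);
    }
  }
}

startWorker();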
brPop. Yes, let's do a brPop: wait here until we actually get back a response. If we don't use brPop, this response will be null from time to time and we'll just keep spinning through the loop; with brPop we still pop from the right but stay blocked until we get something. Then processSubmission does the same thing and extracts the data, and you can also console.log the response to see the thing that was actually put in the queue; let's log it so we can see what it looks like.

Let's compile the worker code and run it. I'm going to run the worker more than once; let me run it four times, because in the real world you will have multiple workers, since your site could have a hundred users. Let me open the terminal, split it into four panes, cd into the worker folder in each and run node dist/index.js four times. I'm starting four workers on my machine, and all four of them are now waiting: when will something come into the queue? Let me put something in the queue. I'll start the Express backend as well, node dist/index.js, and start sending Postman requests. Four workers on the left, Postman on the right: as you can see, the submissions are getting picked up by random workers as time goes by, and they're processing all of them.

What if all the workers are down? It's fine, the submissions just keep piling up in the queue. More and more people keep sending submissions and they wait; your workers are down, but you can always bring workers back up and they'll start processing again. That's the benefit of such an asynchronous system: if the workers are ever down, bring them back up, they start pulling from the queue, and as I keep resending, these guys keep picking items up.

That is how you build the first part of what we set out to build today, this architecture right here. What's left, the second architecture, I will cover in an offline video: what a pub/sub is, how a worker can publish to a pub/sub, how the WebSocket server picks it up from there. All right, I think that will be it, guys. Let me do a poll, did you understand everything, and then we'll open it up for questions. Feel free to drop off, the class has officially ended; thank you everyone for joining, and if you have no questions I'll see you maybe next week. Next week might be a holiday; we'll do a poll, and if people want a break before the 1-to-100 content we'll take a holiday, otherwise we'll continue.

All right, questions. "Recap please." Let me do a quick recap, not a very deep one, because 92-93% said they understood, so I'm assuming it's pretty simple. What we learned today was queues, publishers and subscribers, and Redis. Why did we introduce Redis? Because Redis lets you create queues and Redis also lets you create pub/subs. So what are queues and what are pub/subs? A queue, as the name suggests, is a place where one Node.js or Golang or Rust process can push something, and another process (again Node.js, Golang, doesn't matter) can
pick things from it. This message queue architecture is very popular when you're building a system like LeetCode. "Harkirat, why is it popular for LeetCode?" Because on LeetCode you're letting the end users run an expensive operation on your compute, meaning your machines: you're letting the end user run C++ code on your worker. When you do that, you want to make sure you don't run it on your primary backend. You also have only a limited number of workers, three or four; if a lot of people send submissions I can't simply tell the workers "do this, do this, do this". I can only push to the queue and hope the workers eventually autoscale: if the queue gets very long, they become twenty workers and start picking items up and processing them. That is one use case of message queues.

The other thing we learned is pub/subs. What if a worker that has processed the user's code wants to tell the end user "your submission was accepted" or "rejected"? How can it do that? The worker cannot directly connect to the browser; the worker is very protected, sitting somewhere. When we actually run this, I might be buying three machines and running the workers on them; why would I rent a worker on AWS when I know it needs to run all day and those three are good enough? So these workers can't directly connect to the browser; they might be running in someone's home, not necessarily on any AWS machine. What they can do is publish to a pub/sub that is running on the internet, on EC2: "please tell user ID 1 that this happened." And the WebSocket server that is connected to user 1 (if the user is on the website at all, or more specifically if they're waiting on the submission page wondering whether their submission was accepted or rejected) has a persistent connection to that user, so the worker tells the WebSocket server via the pub/sub: user ID 1, send them this. That is the general architecture we learned today, plus a little bit of the implementation.

"Can we implement the Redis push in Next.js as well? I want to optimize my API project." You can, as long as you are doing it on the server. Just don't do it in a client component, it will not work there; in a server component it'll be fine.

"Is a Redis queue equal to a messaging queue?" That's a good question, and I'm being very cautious with the words queue, messaging queue, and message broker. I would assume "queue" and "message queue" can be used interchangeably; message queues and message brokers are different things. Could I be wrong? Yes, so please Google this, don't take my word on it.
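Going back to the pub/sub half of that recap: it is being left for the offline video, but purely as a preview, this is roughly what the publish/subscribe pair could look like with node-redis. The channel name and the payload are made up for illustration, and this is not code written in the class.

import { createClient } from "redis";

// worker side (sketch): publish the result for a specific user on a channel
async function publishResult() {
  const publisher = createClient();
  await publisher.connect();
  await publisher.publish("submission_results", JSON.stringify({ userId: "1", status: "ACCEPTED" }));
}

// WebSocket-server side (sketch): a separate, dedicated connection subscribes to the channel
async function listenForResults() {
  const subscriber = createClient();
  await subscriber.connect();
  await subscriber.subscribe("submission_results", (message) => {
    const { userId, status } = JSON.parse(message);
    // look up userId's open WebSocket connection, if any, and forward the status to the browser
    console.log(`would notify user ${userId}: ${status}`);
  });
}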
"All the workers are taking all the items in the queue, right? Only one should take each item, right?" No, every worker is not taking every item; only one worker takes a given item. If you look at the code, when one worker does the BRPOP, the other worker will pop a different item. Let me open the terminal; I have three workers running, actually, even better, let's run two. With two workers running, if I go to Postman and send one request, only one of them picks it up, not both. If I send again, again only one picks it up. So Redis makes sure only one worker is picking up a given item at a time.

It's similar to opening two terminals and running brpop submissions in both: whichever one got there first snags the item off the queue, and the other one keeps waiting. They are not both going to get the same item. A lot of people are asking this same question, so again: Redis makes sure only one worker is able to pop a given item; as soon as one pops it, the other will not get the same thing. (Marking the repeated questions in chat as answered; "worker code repeat", I'll get to that; "break please", noted.)

"Are there limits to how much data the Redis cache can store?" There should be, I'm sure there's a limit; let me Google it. Redis can handle up to 2^32 keys and was tested in practice to handle at least 250 million keys per instance, and every hash or list can itself store up to that many elements. So your answer is right there. You should learn to Google, by the way; most things you ask me that I don't know the answer to, I will Google in front of you.

"Why a Redis queue? Why can't we create a queue the way we do in C++ DSA problems?" Good question: we come from the DSA world, why are you doing this? Here's why. This is an EC2 machine where you're running a Node.js process, and it is pushing to a queue that lives in a different process, maybe on a different EC2 machine. The queue in a C++ DSA problem is in memory; it's a variable inside your process, something like a local q = empty queue. How will another Node.js process running somewhere else get access to that queue? It cannot; it's in the first process's memory. (Technically there are ways on the same machine, but not across a different machine.) That is why you have a second process whose whole job is to maintain the queue, so that the worker can pull things from it. When you're solving DSA problems you have a single C++ process, so an in-memory queue is fine; you don't need another worker process to access it. Here we do, so you can't keep it in memory; you store it someplace else, specifically in a messaging queue. Hopefully that answers the question.

"Worker code repeat." A lot of people were asking, or one person again and again, I don't know which, so let me repeat the worker code one more time. As you can see, thankfully it's not a lot of lines: we create the client... actually, okay, I get why it might look jarring, let me see if I can make it slightly better. I can make it better like this: there you go, this is as good as it gets. I just call a main function and inside it I run all of this logic. "Harkirat, why can't you just write the awaits at the top level?" I cannot,
because you cannot have an await at the top level here, and hence I copied all of this logic, put it in a main function, and then called the main function. Inside it I create a client, a Redis connection, connect to it, and then infinitely run a loop: is there any response? As soon as I get a response I just log it for now (eventually you'd actually execute the code), I wait for one second (if you don't understand that line, remove it, it doesn't matter; eventually you'll have some Docker orchestration there), and then I log something on screen. That is the worker code; it's actually pretty straightforward. The chat went away somewhere, let me open it one more time. There were a few Notion docs too, I'll look at them very soon. Any more questions?

"If you integrate Redis with a Docker volume, will the data be lost if something goes down?" No. As long as you're mounting the place where the append-only log and snapshots are stored, you'll be fine; 99% you can persist data if the volume is mounted to the right place, meaning the place where the snapshots are stored.

"How does Redis make sure that only one worker will pop the task?" Do you remember when I said Redis is single-threaded? That means wherever Redis is running (on my Mac, inside Docker) it runs on a single core. If two clients do a BRPOP, Redis can only process one request at a time; it is not running on multiple cores. It will serve one, and if it returned the item to that one, the second will not get it. It cannot handle both requests in parallel, because it runs on a single thread. (Am I sharing my screen? I am not.) I was saying: if this is my Mac machine running Redis and two clients have done a BRPOP, Redis cannot process both requests together; it's flipping between them, and when something gets pushed it hands it to one of them, not both.

"If Redis runs on a single thread, won't it be slow?" No. Thankfully, even though Redis runs on a single thread, it is incredibly fast, about as fast as it gets, so that's never a problem. (And someone says the chat app project looks pretty good, thanks.)

"What will happen if, after we pop, our worker goes down?" That's a good question; that's where you do acknowledgements. The scenario is: you popped an item, the queue became empty, and then you weren't able to process the user's submission because the Node.js process died. You can add an acknowledgement to a queue. I'm pretty sure Redis lets you do something like this; let me search "redis queue ack nack". There is a way to do this, I think, natively in Redis, and if not, you can implement it yourself from first principles.
I'm happy to Google it after the class and let you know, but here's the first-principles version. This is your queue; what can you do? I have a few ideas in my head, and all of them have at least one case where they break; if this question were asked in a system design interview I would probably have failed it. The core idea: whatever queue you have, if there is a Node.js process picking from it, that process can say "I am taking this item, and if I don't respond back within the next 5 seconds, if I don't tell you I am done, then please push this item onto the queue again." In other words, have some sort of acknowledgement on every item: whoever picks it up is responsible for acknowledging within 5 seconds that the submission was processed, and if they don't acknowledge, the queue assumes it failed and adds it back, or marks it as not processed, so someone else can pick it up.

How exactly do you do that in Redis? I thought there was some argument BRPOP accepts, or some kind of "BRPOP with an ack" you do at the end; not exactly that, but I have a faint memory that something like this exists in Redis. Does someone in chat already know? No one knows, cool. I'll answer this question later; I might forget, but I'll try to look up how to do acks in Redis queues. That was a good question.
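Just to make that first-principles idea concrete, here is my sketch of what was described, not something built in class. It assumes a connected node-redis client called client and the processSubmission helper from the worker sketch above, and it has exactly the kind of gap mentioned: if the worker dies between the pop and the bookkeeping write, the item is still lost. (For what it's worth, the usual native answer in Redis is Streams with consumer groups and XACK, but that's beyond today's scope.)

// worker side: pop, record what we're working on, process, then "acknowledge" by deleting the record
const response = await client.brPop("submissions", 0);
if (response) {
  const id = `${Date.now()}-${Math.random()}`; // a unique-ish id just for bookkeeping
  await client.hSet("processing", id, JSON.stringify({ item: response.element, poppedAt: Date.now() }));
  await processSubmission(response.element);
  await client.hDel("processing", id); // the acknowledgement
}

// elsewhere, a sweeper periodically re-queues anything stuck in "processing" for more than 5 seconds
const inFlight = await client.hGetAll("processing");
for (const [id, raw] of Object.entries(inFlight)) {
  const { item, poppedAt } = JSON.parse(raw);
  if (Date.now() - poppedAt > 5000) {
    await client.lPush("submissions", item); // put it back for another worker
    await client.hDel("processing", id);
  }
}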
"Why can WebSockets be both asynchronous and synchronous?" It depends on your definition. If your definition of asynchronous is "I send something, and the other side might pick it up in its own time," then it's not asynchronous: when you send data over a WebSocket it goes immediately, which is what makes it feel synchronous. But if your definition of asynchronous is "when I send data I don't wait for an acknowledgement, I don't expect the other side to send something back," then WebSockets are asynchronous. So it really comes down to the definition, and if you read online, most places will say WebSockets are asynchronous, because you don't wait for a response; you send it and the other side takes care of it. I was personally arguing it's arguably synchronous, but if someone asks you in an interview, say asynchronous.

"Suppose we clear Redis and then update Postgres. What if the backend sends a request to Redis and Redis gets refreshed before the put operation in Postgres?" What does "refreshed" mean? If you mean the data got cleared, you're fine: Redis is cleared, Redis has no data, and whenever the next get request comes it won't find anything in Redis and will fetch the latest data from Postgres. If by "refreshed" you mean Redis went down, then the answer is what I told you earlier: when a write hits the backend, you first try to clear Redis; if that fails, you tell the end user the request has failed and they cannot add the track. Only when the clear succeeds does the update happen, and then you tell the user they're good to go. If only the clear happens and nothing else, that's fine: Redis got cleared, and the next get request will repopulate it. The problem arises when you write data to both places; if one of the two writes fails, you have an inconsistency. Hopefully that's clear.

"How is the Express backend pushing to the Redis queue when we haven't provided any URL?" Good question. You can provide a Redis URL to createClient; in our case it just picks the default and connects to localhost. If you ever get a cloud Redis URL (for example, if you go to console.aiven.io you can create a free Redis service), you'll have a URL, and there's a way to pass it to createClient. It might be a bit more involved than one string, with a username (mine here is default), a password, and so on, but you pass all of that in and that is how your client connects to the cloud Redis instance. Good question.

"What if we run container 1 bound to a volume for project one, and run container 2 with the same volume config?" If you mean mounting the data directory of two containers to the same volume: they will just stay in sync. Data you put in through the first container will appear in the second, and so on.

"Before you start teaching, cross-check the layout." It always gets fixed when I re-share, makes sense. "Please share the Web3 code afterwards; I want to join but can't cope with both." We will see; I'm unsure when that will happen. All right, I refreshed the page, I don't see any changes... oh no, I see more changes.

"I used Postman to send a submission, why can't I see it in redis-cli?" You're popping from problems; are you sure you're pushing to problems as well? When you do the lPush, are you pushing to problems or to submissions? That mismatch is the most likely issue; if you still see a problem, log the response and debug from there.

"So we use queues to hold work, and whenever a worker is free it picks up a task from the queue?" Correct. "And Redis is for caching data to make the system optimal, so it doesn't make duplicate requests?" Right: if this is your Postgres database, caching some of the reads in Redis means you don't overwhelm your database with reads. It's not useless; it's protecting your database from a lot of reads,
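Going back to the connection-URL question a couple of answers up: the node-redis createClient accepts a url option, so pointing at a managed instance instead of localhost looks roughly like this. The host, port and password are placeholders, not real values.

const client = createClient({
  // a cloud instance is typically something like rediss://<user>:<password>@<host>:<port>
  url: "rediss://default:YOUR_PASSWORD@your-redis-host.example.com:12345",
});
await client.connect();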
and Redis also has the queue thing, which lets multiple incoming requests wait and be processed accordingly; yes, it has queues too. "Can you elaborate: whenever a client on the website makes a get request, the primary backend makes a request to the database, sends the result to the user, and also pushes that data into Redis, so when I make the same request and the DB didn't change, it comes from Redis?" Yes, but you usually add an expiry: you make sure the data is cached for, say, 10 minutes, or you write your code carefully so that whenever there's an update you clear Redis as well. Ideally you just add expiries; it's fine if one request actually hits the table every 20 or 30 minutes. So there is some expiry on the cached data.

"Isn't AWS SQS the same as Redis?" Is SQS, the Simple Queue Service, the same as a Redis queue? Redis gives you a database, a pub/sub, and more; SQS is simply a queue. "Is SNS similar to a messaging queue, I guess?" No. Even though I've not used it, my guess is SNS is more similar to a pub/sub: it's a notification system. SQS is the Simple Queue Service, SNS the Simple Notification Service. SQS is used for maintaining a queue; SNS is, I think, "there is a topic, I will notify on a topic." Again, I haven't used it, so don't quote me on this, do Google it, but my guess is you put something on a topic and other processes consume from that topic. So SNS is probably closer to a pub/sub and SQS closer to a queue, and both of those concepts we've just discussed in Redis. All right.
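Circling back to the expiry point from that answer: when caching reads you usually attach a TTL so a stale entry falls out on its own. With node-redis that is roughly the following; the 600-second TTL and the tracks variable are just illustrative.

const tracks = [{ title: "typescript", description: "..." }]; // whatever you fetched from Postgres
// cache for 10 minutes; after that the next read misses and you refill from the database
await client.set("tracks", JSON.stringify(tracks), { EX: 600 });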
Info
Channel: Harkirat Singh
Views: 71,313
Id: IJkYipYNEtI
Length: 263min 31sec (15811 seconds)
Published: Wed Apr 10 2024