Event-Driven Programming with Go and Kafka

Captions
Hey guys, I've been working on this video for a while, because it's very hard to make a video on Golang and Kafka: if someone who doesn't really understand Kafka, or why it's used, ends up watching this video, they'll be completely confused a few minutes in. So before I can show you how to work with Kafka in Golang, I first need to explain what Kafka really does and why it's used. In this video I'll try to cover Kafka in as much detail as possible, and then we'll look at the best ways to work with Kafka in Golang. If you already know Kafka really well, I'd still recommend you watch the first few minutes of this video, because you'll get a solid recap of how Kafka works.

Just to tell you, this video is part of my 53 Golang projects series, which will become 54 now, since I'll add this video to it. The idea is to learn Golang by building stuff, so I've created 54 real-world projects you can build, arranged in increasing level of difficulty. Make sure you build as many as possible and share this with your friends, because all of this content is free, and don't forget to subscribe, because I'm always building awesome stuff. Now, back to the video.

I've been teaching Kafka and event-driven architecture for a long time, and I wanted this video to be the best on the internet for this. I know that the best way to explain anything is with stories, so I'm going to attempt to explain everything with a story.

It's 2024. You're a software engineer, and you get an idea for a new web app, like all software engineers always do. You know that everyone and their mom is building something in AI, so you decide to build an AI-driven application, and you end up building some nice software for it. You start by building a monolith, and much to your surprise it actually takes off, and a lot of people start using it. But you soon realize that a small issue anywhere in the application leads to the entire application going down. You forget about it, though, because you get a little bit of revenue, and you decide to start your own company. Two more developers, who were your colleagues earlier, end up joining you, and you all work on the same product together.

You then realize there's not enough separation of concerns between different team members. For example, if one developer wants to update and fix only one part of the software, he has to download the entire codebase onto his local machine, understand the entire thing, and then make changes. The software is also very tightly coupled, meaning you can't just change things in one module without affecting other modules. So you end up breaking the monolith into microservices. By the way, there's a lot more detail on the different architectures you can follow, and the actual mechanics of how this works, in my series, the Tech Architect course; don't forget to check that out.

After you've set up your microservices architecture, you need to think about how the microservices actually share information. When you start out, you usually don't realize how much data needs to be shared between the microservices, so you end up calling the APIs of one microservice from another. This works for some time; you scale to a few thousand users, have five or six microservices, and life's good. But soon things start breaking apart, because you just cannot have that many API calls between your microservices.
But you don't know that, and to fix it you end up adding more developers to your team. Surprise, surprise: this doesn't help. So you come up with a genius idea: why not make the front end smarter and keep the back end dumb, making the front end call each microservice, get the required data, and combine everything in the front end? This again works for a while, until the number of microservices grows to about 10 or 12, and then you start seeing a lot of latency in the front-end servers; they start loading slower. You notice that while you're putting a lot of load on your front-end servers, the world is actually moving to server-side rendering, and you wonder how they're even doing it.

So you talk to the engineers at some established startups, and you come to know that they're relying on scalable ways to ensure communication between their microservices: something called an event-driven mechanism, which enables your microservices to respond to events in real time. Now you want to understand what events are, what event-driven really means, and how this is a scalable approach.

An event records the fact that something happened in the world or in your business. An event has a key, a value, a timestamp, and optional metadata headers. Here's an example event: the event key is "alice", the event value is "made a payment of $200", and the event timestamp is June 25, 2020 at 2:06 p.m. This gives you a lot of freedom, because now one microservice can create an event and another can consume it. The consuming microservice will ideally trigger a response to this event and perform some operation, and this is a completely different way of working between microservices.
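To make that concrete, here's a minimal sketch of what such an event could look like as a Go struct. The field names are my own illustration, not something defined in the video:

```go
package main

import (
	"fmt"
	"time"
)

// Event is a hypothetical shape for an event: a key, a value,
// a timestamp, and optional metadata headers.
type Event struct {
	Key       string            // who or what the event is about
	Value     string            // what happened
	Timestamp time.Time         // when it happened
	Headers   map[string]string // optional metadata
}

func main() {
	e := Event{
		Key:       "alice",
		Value:     "made a payment of $200",
		Timestamp: time.Date(2020, time.June, 25, 14, 6, 0, 0, time.UTC),
	}
	fmt.Printf("%s %s at %s\n", e.Key, e.Value, e.Timestamp)
}
```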
Let's go through an example with an e-commerce product. Say you have three microservices: a users microservice that has the data for all the users, a products microservice that has the data for all the products on the platform, and an orders microservice with the data for all the orders placed by users on the platform, where each order has a list of products ordered by the user. Now, the orders microservice obviously needs the data of the user and of the products he's purchased. With the old approach, for the orders microservice to access the user's information and the product information, it would have to call the users and the products microservices every single time, and this would increase the calls between the microservices.

With event-driven architecture, there are events flowing through something called a stream, the microservices publish and subscribe to those events, and the platform can work in a completely different way. You can have events created for when a new user signs up, when a user is browsing specific products, when a user adds products to a cart, and all of these can be used to trigger different actions in any other microservice. Since the event-driven way gives us more freedom, we can change our architecture a bit. Say we now have a cart microservice, which holds user IDs and product IDs temporarily. This is read by a new service called the payments microservice. The payments microservice emits an event as soon as a payment is made by a user for particular products, the details of which came from an event sent by the cart microservice before the payment was made. This new event, with the user's ID, the product details, and the payment details, is posted by the payments microservice to the stream, where it is read by the invoice microservice and triggers an action to generate an invoice for the user. Now the invoice microservice publishes a new event object to the stream with the user's details, the product details, and the invoice ID, and this goes to the orders microservice.

This approach flows better, is more organized, and you can easily add more microservices without any issues: you simply have to define the events each one will publish or consume, and the actions that will take place based on those events. An important thing to note here is that all of this happens asynchronously, which means the producer of an event doesn't have to wait for the consumer to consume it. The producer's responsibility is just to produce the event; it can then forget about it, and the consumer can consume it when it's ready.

So now we know what events are and what an event-driven architecture is; we've seen how events flow through a stream and how all of this happens asynchronously. Now that you know all the jargon, it'll be quite simple for us to learn about Kafka. Kafka is basically the stream where all the events flow. On Kafka, microservices can publish events and microservices can subscribe to events; Kafka can store events for later retrieval, Kafka can replay events in the event of a failure, and Kafka can transform or process events. Publishers are sometimes called producers, and consumers are sometimes called subscribers; these are interchangeable terms.

Events are organized and durably stored in topics. A topic, as the name suggests, has events of a particular topic; an example topic name would be "payments", where all events related to payments are stored. Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to those events. Events in a topic can be read as often as needed; unlike traditional messaging systems, events are not deleted after consumption. Instead, you define for how long Kafka should retain your events through a per-topic configuration setting, after which old events are discarded. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.

When you run Kafka, it usually runs as a cluster, and each cluster consists of multiple brokers. A broker is basically a server, and we have multiple brokers in Kafka because Kafka is highly fault tolerant, meaning it's difficult for it to go down: having multiple brokers ensures high uptime, since even if one fails there are always others. This is essentially how distributed systems work; if you've heard of Apache Cassandra, that works like this too. Every broker in the cluster has metadata about all the other brokers and will help the client connect to them as well; the client in this case would be our microservice, which needs to connect to Kafka.

Now, since you have multiple brokers, and some of them might go down while new brokers join the network, the next important thing to know about is the Kafka ZooKeeper. The Kafka cluster is managed and coordinated by the Kafka brokers using ZooKeeper: when a Kafka cluster changes, ZooKeeper notifies all nodes, for instance when a new broker enters the cluster or a broker fails. Moreover, ZooKeeper facilitates leadership elections among broker and topic-partition pairings; it also aids in determining which broker will serve as the leader for each partition and which brokers have identical copies of the data.
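The video doesn't create topics programmatically, but as a hedged sketch of the topic ideas above, this is roughly how you could create a topic with a partition count and a per-topic retention setting using Sarama's cluster admin client; the broker address, topic name, and numbers are illustrative:

```go
package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V2_1_0_0 // admin requests need a recent protocol version

	// Any one broker is enough; it has metadata about the rest of the cluster.
	admin, err := sarama.NewClusterAdmin([]string{"localhost:29092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Retain events on this topic for 7 days (an illustrative value);
	// Kafka discards them only after this period, not on consumption.
	retentionMs := "604800000"
	err = admin.CreateTopic("payments", &sarama.TopicDetail{
		NumPartitions:     3, // up to 3 consumers can read this topic in parallel
		ReplicationFactor: 1, // how many brokers keep a copy of each partition
		ConfigEntries:     map[string]*string{"retention.ms": &retentionMs},
	}, false)
	if err != nil {
		log.Fatal(err)
	}
}
```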
To help Kafka scale, we use partitions. Topics are separated into partitions inside the Kafka cluster, and the partitions are replicated among brokers. A topic can be read in parallel by many consumers, one from each partition. Producers may also attach a key to a message, which directs all messages with an identical key to the same partition. Messages without keys are distributed round-robin across partitions, while messages that carry keys always land in the partition their key maps to, so using keys you can enforce the processing order of messages in Kafka that share the same key.
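Here's a short hedged sketch of that keying behaviour with Sarama, the library used later in this video; the topic name and helper are my own illustration:

```go
package kafka

import "github.com/IBM/sarama"

// newKeyedMessage is a hypothetical helper showing the keying behaviour:
// messages that share a key always land in the same partition, so all of
// one user's events keep their order. Omit Key, and messages are spread
// round-robin across partitions instead.
func newKeyedMessage(user, text string) *sarama.ProducerMessage {
	return &sarama.ProducerMessage{
		Topic: "payments",                 // illustrative topic name
		Key:   sarama.StringEncoder(user), // same key, same partition
		Value: sarama.StringEncoder(text),
	}
}
```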
Now that we know quite a bit about Kafka, we can start with our Golang Kafka project. We're going to create a producer and a consumer with Kafka. We'll begin by creating an API using Fiber and exposing it by starting a server, so that we can simply call the API, and this will produce an event from our producer; our consumer will also be a server, which will be able to consume the event. Creating an API makes it simpler for us to test producing and consuming these events.

Now, before we move on to the visual planning for the project, I just want to tell you about my Six AI Golang Projects advanced course. It's 26 hours of killer content where I share detailed planning exercises for each of the six projects, and we build real-world, production-quality stuff like a Terraform assistant, a Kubernetes copilot, a terminal AI assistant, and much more. If you're looking to get a great job in 2024, the main skill you'd need is to be able to build AI-powered software, and there's no resource like this on the internet. The link for this course will be in the description of this video; make sure you check it out.

This is what we're building: a producer, with Golang and Kafka, and a consumer. Let me take you through what we're doing with the producer. We're using Fiber to start a server on port 3000, creating an API group called /api/v1, and the route will be /comments, called with the POST method, which hits a function called createComment. That uses the Comment struct we create, which defines what a comment looks like. This function initializes the Comment struct and parses the request body (we'll send some request body with the comments we post), so it parses the request body into the Comment struct, then converts the comment into bytes and sends it to Kafka. We do that with the help of a function called PushCommentToQueue, which literally does what it says: it pushes the comment onto the queue. We define the brokers URL, because we know there can be multiple brokers, so we define which broker to use; we create a producer message, and then we call producer.SendMessage, which sends the message on behalf of the producer. We also call the ConnectProducer function in there, which creates a new sync producer for us. So this is the flow we'll follow for the producer.

For the consumer, what we're going to do is set the topic and call a function called connectConsumer, which calls sarama.NewConfig and sarama.NewConsumer (I'll talk about Sarama in just one second). Then we consume the partition; we talked about partitions already, so we'll be able to consume the errors and consume the messages, and then we'll create a channel to consume those messages and to also look for events if somebody's trying to stop the process, like a termination signal. So this is what we're doing in the consumer, and that's what we're doing in the producer.

Sarama, which I just showed you, is a Go client library for Apache Kafka. We'll run Apache Kafka in our Docker containers, and then we'll be able to connect with Kafka inside our Golang servers with the help of Sarama; that's why we're using it. It's widely used, very easy to use, you don't need a lot of hard work to use it, it's very stable, and you can work with it in production too.

So let's get started. I'm going to create a new directory called go-kafka-yt, cd into it, and run the go mod init command to create a mod file for me, then open this up in my VS Code. So I have the go.mod file here with me, and I'll create two folders: one is the producer, the other is the worker, or the consumer. In the producer folder I'm going to create a file called producer.go, and in the worker folder we'll create a file called worker.go.

Now, I'm building this project on my WSL Ubuntu, which is running on Windows, but I'll be running the project on my VirtualBox. The reason is that we require Docker to start a Kafka instance, and with WSL things can go wrong many times when you're running a big Docker container like the one that's required for Kafka; I've had multiple instances where the WSL has completely crashed. So what I'll do is build the project here, test it out, fix all the Golang-related issues, and then import everything into my VirtualBox, which is also running an Ubuntu instance; that's where I'll run Kafka and Docker, and then I'll run the project there. I suggest you do the same if you're using WSL, because Kafka can lead to problems.

I'll first start with package main and import; I'll import a few things that I need, but not yet. The first thing I want to do is create the Comment struct, so I'm going to say comment struct, and the comment is just going to have Text, which is of type string, tagged form "text" and JSON "text", so the name of the field is "text" in the form when I send it, and in the JSON it'll also be "text".

Now I'll create my main function, where I create my new server, or new application, with the help of Fiber. I'm using the Fiber library; I've already created a lot of projects around Fiber on my channel, and in case you haven't seen them, I highly recommend you check them out. I create the API group (we're just following the script right now, going through the diagrams I showed you), then I have an API called /comments, which calls a function named createComment, and I start my server on port 3000.
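Here's roughly where producer.go stands at this point, as a sketch assuming the Fiber v2 and IBM Sarama packages (the final code is on the video's GitHub; createComment is filled in next):

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

// Comment defines what a comment looks like: the field is named
// "text" both in the form body and in the JSON response.
type Comment struct {
	Text string `form:"text" json:"text"`
}

func main() {
	app := fiber.New()

	api := app.Group("/api/v1")          // the API group
	api.Post("/comments", createComment) // POST /api/v1/comments

	// Start the server on port 3000.
	log.Fatal(app.Listen(":3000"))
}
```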
Awesome. The first thing I'll do now is create this function called createComment. It takes the Fiber context and returns an error. I first initialize a new instance of the Comment struct, and I get that in a variable called cmt. I use BodyParser, and I'll print out any error here; the message will be the error, and we return the error. As you know if you've used Fiber before, the request body that you send to the API is available through the context, which is c here, because c is the Fiber context, and we convert that into the Comment struct: it converts whatever body I'm sending into the format of the Comment struct that I defined before, out here.

Then I use the json package to marshal the cmt I just created into cmtInBytes. I need the comment to be in bytes so that I can send it to Kafka, and to send it to Kafka I'll use this function called PushCommentToQueue, which I'm going to create in a while; what I send here is the comment in bytes. Then I say c.JSON with a fiber.Map: this is for when everything goes right, so I say success is true, meaning it happened, and after the comment has been pushed the message will be "Comment pushed successfully", with the comment being cmt. Then I handle the error: if err is not nil, c.Status, so if there is an error I want to return a 500 here, and with Fiber I'll just have to define what that message is going to look like.

Now, I'm not creating a production-level application here; as you can see, it's a small test application where I'm showing you how to work with Kafka and Golang. If I were creating a production-level application, I would have much better error handling. Right now it's just a little scrappy, I'm just creating these JSON messages for Fiber, but it gets the job done, and we return the error.
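Put together, createComment looks roughly like this; a sketch that belongs in the same file as above, with "encoding/json" added to the imports, and PushCommentToQueue still to be written:

```go
func createComment(c *fiber.Ctx) error {
	// Initialize a new Comment and parse the request body into it.
	cmt := new(Comment)
	if err := c.BodyParser(cmt); err != nil {
		log.Println(err)
		c.Status(400).JSON(&fiber.Map{
			"success": false,
			"message": err.Error(),
		})
		return err
	}

	// Convert the comment into bytes so it can be sent to Kafka.
	cmtInBytes, err := json.Marshal(cmt)
	if err != nil {
		return err
	}

	// Produce the event; PushCommentToQueue is defined further down.
	if err := PushCommentToQueue("comments", cmtInBytes); err != nil {
		// Scrappy error handling, as in the video: just return a 500.
		c.Status(500).JSON(&fiber.Map{
			"success": false,
			"message": "Error creating product",
		})
		return err
	}

	// Everything went right.
	return c.JSON(&fiber.Map{
		"success": true,
		"message": "Comment pushed successfully",
		"comment": cmt,
	})
}
```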
Okay, so that's all our producer.go file is supposed to do from a createComment perspective, but obviously, as you know, we're calling functions like PushCommentToQueue which we'll now have to create. So what I'll do now is go ahead and create this function called PushCommentToQueue: it takes a topic and the message, which will be in the format of bytes. The first thing you'll need is the brokers URL; I'm going to hard-code it here, but you could have a .env file if you wanted, localhost:29092, and these will be curly braces, not a bracket. I might make a lot of typos and mistakes while coding; don't worry, because I will run this code at the end and fix everything. This is how I usually work: I work like a painter, you know, you start with broad strokes in the beginning (if you've seen any Bob Ross videos you probably know what I'm talking about), and then you pick up the thin brush and make all the details. That's how we'll build it out here as well.

Okay, so I'll also leave room for this function called ConnectProducer, and the function actually doesn't do much; it just helps me connect with the producer, so I'm just going to go ahead and create it. It takes in the brokers URL and returns a Sarama SyncProducer (Sarama being the library I showed you; it's actually by IBM, I think) or an error. Then I say config := sarama.NewConfig(), config.Producer.Return.Successes = true, config.Producer.RequiredAcks = sarama.WaitForAll, and config.Producer.Retry.Max = 5. I'm just setting up all the values: how many times I want it to retry, and what kind of acknowledgements are required, all of those things.
Then I use sarama.NewSyncProducer, the function that's available to me in the Sarama library, and I just have to pass it the brokers URL and the config I just created out here with all these values that need to be set. Then I'll just handle the error that I might have here: if err is not nil, we return nil and the error; otherwise we return the connection.

Now, coming back to the PushCommentToQueue function. I've created my ConnectProducer here, and here too I'll just handle the error: if err is not nil, return the error. And finally I'm going to close the producer, so a Close at the end of this function. Then I create my message. The way the message looks is sarama.ProducerMessage; you get this struct right from the Sarama library, and that's the one we want. The producer message has a topic and it has a value, sarama.StringEncoder(message). Now that I've created the message, I just want to send it, and that's also available to me in the producer, so I say producer.SendMessage and send the message that I just crafted out here. I get back the partition and offset, or I might get back an error, and I handle the error: if err is not nil, return the error. Then I print out fmt.Printf("Message is stored in topic(%s)/partition(%d)/offset(%d)\n"), replacing those placeholders with the values of the topic, the partition, and the offset, and finally we return nil.
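Assembled, those two functions look something like this; a sketch completing the same producer.go, with "fmt" and "github.com/IBM/sarama" added to the imports. The retry count is the kind of value the video sets, not gospel:

```go
// ConnectProducer creates a synchronous Kafka producer.
func ConnectProducer(brokersUrl []string) (sarama.SyncProducer, error) {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true          // report successful deliveries
	config.Producer.RequiredAcks = sarama.WaitForAll // wait for all replicas to ack
	config.Producer.Retry.Max = 5                    // retry failed sends a few times

	conn, err := sarama.NewSyncProducer(brokersUrl, config)
	if err != nil {
		return nil, err
	}
	return conn, nil
}

// PushCommentToQueue publishes one message to the given topic.
func PushCommentToQueue(topic string, message []byte) error {
	// Hard-coded here; this could come from a .env file instead.
	brokersUrl := []string{"localhost:29092"}

	producer, err := ConnectProducer(brokersUrl)
	if err != nil {
		return err
	}
	defer producer.Close()

	msg := &sarama.ProducerMessage{
		Topic: topic,
		Value: sarama.StringEncoder(message),
	}

	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		return err
	}

	fmt.Printf("Message is stored in topic(%s)/partition(%d)/offset(%d)\n",
		topic, partition, offset)
	return nil
}
```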
So I'll quickly go through it again, just to be sure, just so that you're also on the same page. We create a struct called Comment, which has a field called Text, which is a string, and the tags mean the name of the field is "text" in both the form and the JSON. Sometimes in JSON it might be Text with a capital T while in the form it's something else; just clarifying that here it's going to be "text" in both places. This is my main function, where everything starts from. I create a new app from fiber.New, the function that helps me create a new app, almost like Express.js if you've worked with Express, and you get that in app. Then I've created an API group, created an API that works with the POST method, the route is /comments, I call the function createComment here, and the server is listening on port 3000.

Now, I call the function createComment, which then calls the function PushCommentToQueue, which then calls the function ConnectProducer, because to push a comment to the queue you obviously need the producer connection. So let's go through it again: createComment takes in the Fiber context and returns an error, initializes a new Comment struct, converts whatever body it's getting from the request into the Comment, and then gets it into bytes, which I will then need to call my PushCommentToQueue function, so I pass the bytes there. If everything happened perfectly fine, then I'm just going to return this nice message, "Comment pushed successfully"; if there was an error, then I'm going to say something like "error creating the product", and just return the errors.

Now, because we called this function PushCommentToQueue, let's check what that looks like. It takes in the topic and the message, I create the brokers URL, and then I call the function ConnectProducer to help me connect to the brokers URL, which is localhost:29092. If there was an error while calling this function, I return an error. I close the producer, or the connection to the producer, at the end of this function. I craft a nice little message based on the Sarama message format, which requires the topic and the value, and then I send that message. The way to send it is using the producer, because that is what holds the connection to the broker, so I use the producer and call the SendMessage function, sending the message I just crafted. I get back the partition and offset, which are the values I want to print out to the terminal, saying: hey, topic, partition, offset, this is where the message was stored. Return nil from here, or return the error from here if there was an error while calling the SendMessage function.

The ConnectProducer function just takes in the brokers URL and returns a SyncProducer or an error. I create a new config, set a couple of values for the config to work properly, then create a connection with the help of the NewSyncProducer function, which is available to me in Sarama, passing the brokers URL and the config I just created with all the values. If there was an error, I return the error; otherwise I return the connection I just created, which, as you can see, is what gets used in the producer. And that's how we did everything with the producer; that was it, in a nutshell. Now let's go to our worker.go file.
In the worker.go file we have package main again; this belongs to the same package. We import a couple of things, and here I'll have my func main. An important thing to remember is that we'll be running these two files on two different servers, so you don't need to worry about them clashing because they belong to the same package while not being in the same folder; they'll be running on two different servers, so you don't have to worry about that.

Here I'm going to say that my topic is "comments", which is where everything will be. Just like I had the ConnectProducer function, I'm going to have a connectConsumer function here, and I'm going to pass in localhost:29092, because that's where my broker is, and the worker is what I get back: just like when I called the ConnectProducer function and got back the producer, similarly I call the connectConsumer function here and get back the worker, or the consumer. We might face an error here, so I'm going to handle the error too.

Then I call worker.ConsumePartition, passing in the topic, 0, and sarama.OffsetOldest: the topic, the partition, and the offset, all of these values, so that I know where the messages are and can consume them properly. I might face an error here too, so I'm just going to copy and paste the same error handling again. Then I print out that the consumer has started, fmt.Println("Consumer started").

Then I'm going to write some code that we usually write to create a channel, because I want to listen for termination signals, so I say signal.Notify; this is for a graceful shutdown, as you know, with sigchan, syscall.SIGINT, and syscall.SIGTERM, listening for termination signals. I'll start with the message count equal to zero; these are the messages I want to consume from the queue, or from the stream, which is why I start with zero. I create a done channel, where I'm going to announce that I'm done consuming from the stream. And I start a goroutine, which will help me do some tasks simultaneously: I run a for loop with a select, and case by case, if there's an error, we print out those errors, and if there is a message on consumer.Messages(), we consume that message. The way to consume the message: to begin with, we increment our message counter, the one I created here that was zero in the beginning; for every message we consume, we keep incrementing the count so that we know how many messages we've consumed. Then I print the received messages, like message one, message two, message three; you want to print to the terminal the number of messages I've received, the topic it was on, and the message itself, ending with \n, replacing those placeholders with the actual values: the message count, then msg.Topic, and finally the message's value, msg.Value.
All right. And if there's an interruption, the termination signal that I was expecting, then I do a graceful shutdown: I say "Interruption detected", and finally I announce on the done channel that, hey, I'm done consuming the messages on the stream. And out here something seems to be wrong, because I don't think I've closed the right number of brackets; I don't think this one should be there, actually. Something seems off: I have the bracket for the for, the bracket for the select, yeah, and the bracket for the goroutine. Now this looks okay. I invoke the goroutine, and then the main function I just close with this bracket. Okay, so now everything looks all right. Then I just wait on the done channel, print out the message count and the messages, close the worker, and panic with the error if closing fails.

So that was our main function, and now I just need to create the connectConsumer function, so let's do that quickly. connectConsumer takes in the brokers URL and returns a sarama.Consumer or an error. First things first, I start with the config, just like I did in the ConnectProducer function: creation of a config, then setting some config values, config.Consumer.Return.Errors = true. Then I use Sarama to create a new consumer, which takes the brokers URL and the config; I get back the connection to the consumer, or I might get an error back, in which case I print the error. And then I return the connection and nil. All right, so this is your worker, and that was your producer.
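Here's the worker assembled into one sketch, under the same IBM Sarama assumption; the shutdown wiring follows what was just described:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"

	"github.com/IBM/sarama"
)

// connectConsumer creates a plain Kafka consumer.
func connectConsumer(brokersUrl []string) (sarama.Consumer, error) {
	config := sarama.NewConfig()
	config.Consumer.Return.Errors = true // surface errors on a channel

	conn, err := sarama.NewConsumer(brokersUrl, config)
	if err != nil {
		return nil, err
	}
	return conn, nil
}

func main() {
	topic := "comments"

	worker, err := connectConsumer([]string{"localhost:29092"})
	if err != nil {
		panic(err)
	}

	// Consume partition 0 of the topic, starting from the oldest offset.
	consumer, err := worker.ConsumePartition(topic, 0, sarama.OffsetOldest)
	if err != nil {
		panic(err)
	}
	fmt.Println("Consumer started")

	// Listen for SIGINT/SIGTERM so we can shut down gracefully.
	sigchan := make(chan os.Signal, 1)
	signal.Notify(sigchan, syscall.SIGINT, syscall.SIGTERM)

	msgCount := 0
	doneCh := make(chan struct{})
	go func() {
		for {
			select {
			case err := <-consumer.Errors():
				fmt.Println(err)
			case msg := <-consumer.Messages():
				msgCount++
				fmt.Printf("Received message Count: %d | Topic (%s) | Message (%s)\n",
					msgCount, msg.Topic, string(msg.Value))
			case <-sigchan:
				fmt.Println("Interruption detected")
				doneCh <- struct{}{}
			}
		}
	}()

	<-doneCh
	fmt.Println("Processed", msgCount, "messages")

	if err := worker.Close(); err != nil {
		panic(err)
	}
}
```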
The spelling of producer is wrong out here, and I'm sure there are countless mistakes in the code. Oh, it's by Shopify? So Sarama is by Shopify, but I remember seeing IBM somewhere; I think IBM also has something to do with Sarama, not sure. Anyway, in the producer I'm going to import Fiber; somehow I didn't end up importing it, and I need it. I'm going to go ahead and run go mod tidy, that's the first thing. Okay, see, something is wrong: it says IBM Sarama or Shopify Sarama, what is it? Did it use to be Shopify just a few days back, when I created this project, and now suddenly it's IBM? Not sure, let me check it out. Yeah, it is with IBM now, that's for sure; not sure if you can see my screen, but it says IBM Sarama. That's weird, because just a while back it was Shopify. Anyway, what we'll have to do now is change the import path like it's telling us to, so github.com/IBM/sarama is what I'll change it to, with IBM here, and here also. All right, I go ahead and run go mod tidy again, and now it's getting all the dependencies for me.

I can still see some errors are there, but that's what we're here for; we have to solve all these errors. So cd producer and go run producer.go, and let's see what the issues are. Okay, it says there are some issues on lines 74 and 75, no problem, we can check that out. Line 74 of the producer is the comment; okay, this is the createComment function, line 74, the success message and the comment. Oh yeah, there should have been a comma out here. Should I try running it again? Yeah, I think this is running now, shouldn't be a problem anymore.

Now I'm going to fix the worker: go run worker.go, and let's see if there are any Golang issues. There are, on lines 43 and 46, so I'm going to go ahead and look at lines 43 and 46. What's happening on line 43? Yeah, this shouldn't be like this. And what's on line 46? Line 46 seems to be all right; maybe there was an issue here, and that's what's causing the issue on 46, or not; I'm just going to go ahead and run it anyway. Okay, on lines 26 and 35 there are also some issues. Line 26, what's the issue here? Oh yeah, the signal channel: it's signal.Notify with syscall.SIGINT and syscall.SIGTERM; 26 feels okay now. And 35? It said 35, right: consumer.Error, what's the issue there? Oh, it should be Errors, plural. Let's run it again. Okay, yeah, that's perfectly fine, because we don't have Kafka running there, so not a problem. So at least the worker told me this, but I didn't get anything back from the producer, and that has me concerned; producer.go doesn't say anything, which I think should be okay, but let's go ahead and test it out inside my VirtualBox and see what happens.

So here, as you can see, I'm in my VirtualBox now. I've opened up the terminal and run the sudo docker-compose up -d command. The thing is, I've already put the code for this project on GitHub, and docker-compose up -d is what you have to run in the root directory; this will start your Apache Kafka and ZooKeeper. Okay, so I've done that, and I just have to run producer.go and worker.go now; this is the Golang Kafka example on my GitHub account. So now what we'll do is go ahead and run them: here I've opened up the producer, so I'm going to say go run producer.go, and here I'm going to say go run worker.go.
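For reference, this is roughly what the curl calls coming up look like; the exact commands are in the project's GitHub repo, so treat this as a sketch:

```
curl --location --request POST 'localhost:3000/api/v1/comments' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "text": "message one" }'
```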
And here I have my terminal open, where I'm going to go ahead and put in the curl commands that will call the API. I just realized I can't copy between my WSL and my VirtualBox, so I'm going to have to open up the browser here and get access to the APIs. I have opened up the project in my browser, and now I can just copy the curl commands; I was too bored, or lazy, to type everything on my own again, and I've already put it all on GitHub, so why not.

So here I get the reply back saying "Comment pushed successfully", your producer is running, and your consumer just received one message: the count is one, the topic is "comments", and the message you received is "message one", which is what you sent. Now let's send message two and see if it still works. So I copy message two, and here I'm just going to press enter a couple of times, and here I'm going to call my /api/v1/comments endpoint: api/v1 is my group, comments is my API, port 3000 is where the project is running, POST is the method, and I'm going to send --data-raw with the text field, because, if you remember, that's what the Comment struct is looking for. When I press enter, I get "Comment pushed successfully", success true, for "text message two", and on the consumer I get: received message count 2, topic "comments", and the message "text message two", received successfully.

So I hope you enjoyed building the project along with me. Like I said, this is on GitHub; make sure you check it out, check out the code if you're not able to make it work. And obviously, join the Discord community for this YouTube channel, because that's where we discuss issues and errors, and even career advice and how to find jobs; we talk about all that kind of stuff, so make sure you join it. The link for that will also be in my profile on YouTube, and the link for the course I talked about earlier, the Six AI Golang Projects course, is also there on my profile and in the description of this video. The other thing is, I have playlists on my channel for Rust, for AI, for technical architecture, and for system design; make sure you check those out too, and don't forget to subscribe. Thank you so much for watching, and I'll see you in the next video.
Info
Channel: Akhil Sharma
Views: 8,454
Id: j6bqJKxb2w0
Length: 47min 51sec (2871 seconds)
Published: Thu Mar 07 2024