Introducing Docker’s Generative AI and Machine Learning Stack (DockerCon 2023)

Captions
Everyone, please welcome Docker CTO Justin Cormack.

Hello, and welcome to day two of DockerCon! Are we having fun? Are we learning new things? Yes! We had a great day yesterday. We talked a lot about the growth in our community and the new things that are coming: better performance, faster builds, Docker Scout, Docker Debug, OpenPubkey, all sorts of exciting new things. It was really great fun, we had a great day together learning, and it was great to meet you all. I'm really looking forward to day two.

Today we're going to talk about the future of application development. A decade ago, when Docker started, we burst onto the scene and changed the world of application development forever, and it was a pretty exciting time. We're all in this industry because we love the constant rate of change, the constant excitement, the constant new things. Last year, large language models brought AI to everyone's attention. AI and ML have been around for a long time, but this was a step change: a sudden explosion of creativity, of new ideas and exploration, and everyone wanted to get involved. There were two things we learned at Docker from that. The first is that the whole build, share, run model and the whole Docker tooling are really important in this new AI/ML world: people still want to be able to ship reliably to production, and the development experience is still really important, so there's a lot of continuity with the past decade. The second thing you told us is that you really want to get started and learn this new stuff, because it's a whole new tech stack, a whole new set of things, and so many people are excited and want to explore and learn. So we got together with a bunch of our community, the people we love to work with, to bring you a great getting-started experience.

Now, you don't want me talking, you want demos, don't you? So let's build with Docker. Let's welcome Harrison from LangChain, Michael from Neo4j, and Jeffrey from Ollama, and let's have a demo. [Applause]

Hello DockerCon! I am so excited this morning to build a new generative AI application, and the easiest way to get up and running with this is the Docker GenAI stack.

Excellent. So these are the partners we've brought together, and you have to imagine that this is the development team at a company called Recursive Inc. They build next-gen software; they're a pretty rockstar dev team, and I'm pretty excited to have them on board at Recursive. Now, Recursive does next-gen software, so they get a lot of customer support questions; they have a big customer support portal, because people have a lot of questions about next-gen software. Building a customer support application for Recursive is what this team has decided to do. They've decided to use LLMs to build a new stack to help answer these support questions, and Recursive has a great big database of internal questions, from many years, about next-gen software. If you look closely at these questions you might recognize them from Stack Overflow, but hey, we all copy things off Stack Overflow sometimes, don't we?

So the team is going to get started. They're going to use Docker Compose, because that's a great way to get started, and we've got a stack which has an Ollama model service (that's the language model, from Ollama), LangChain to hold it all together, build the application, and provide observability, and then Neo4j providing the database for the extra data we're going to bring in. So, let's go.
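(For reference: the Ollama service in this stack serves models over a simple REST API, by default on port 11434. Here is a minimal, hedged sketch of calling it directly from Python; the endpoint and payload shape follow Ollama's documented API, while the localhost URL and the prompt are illustrative assumptions, not something shown on stage.)

```python
# A minimal sketch of calling the stack's Ollama service directly.
# Endpoint and payload follow Ollama's documented REST API; the
# localhost URL assumes the compose stack publishes port 11434.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?"},
    stream=True,  # Ollama streams the answer as JSON lines
)
for line in resp.iter_lines():
    if line:
        print(json.loads(line).get("response", ""), end="", flush=True)
```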
It all starts with a familiar command you know and love: docker compose up. I'll go ahead and run that, and you can see the different components of the GenAI stack being brought up. The first component is the AI model, and for this we're using a new tool named Ollama to download and run the popular Llama 2 model by Meta. The second component is the vector and graph database, Neo4j. And lastly, we have a Python application powered by LangChain, the orchestration layer for GenAI applications. It looks like these different components are now up and running, so I'm going to open this up in my browser. I'm greeted with a screen where I can prompt, ask questions, and chat with this model. So I'll ask it a question: how do I summarize PDFs using LangChain? It's a bit of a trick question. It seems like the model is giving me an answer, and it's all running locally, here on my Mac. Unfortunately, though, the model doesn't really seem to know what LangChain is or how to use it. So how are we going to solve this problem? I'm in luck, because my teammate Michael here can show us how we can augment this model with the knowledge it needs to answer the question correctly.

Thank you, Jeff. So, as Justin already said, we have this large knowledge base of existing customer questions and answers that we want to integrate into our application. The second app we're running here is the importer app. It allows you to pick any tag that you're interested in from the knowledge base (we're using this public knowledge base for the demo), and it will import that data into the knowledge graph, into the graph database, and vectorize it for vector search; then it's ready to be used by the GenAI application. If you import this data into the knowledge base, it looks like this: the purple nodes are questions, the orange ones are tags that we imported, and these are all the LangChain questions from Stack Overflow. So we should now be able to switch the mode of this model to include the information from the database, send it together with the user question to the LLM, and have the LLM generate a useful answer for us.

Let's see how well this works. I'll use the same question that Jeff just used (I'll copy and paste it so I don't have to type), put it in here, and switch to RAG enabled. RAG stands for retrieval-augmented generation. That means whatever the LLM does is not based on the data the LLM was trained on; it only uses the language skills of the LLM, completely ignores the training data, and is given all the context information it needs to generate the answer as part of the prompt. And that's what we do here: we take the question from the user, turn it into a vector, query the database, pull all the relevant context from the knowledge graph, and send all that text to the LLM, which then uses it to answer the question. As you can see, it now suddenly knows about LangChain; it knows about the PyPDF loader, embeddings, and vector databases.
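(A hedged sketch of the RAG wiring Michael describes, in LangChain Python. The class names are real LangChain APIs from around this time; the service hostnames, credentials, and index name are placeholders standing in for the stack's actual configuration, not the demo's exact code.)

```python
# A hedged sketch of the RAG mode described above, not the stack's
# exact code. Service hostnames, credentials, and index name are
# placeholders; the LangChain classes are real.
from langchain.chat_models import ChatOllama
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Neo4jVector
from langchain.chains import RetrievalQAWithSourcesChain

llm = ChatOllama(base_url="http://llm:11434", model="llama2")

# Vector index over the imported Stack Overflow questions in Neo4j.
store = Neo4jVector.from_existing_index(
    HuggingFaceEmbeddings(),
    url="bolt://database:7687",
    username="neo4j",
    password="password",
    index_name="stackoverflow",
)

# Embed the question, retrieve relevant context from the graph,
# stuff it into the prompt, and return the answer with its sources.
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, chain_type="stuff", retriever=store.as_retriever()
)
print(chain({"question": "How do I summarize PDFs using LangChain?"}))
```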
What's especially cool: you've all seen ChatGPT make stuff up and then never be able to produce sources for the things it made up. Here, because our information comes from a database, we pass the source links along with the information, so I can just open one of these source links, and you see that it goes back to the source of the question. That makes the answers much more trustworthy, reproducible, and verifiable. But what's happening behind the scenes, under the hood? Harrison will show us how we can peek behind the curtain and look at the code to see how this operates.

So, as Michael mentioned, this is using a technique known as RAG, retrieval-augmented generation, where it pulls in extra data. This is an example of a chain that LangChain helps orchestrate, which is basically a template to create an application in a few lines of code. To show exactly what's going on, I'm going to use LangSmith, our observability and tracing platform, to show the difference between the first chain, which didn't have RAG enabled, and the second one, which does.

If we click into this, this is the first chain, the one Jeff ran, and we can see that there are two components to it: a prompt template, and then a call to an LLM. Let's make this a little bigger so we can see exactly what's going on. If we click into the prompt, we can see our input here: this is the question that was asked, "How do I summarize PDFs using LangChain?". There's no chat history, but this is also a conversational bot, so it enables back-and-forth iteration if we want that. We can also see a bunch of metadata about the run, like the serialized version of the chat prompt template, and we can see the final thing the chat template produces, which is these two messages: a system message and a human message. These are ways to pass inputs into the language model, and we do exactly that in the next step, where we interact with ChatOllama, which is the Llama model. We can click on the metadata field and see a lot of information about the Ollama model: the name of the model we're using, all the parameters going into it, and where it's hosted, which is locally. If we go back to the run, we can see the exact input to the model, system and human, and then the output, which is the same thing that was generated in the UI.

Now let's look at how that compares with the other chain, the RetrievalQAWithSourcesChain. If we expand the full trace, we can see there are more steps. First, there's a retrieval step, so let's click into it. We have an input query, "How do I summarize PDFs using LangChain?", and the output is now a list of documents. This is using Neo4j behind the scenes to look up that list of documents. If we look at the metadata in the little side panel here, we can see that this retriever is tagged with Hugging Face embeddings, which are open-source embeddings, as well as the Neo4j vector store, so we can see exactly where the retrieved data is coming from. Now, if we click into the chain, we can expand it to see exactly what's going into the language model. We can see that we now have a system message that instructs the language model to use the following pieces of context to answer the question at the end, and then we have the documents from Stack Overflow (or "not Stack Overflow") which we're using to ground the language model's response. And here we get back the AI-assisted answer.
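(For reference: a common way to turn on the kind of LangSmith tracing Harrison is using is via environment variables in the application's environment. The variable names below are LangSmith's documented settings; the project name is just an illustrative placeholder.)

```python
# Enabling LangSmith tracing for a LangChain app. The variable names
# are LangSmith's documented settings; the project name is a placeholder.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "genai-stack"  # placeholder

# Any chains run after this point are traced, so each prompt, model
# call, and retrieval step shows up as a run in the LangSmith UI.
```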
So this is hopefully a good sneak peek at what exactly is going on behind the scenes. We'll be doing a much deeper dive on this at 11:45, right here in this room, so I'd encourage you to come and check that out if it's interesting. And with that, I'd like to thank Docker for having us, and encourage everyone to explore GenAI on Docker.

Thanks, that was so cool, and it was exciting just to be able to learn and do things like that. You can try this out yourself: click on the link in the Learning Center of the latest Docker Desktop, or go straight to the GitHub repo. You can put your own data in, change the models, play around with it, rebuild it, rewrite it, do anything you like with it. Really, really cool. And there's the deep-dive talk at 11:45 if you want to learn more, and all of the team are at their booths and can help you as well.

If that inspired you to build with AI/ML, join our hackathon. We're running a hackathon for the next month; there's a hack space down here, there's online support, and there are people ready to help you with all your problems. It's going to be really exciting; let's build some cool things. If this inspired you, you can build something like this, or there are many, many other things you can build with Docker, AI/ML, and this exciting world of GenAI.

Now, we love to work with our customers and understand how Docker is helping them build things more effectively and faster. It's always so cool spending time with them; we spend a lot of time talking to customers and working with them. I'm sure you've heard of this customer, and I'm sure you've assembled their products, but let's learn about how they assemble their own applications with Docker and AI. So, welcome, Karan. Great to see you!

Hello everybody, how are we doing today at DockerCon? Slightly more people than yesterday, maybe. All right, glad to be here, to see you in person, and to speak about our ML platform. But before we do that: who are we? We are Ingka Group, the retail arm of IKEA, which means we are the ones managing and running the different stores and pickup points that you've been to, and the planning studios. We are essentially a 42-billion-revenue company, headquartered in the Netherlands; we are over 177,000 co-workers across these different stores and formats, and we get about 3.8 billion hits to IKEA.com. We operate in about 31 different countries, and physically we are in about 482 locations: large-format stores, small-format stores, planning studios, pickup points, and so on. Apart from that, we also have a shopping-centre business across 14 countries, where we manage shopping malls which have IKEA stores in them.

We also believe in creating better homes, and our products and our range reflect that; you will see, for example, a product like ÅBÄCKEN, which reduces tap water usage by 95% when used as a nozzle. We're a values-driven company, and over 80% of our co-workers believe in and live our values. We have sustainability and circularity at our core: in the last years we've refurbished and sold about 32 million products, giving them a second lease on life. We're also quite diversity-focused; at our leadership levels we have a 50/50 gender balance. And we are big on corporate social responsibility: we try to help refugees by making them employable, and we react to situations like the Ukraine war or the Morocco earthquake.
Now, back to the impact of AI at IKEA. We are like any other retail company: we have the same sorts of systems and the same sorts of problems, and we have AI in our strategy. What that implies is that we have been investing to build a comprehensive data and analytics organization, embedded with product teams, teams that build digital products for the many people. The idea of these teams was that they're small, self-reliant, and develop with speed. But we've run into different issues, where it is becoming more and more of a challenge to get a model from idea to deployment. As it is, we have a 20% idea-to-deployment rate, but with these challenges we started to drop even further.

So what are these challenges? Essentially, the infrastructure footprint. With the growing number of teams and the growing number of AI use cases and ideas, we started to run into challenges with having the right compute and the right infrastructure available. When we were on our own servers, we used to have one model per server: big model, not enough compute; small model, too much compute. And did we say governance yet? That's a whole other ball game. Now, coming to development: before, when we started out, it was "my machine is my compute": I need Python and a Jupyter notebook and I'm good. But now, with GenAI and the things you've already heard about, the compute that's required becomes more and more complex. We need a GPU, we need multi-tenancy so that multiple people can work together on the same compute, and that is challenging to do as it is today. And then, of course, there has been a huge transformation in how models are deployed. First, they're isolated from the software layer, and they can be deployed canary, blue-green, and so on. That essentially means we have challenges with scalability, when the model suddenly gets a whole lot of data coming in; with portability, because the model needs to be present across different environments and on the edge; and with isolation, because we need to separate the model from the software layer and yet maintain the same efficiency, like under 5 milliseconds for live models that run on IKEA.com, for instance. All of this comes together into a complex situation with one big impact: ideas are not moving, and more importantly, we take more and more time to deploy models. And we haven't even spoken about the data problems that come with this; that's for another talk.

Now, moving ahead: we heard the challenge, so what was the solution? It was not all dark; that's where Docker entered the game and started to help us. There was a time when we were getting requests for development environments, and what we were essentially doing was allocating servers from the data center, at times with people sharing them; or people would use VMs in the cloud, which is pretty standard, and start to share those. That had to change, and it started to change when we began to containerize, using Docker to give a container as a data-science environment for people to experiment in.
Take the example of a recommendation algorithm: we now help the teams with these Docker containers in which they experiment with the different recommendation algorithms that might be required; call them AI training labs. After this vast experimentation, they end up with what are called trained models, the ones with high accuracy, performing to the business case. These then come to us in model registries, where we capture the metadata, the data versioning, the model versioning, and everything surrounding the model that needs to be deployed. We combine the model with Seldon Core, and then we deploy as requested: it can be a blue-green deployment, a canary deployment, or even a deployment on the edge.

So this was the first step of our transformation. You heard how Docker is helping containerize development environments, which lets us deliver better models at a faster pace, and how Docker, combined with Seldon Core, delivers a deployment platform that allows us to deploy at scale. When we combine that with something like Prometheus, we get monitoring, not just of the models and model performance, but of the platform's performance too, and that matters because we need the models available 99.6% of the time, for instance. And when we combine it with Seldon Alibi, we start to measure drift, outliers, and so on, which is essential for a future of automated retraining pipelines. All of this comes together with IKEA's engineering, and we will deep-dive into this whole platform in a talk at 10:45 today. But that is essentially how the whole platform is built.

That brings us to the most important thing: what did we gain from all these changes? I think the biggest gain is that we're now ten times faster from idea to production. We still have a 20% conversion rate from idea to production, but at least we are faster at it: we've gone from 100-plus days to less than 10 days to have a deployable model, and most importantly, we can do it at platform scale. The second thing, and for us as a platform team the biggest thing, is how your users actually look at your platform and what they use it for. We were at minus 10 for a long time, meaning our users were not really recommending our platform to other data scientists. That has changed over the last two years: we're now at almost plus 18, and we have more than 100 people actively using the platform today, deploying models, observing models, and so on. That's my time. Thank you so much; we're sticking around for the rest of the day, so happy to meet and connect with all of you. Have a nice day, and I give it back to Justin.
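(A minimal, hypothetical sketch of the kind of Prometheus model-latency instrumentation Karan describes, using the official prometheus_client library. This is not IKEA's actual code; the metric name, port, and predict() body are placeholders.)

```python
# A hypothetical sketch of the model-latency monitoring described,
# using the official prometheus_client library. Metric name, port,
# and the predict() body are placeholders, not IKEA's code.
from prometheus_client import Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "model_inference_seconds", "Latency of model inference requests"
)

@INFERENCE_LATENCY.time()  # record how long each prediction takes
def predict(features):
    ...  # call the deployed model here

start_http_server(9090)  # expose /metrics for Prometheus to scrape
```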
Thanks so much! It's so great to hear our customers having success with Docker; it's what makes it so satisfying to work at Docker, it's why we come in every day, and it makes us so happy to see you being successful. Behind these last two demos, the demos you saw earlier, and Karan's platform, we saw a lot of Docker trusted content being used to build secure, efficient applications. Britney is going to come on and talk more about the work we're doing with secure and trusted content for AI/ML applications, because this has been the real foundation of the success of so many of your applications and our customers'. So welcome, Britney! [Applause]

Thank you! Hi, I'm Britney. As Justin said, I'm a product manager at Docker, and I'm so excited I get to talk to you all today. For the last ten years, Docker has helped solve some critical developer pain points, such as reproducibility and portability, but today I'm going to talk about another pain point, trust, and how some of the same solutions we've delivered to the community for years can be applied to machine learning. Development can be a complex and time-consuming process; you should be able to trust the tools you're using to automate tasks and streamline your development. Docker trusted content delivers dependable content from known sources to help developers build on secure foundations. While using unknown public images can pose security threats and jeopardize your machine learning models, with Docker trusted content you can code confidently, knowing you've started your application development off right.

Let's dive a bit deeper into the categories of trusted content available on Docker Hub. How many of you have built an application using Docker trusted content? Justin, I know you have. I can't see how many virtual hands are raised, but I'm betting it's a lot of you. Did you know the Nginx Docker Official Image has been pulled over one billion times to build countless applications? Docker Hub and trusted content have been a staple in the developer community for years, and now you can use these same trusted sources for AI and ML development.

First, we have Docker Official Images. These are open source and maintained by Docker in collaboration with project maintainers. Docker thoroughly reviews the Dockerfiles, ensuring they are secure and reliable, which means official images are kept up to date with upstream security patches and bug fixes. They also support multiple architectures, providing even more value and accessibility to developers. Because Docker Official Images are built using a consistent process, updated regularly, and tested against a variety of criteria, they're often more performant and efficient than custom-built images. Next, we have Docker Verified Publisher images. These come from trusted publishers who strive to meet industry standards and are subjected to security assessments; they are also kept up to date, and you can identify them by the blue check next to the image name. Last but definitely not least, we have Docker-Sponsored Open Source images, which come from open source projects that meet specific criteria, including confirming that they're a verified source, that they have OSI-compliant licensing, and that they're updated regularly. Docker is deeply committed to supporting the open source community, so the projects in this program receive a range of free benefits, and we're adding a new benefit to that list: starting in late 2023, all Docker-Sponsored Open Source program members will receive a free Team subscription to Docker Scout, which will play a critical role in building a more secure software supply chain.

To sum it up: through rigorous review and support for our publishers, Docker trusted content offers the most reliable experience for developers, and by extension the machine learning community. So let's talk about how this relates to you. Hundreds of AI and ML images are available on Docker Hub. Verified images from industry-leading AI and ML tools, including the GenAI projects we just heard from (Neo4j, LangChain, and Ollama), provide trusted content to ensure a strong starting point for machine learning devs. In fact, just last month, the AI and ML content on Docker Hub received 34 million pulls.
Here are just a few examples of how you can build out your AI/ML stack with trusted content. For languages and databases, you can choose between R, Python, and Neo4j, all official images. For models and frameworks, you can start off with Ollama, LangChain, TensorFlow, PyTorch, or Open Federated Learning, among many more. In MLOps, you can choose between Apache Spark, the Jupyter data-science notebook, Redis, AWS SageMaker, MLflow; the list goes on, and there are countless others. You can build your entire ML stack using trusted content, package it in a container, port it to other machines or external clusters, and share it with your teammates; they can easily reproduce your results with the click of a button. Recently a developer shared his experience with me. He said: "I was working on a project a while ago where the maintainer provided a container with everything set up for development, and it was literally one command to start iterating on the project. I cannot express how happy that made me. It was magical." Imagine you could have that with every data science project. That is an experience everyone at Docker believes the AI and ML community should have. With access to a trusted stack, along with the reproducibility and portability that Docker tooling provides, it's easy to see why Docker and machine learning are a winning combination. If you want to learn more about how developers are utilizing trusted content to power their machine learning development, check out the blogs on the screen behind me. And thank you for your time!

Thanks so much, Britney. We've talked about build and share; now let's talk a little bit about run. AI/ML runs on a lot of the platforms we're already familiar with, but it's also opening up opportunities for new places. Data gravity is a real thing: moving petabytes of data around to where your compute is doesn't necessarily work, and sometimes you really want to move the Docker container to where the data is instead. So we're seeing new platforms enabling this as well. One platform, for example, in preview at the moment, is Snowpark from Snowflake; we're working with them on bringing Docker and its developer experience to it, and we see more of these platforms turning up, combined data, container, and AI/ML platforms. We're really excited to work with these new platforms and bring the great developer experience of Docker to them; we're going to see a lot of change on the run side of things as well, and we'll be supporting that.

Already, people are running their Docker ML applications in pretty weird and surprising places. So let's talk to someone whose applications are likely to get very wet and windy. It's a good thing Docker containers are actually quite waterproof; I mean, it's whales, after all. So, welcome! [Applause]

Thank you, Justin, and thank all of you for being here today at DockerCon. So, here we go. A couple of things before I dive deep in: I am with Western EcoSystems Technology (WEST); we are an environmental consulting company, and we focus mostly on the wind-energy space and compliance for wind energy. Also, a very important note from the lawyers: nothing I'm saying here today is in any way endorsed by, or an opinion of, the US Department of Energy. We are, of course, very grateful for their funding on this project. Also, this was a team effort, and these are the rest of the people on the team, so let's give it up for them. Thank you.
We've been hearing a lot about clouds at DockerCon lately, and when I say "cloud", a lot of us think of this kind of cloud: it has some JavaScript, it has some CPUs, maybe some databases; it may be a very specific shade of light blue, possibly even azure; and, most importantly, it has internet connectivity. But at WEST, and in the environmental consulting space in general, this is our cloud, and it has real clouds, and some very nice sunsets that are much better than all of your sunsets. To take our machine learning models out into this real world and make them interact with data, there are some challenges, and we did a pilot study to evaluate how to overcome those challenges. It turned out Docker was pretty useful to us.

The basic idea of our pilot study, in ten seconds: we wanted to use computer vision to analyze video of the moment that a moving turbine blade interacts with a flying animal, and we wanted to do this in an offshore environment. Now, I don't know how many of you have checked the cell-service maps for the middle of the Atlantic Ocean, but, spoiler alert, there is very little cell service in the middle of the Atlantic Ocean. That brings up some new challenges. We collect all this data, and it can be hundreds of gigabytes per day, and without strong connectivity to a nice computing cloud, that data has nowhere to go, and it piles up. Which means we need to analyze it at the same place and at the same time we collect it, which is the essence of edge ML. That incidentally also means we have a pretty high computation load.

Now, edge ML, I don't know how many of you have done it, but it's kind of tricky; there are a lot of pieces that have to work together. Edge ML where you cannot connect to any of your edge devices is even more tricky, I would say about an order of magnitude more tricky. It's sort of paradoxical, but it turns out that the best solution for these highly connected computing clouds, i.e. Docker, is also a really good solution when you're doing disconnected edge ML. The reason is that the central problem in both cases is the same: consistency between different software environments. Without containerization or something similar, we can sit around and build some beautiful, functional applications on our dev machines and send them off to the prod machine, and if the two environments are different, they may not be beautiful, functional applications anymore. I mean, all my applications are always beautiful at all times, but they may not be functional, and that's a problem, especially when you have no connectivity: you might send a system out for three months, and you do not want to come back in three months and find out that it never even started up, because you were missing the version of the library you didn't even think about. You would be very sad if that happened.

So, circling back: we did a pilot study to validate how to deal with these challenges. We took a camera and installed it on a land-based wind turbine (land-based for access, just because it was a pilot). We had many video streams to process, all in real time, six to be exact; in the middle of our deployment we had to do a redeployment; and we had very little supervision. Because we put basically all of our pipeline, and all of our ML, in Docker containers, containerization and standardization smoothed the road for us in many, many instances, and I can truly say that it worked, and not just on my machine.
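(A hedged sketch of the kind of per-frame detection loop such a pilot implies: reading one of the turbine camera streams and running a YOLO-family model over each frame, persisting results locally since there is no cloud to send them to. The Ultralytics API calls are real; the weights file and stream URL are placeholders, and the project's actual model and pipeline were not shown.)

```python
# A hypothetical sketch of a containerized, offline detection loop
# like the one described: one of six camera streams, analyzed where
# it is collected. Weights and stream URL are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights, not the project's model
cap = cv2.VideoCapture("rtsp://turbine-cam-1/stream")  # placeholder URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)  # detect flying animals in frame
    annotated = results[0].plot()          # draw boxes, as in the video shown
    cv2.imwrite("latest_detection.jpg", annotated)  # persist locally, no cloud
```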
So, if you want to hear about more of those instances while you watch this beautiful video of a YOLO model drawing boxes around flying birds, I'd like to remind you that I'll be talking at length about this project at 2:30 p.m. today. Thank all of you, and thank Docker for having me.

Thanks so much; that was pretty cool. Again, we love to see what people are doing with Docker; it always surprises us, and you're doing so many different and varied things, it's really great. So you've seen these demos, you've seen what other people are doing, and you have ideas. How do you bring them to your organization? How do you go from prototype to shipping something? Let's talk about that with Debbie Madden. [Applause] [Music]

Morning! Hello. Can we sit? Yes, let's use the chairs. Good morning, everybody, and thank you so much for having me. So, we're going to zoom out: we talked about build, share, run, and now we're going to talk about the learning journey.

Yes, and I'm really looking forward to this. Do you want to introduce yourself first, though?

Absolutely, yes. My name is Debbie Madden. I am several things; I wear many hats. I am the founder and chair of Stride. What does Stride do? We make good teams great, and we do that in two ways: by helping you deliver custom software to your customers, and by upskilling your teams. I've been an entrepreneur for almost 30 years (don't tell anyone!). I am also the host of the podcast Scaling Tech, I'm the author of the book Hire Women, and I'm a Docker advisor. So that's some of what I do. In my day job, you can really think of me as a pattern finder: as the founder of a tech consultancy, it's my job, as you can imagine over 30 years, with lots of shiny objects coming my way, to decide which to hold on to, which to examine, and which to let go.

And how did you decide that GenAI was one of the keepers, one of the important ones?

So, I've been asking myself where Stride fits in the GenAI landscape, and I'm still asking myself that question, but I really haven't seen a unified movement since agile. I was writing software and running tech companies before agile, and when agile came along about 20 years ago, that was a shiny object I held on tight to; it was a really unified, bottom-up movement. But with GenAI, and I think a lot of us know this and see it on our teams, in our companies, and in our day-to-day, this is a movement that is not only unified but is both bottom-up, in a "how fast can we go, how many cool things can we do with Docker and our use cases" way, and also very top-down, in a weird kind of FOMO way: the executives and stakeholders want to go; we don't know why or where or how yet, but we know we want to be there. Those are some of the reasons why I'm excited about this journey.

But it's difficult to go on a journey when you don't know where it's going.

Yes, not only do we not know where it's going, but the rules of the game are changing so fast. Not only is the adoption curve rapid, as we know, but the goalposts are moving every single day. So one thing I want to make very clear: at Stride, we're taking a bet on generative AI, for sure, we've decided that, but it's not a switch. We don't wake up and say, "okay, we're doing this now." Justin and I were talking a couple of weeks ago: it's kind of like when you learn a software tool or a technology,
a process, and then you learn a new one. For me (I'm not a software engineer) it's like how I felt when I went snowboarding for the first time. I knew how to ski (hey, Scott!) and I had never been snowboarding before. I put on the snowboard: "I've got this, it's easy." It wasn't so easy. I fell, I got hurt. So: we've decided to experiment with GenAI, to go on this learning journey, but here's the thing. As you're experimenting with this in your jobs (because we get to come to DockerCon, but then we have to go back, write code, and hit our OKRs), it has to be a journey. We have to be okay with falling down the mountain.

Absolutely. So how do you think about structuring this journey for an organization, from the experimenting-and-falling-down stage onwards?

The crawl-walk-run approach, as we like to call it. And this is true whether you're just experimenting with generative AI and LLM tools or you're an expert. I talk to many developers and CTOs across industries, company sizes, and locations, and if you walk away from this talk with one or two things to take back to your job, then I will have done my job here today. When we're talking about safe experimentation, there really are three stages. The first is the crawl stage: "I'm going to do this in my spare time, I'm going to be a sponge, I'm going to come to events like DockerCon, and I'm just going to learn." The really important thing at this stage is to fail fast and experiment, to learn and play and discover the variety of what's going on; it's such a huge space. With that, you have to know the rules of the game you're playing, and in this particular case the rules of the game are changing every day. It's confusing: wait, are we playing chess? Are we playing Monopoly? Well, tomorrow we're playing a game no one has invented yet. So you have to know the rules, and you have to be okay with failing, with falling down, with experimenting.

That's the easy part. It may seem hard, but that's not the hard part. The hard part is committing; it's the expansion part, when you say, "I'm going beyond prompting ChatGPT; I might want a secure instance; I might want to train a tool on my company's private data." Then what happens? Then you have to get a commitment, you have to get time, you might have to set OKRs against this thing, and that's where I'm seeing not so much the developers but the stakeholders get stuck. Because remember, they're saying, "I don't know where I'm going, but I want it; get it to me." And once you show them what this tool, this technology, this process, generative AI, can bring (by the way, HR teams, legal teams, and finance teams are lining up asking, "can we use this to write our contracts, to onboard employees?"), all of a sudden the stakeholders get stuck. What I've seen is that the most powerful way to unstick them, which everyone in this room and online can help with, is this: we don't know all the answers, we don't know what tooling we'll use or what the powerful use case is, but how can we, as technologists, help our stakeholders find that compelling use case that, despite how scary it is, they must move forward on, that they're compelled to move forward on? And then the last piece, once you've done that: that's the hardest
part. The hardest part is convincing your stakeholder to take a chance on something meaningful that's going to take time. As a software engineer, as a developer, what is your informed opinion on what good looks like for the use case you've just convinced your CEO to let you do? What does good look like?

Right. And so, for me, that's the way to process the shiny objects I've held on to over 20, 30 years. The difference now is that you wake up in the morning and there are new shiny objects.

You wake up in the morning and your idea of what good is might change with the shininess, with not knowing how good you can make it or where it could go. So you have to be constantly iterating and rethinking those choices.

Absolutely. I was at EdTech Week in New York City on Monday, and someone from Google said: "If you're using out-of-the-box ChatGPT, you're not only behind, you're behind the students." Those kinds of comments are being spoken now, in October. So "what does good look like" and "what is your informed opinion": two opposing things are often true at the same time. Despite the rapid advances, it's our job, in this room and among the folks watching remotely, to have that focus. That's how we move forward, that's how we make progress, that's how we bring generative AI and LLMs to use cases that we aren't even thinking of yet, but we will, as a group. Really exciting stuff.

And the last thing I really want to say, since we have a couple of minutes: this is not the end, we all know this. Please don't go this alone. You're here, so that means you value learning, and I truly want to help you on this journey. I mean this; this is a true olive branch: if you have questions, I want to be there for you. Find me on LinkedIn and I will answer your questions, and I'll connect you with people who have the answers I don't (I probably don't have them all). Let's use each other, like we're doing today. And thank you, Justin, for bringing us all together.

It's amazing: I always learn things when I'm with you, and it's always great to share and learn with you; that's why I always love to be with you in person.

Absolutely, yes. Thank you so much, everyone; have a great day, and thank you so much for having me. Take care!

Thanks so much, Debbie. At Docker, we've been taking these lessons to heart ourselves. We've been going through this learning experience with AI and what it means for us, exploring the potential, and we've had to iterate, experiment, and think about what it means. After 10 years at Docker, we spend time every day with our community. Docker has grown from an early-adopter tool into a tool that millions and millions of people use every day, and we've learned a huge amount during that time about what you find difficult and what you find easy with Docker. What's difficult on day one, when you turn up at your company and they say, "oh, we're using this Docker thing," and you're like, "what's a Docker?" I remember a friend coming to me and saying, "my company started
using Docker today, and I remembered that you work there, and I don't know what it does." Every day there are thousands and thousands of people going through this experience. And when people are more experienced, they still have problems working with Docker: things they can't remember how to do, things they could learn to do better. So we wondered if we could share this breadth of knowledge we have about Docker with you, and we decided to do it by building a tool that helps you learn to use Docker more effectively and that can grow with you. So, welcome, Yuan, to show us what we've been building.

Thank you, Justin. As Justin said, my name is Yuan; I'm a machine learning engineer at Docker, on the team working on this new Docker AI assistant. To give a brief overview: the assistant is right now a VS Code extension. It opens an interactive notebook, and that serves as the interface to the assistant. It's a holistic integration between the developer's project environment, which serves as context for these new generative AI technologies, and a knowledge graph we're building from Docker's expertise.

I'll be doing two demos; this is the first one. Say I'm a developer working in this Node project using npm, and I've finished the application; I'm testing it locally and everything looks good. Now I have a requirement to dockerize this project, to put it in a container. Maybe I've done this before, I've written a Dockerfile, but I'm not 100% confident; I might have to go back and review previous projects or look up documentation. But I do know my company has this new Docker AI assistant; I've been onboarded to it, and I know it's available from the VS Code Command Palette as the "ask assistant" option. I click on it, a bunch of pre-populated questions come up, and I see "How do I dockerize my project?" That's exactly what I want to do, and it pulls up this notebook. In case you're not familiar with notebooks, this is based on the Jupyter notebooks that are very popular in the data science and ML space: it provides interactive cells, you can visualize data, and you can even document your process.

So let's go through this. The recommendation is docker init, and the notebook pulls it up in what is actually an interactive terminal. It sees that I'm in a Node project; the default version of Node is detected; it detects that I'm using npm; all the defaults are correct, so I just put my server in, and that's it. You can see that my project now contains all the necessary Docker files; I didn't have to write them myself or remember all of the syntax. And again, from the notebook I have the recommendation for my next step, which is docker compose up. You can see this also pulls up a terminal, so I can see all the logs from the build, and now my container has started and the assistant is loading a recommendation for me. Here you can see it has pulled up some information: it has the latest container, the ports it's running on, and the image, and I can confirm this right away; I don't have to look up the actual container name, I can just pull up the logs. It's listening on port 3000, so I've confirmed my application is working as expected in the Docker container.
And again, the assistant has recommended, as a reminder for when I'm done, docker compose down, to close everything and clean up my environment. So this notebook, and all of these commands, have saved me valuable time, because I didn't have to write these files by hand. I'm basically done dockerizing; I can push this now, or continue developing.

So that's the first workflow. This is the second one, where I've inherited a project that already has some Docker components, but maybe I'm brand new to the project and, again, not super familiar with Docker. I do have this assistant now, and since I'm starting on the project, I say, "okay, tell me what this project is about, summarize it for me," and let's see what recommendation I get. Okay, it's telling me this is a Node application, it's containerized using Docker, it gives me some information about the Dockerfile, and it tells me it's packaged with Docker for easy and consistent deployment. Then I get this nice little recommendation: if I want to get started with Docker I can just build it, and it gives me a tag name. And I see here there's an error. I don't know how often this happens to you: you're given a project, you're told it's working, and there's an error, and you wonder, do I bug the person who gave it to me and tell them maybe they made a mistake? Do I spend time Googling it and trying to figure it out? Well, now you don't have to do either of those things. The assistant has identified that the error is in my Dockerfile, and it pulls up the Dockerfile in the notebook. This is actually a view onto the real file: if I make changes in the cell, those are reflected in the Dockerfile, and vice versa, so I don't have to open a separate tab; it's all in my notebook. And here the Docker AI assistant has highlighted the problem: it says the version of the Node.js image you're trying to install is not supported anymore. I can go to the Quick Fix, and you can see it pulls up several options; we recommend the Docker AI option to update the package, and you can see there is no more error. I want to emphasize that this is not just pulling the latest image: we've integrated with Docker Scout, which everyone has been hearing about, so it's not only the latest but also the most secure image. You don't have to do all of that work to double-check your image; you have Docker Scout right in the notebook, right in your project. So I can go back to the cell, rebuild, and great, I don't have the error anymore, and I didn't have to spend all that time Slacking people or Googling.

Now I get the next recommendation: it's telling me my project is missing a .dockerignore file. Maybe this is something I hadn't thought about, or wasn't fully aware of why it's important. Again, I don't have to write it myself; I can just run this command, touch .dockerignore, and it says the file has been created, please remember to check and save the contents. You can see it's pre-populated with common files that might clog up my build context, and I can build again to get a smaller image. Of course, this is a demo project, so it doesn't have a lot of big files (my image is only 249 megabytes), but you can imagine that in a really big project this could save me a lot of space. And finally, docker run. As this runs, you can see it has started the container, and I get information about the flags: for example, -d for detached mode, and -p for publishing to the default ports.
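(For illustration, a hedged sketch of what that final command amounts to, written with the Docker SDK for Python (docker-py) rather than the CLI; the image tag is a placeholder standing in for the tag built earlier in the demo.)

```python
# A hedged sketch of the equivalent of `docker run -d -p 3000:3000`
# using the Docker SDK for Python. The image tag is a placeholder.
import docker

client = docker.from_env()
container = client.containers.run(
    "my-node-app:latest",      # placeholder for the tag built earlier
    detach=True,               # the -d flag: run in the background
    ports={"3000/tcp": 3000},  # the -p flag: publish the app port
)
print(container.logs().decode())  # check the app is listening on port 3000
```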
Especially if I'm new to Docker, this is all information I would otherwise have to Google or learn from a more senior dev, but I have all of this learning right in my notebook. And finally, similar to the last flow, it tells me: now you have your container, you can verify your application. You can see everything is working as expected, a nice little hello. So the Docker AI assistant has saved me, as the developer, valuable time: I could get started with a project, debug it, modify my Dockerfile, and run these commands with best practices, all in a single notebook. Our goal with Docker AI is to help you save time, mitigate some of the challenges of Docker, and give you back time to build your application. Thank you very much!

That was pretty great, wasn't it? If you're excited about Docker AI, sign up here for the Early Access program. We're going to start rolling out access, so sign up now; we'll be really excited to onboard people and get detailed feedback before it goes GA. It's been great fun working on it and seeing it come together, and as you can see, it's really nice and friendly to use, nicely embedded in your development environment.

Well, one more thing. Our laptops have been getting more and more powerful, and a lot of people are using the new Apple laptops, which are great. These machines often have GPUs and hardware acceleration, but it's actually very difficult for developers to use them for building AI/ML applications locally in containers, and we've been getting a lot of feedback from you that the whole world of GPUs and hardware acceleration is just really hard: can we help make it easier? So one of the things we've been thinking about is how to make it easier in the developer context, on your laptop: how can we use not just your CPUs but your GPUs as well for running Docker containers?

A while back, the web browser community started building a new technology for shipping GPU support in the browser, called WebGPU, which is now shipping GA in Chrome and will ship in Firefox soon. It's actually an interesting standard: a modern graphics standard along the design lines of things like Metal, designed not just for rendering but also for GPU compute. And it runs not only in the browser; you can run it in lots of different places. It's very early stage, but an ecosystem is starting to build around it for AI/ML apps as well. Among the early-adopter projects, we've been working with Burn, the Rust AI project, which is a great, cool project with some support already, and Cloudflare has announced that support is coming soon for Workers. So there's a community starting to build around this.

Now, this is really early-stage work; this is very much a sneak preview. We're not going to show a flashy demo, but we're all geeks, and we like little demos that show what's coming in the future. We're working in the open, sharing early and often with you, and there's lots more to do here. I'd love to welcome Pat on stage to show us where we are now with this.
Hi everyone, thanks Justin. I'm Pat; day to day I'm a developer working on the backend of Docker Desktop, but I want to talk about something that falls ever so slightly outside that box today. So, let's say I wanted to run an ML workload, or a similarly characterized workload, in a container on my laptop. I'll start my workload here; in this demo, that role is played by a very simple program doing matrix multiplication: basically 4 MB worth of twos multiplied by 4 MB worth of threes. It gives you the result here, and I time the computation. On the CPU graph there, you can see that my CPU is pretty much pinned, which may be good, in that I'm using the computer as efficiently as possible, but the time is not great: we're pushing almost a second to do that matrix multiply. Now, I could improve my implementation for the CPU, because it's a naive matrix-multiply implementation, but we all know the elephant in the room: we have devices that are designed to carry out exactly these computations. So we thought it would be really awesome if we could give you access to those from within Docker Desktop. Unfortunately, it's not such a simple problem to solve, and this is where WebGPU comes to the rescue.

Let's consider my second example. I rebuilt my program, and now you can see from the output itself that it's running way faster; the timings, you can see, are an order of magnitude better, and on the graph there you can see that we're pretty nicely saturating the compute.

Now let's switch slides, and I'll take you through what actually happened. You can see in the command line we have that --device specifier. This takes us to CDI, the Container Device Interface, which basically tells the Docker Engine to amend the running container with several pieces of environment: in our case, a WebGPU client library, the headers that give you the WebGPU API, and the comms endpoint for the container to talk to the WebGPU runtime. Could I have the next slide, please? This is a picture of what's going on in the backend. You'll notice that I rebuilt my example: when I started the container with WebGPU support, that linked my workload against the WebGPU client library (I could also load it dynamically), and that client library knows about a vsock integrated into Docker Desktop, behind which we have a WebGPU runtime server running on my laptop, basically giving me access to GPU compute on my laptop. So, thanks for your attention, and I hope you're as excited as I am about the possibilities this gives us in the future. Thanks!
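(To make the workload concrete: 4 MB of float32 values is a 1024x1024 matrix, so the demo's computation is roughly the following. This is a hedged re-creation in Python/NumPy for illustration only; the talk's actual program was a naive implementation rather than NumPy's BLAS-backed one, and its fast path went through the WebGPU client library, not anything shown here.)

```python
# A rough, hypothetical re-creation of the demo's CPU workload:
# 4 MB of twos times 4 MB of threes (1024 * 1024 float32 = 4 MB).
# The talk's program was a naive multiply, not NumPy's BLAS-backed
# one; this just illustrates the shape and the timing idea.
import time
import numpy as np

a = np.full((1024, 1024), 2.0, dtype=np.float32)
b = np.full((1024, 1024), 3.0, dtype=np.float32)

start = time.perf_counter()
c = a @ b  # every element is 2 * 3 * 1024 = 6144
print(f"c[0,0] = {c[0, 0]}, elapsed = {time.perf_counter() - start:.3f}s")
```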
Thanks, Pat, that was really great. As you can see, it's early work and it's coming soon; there's more work to do to bring real applications to it, but we're really excited about it, and we wanted to share it with you early because it shows a path to using GPUs on developer machines, and it's really about bringing the possibilities and openness of Docker to many more places. If you're interested in this, come to the Docker labs space here to learn more and talk to Pat. We're going to keep talking about this and doing more work on it, but it's coming soon, and there are lots more things you can do with Docker. Really cool.

The simplicity and power that Docker brought with build, share and run is still what we need for AI and ML applications; I think you can see that from what we've shown today. If anything, the additional complexity and pace of change of these stacks makes it even more vital. We've been showing you things that are early in development, and we're doing that deliberately, because that's how the AI/ML space is: things are happening really fast. We can't build things out slowly and ship them later; we've got to iterate fast, work with you, ship early and often, experiment with new things, and get feedback from you on what you think would be cool for us to build. We've spent a lot of time talking to people working in this space, the people you've seen on stage here from all sorts of different areas, who are just excited by what we can bring, and we're really committed to investing in this area: to bringing the magical Docker experience to whatever you're working on in the future, and to all the things that will happen in this explosion of innovation and new tech. It's a really exciting time, and we're here to be a partner for you, and to bring that developer experience, the ease of use, the learning, the build, share, run experience, to all sorts of new and exciting applications you're building.

Finally, I'm going to sit down and talk with another tool builder: Eric from GitKraken, who is also building developer tools, and we're going to talk about developer tools in the age of AI. So welcome, Eric. [Applause] I like your T-shirt.

Thanks. Hi everyone, I'm Eric Amodio, CTO of GitKraken and creator of GitLens.

Docker and GitKraken share a lot of DNA in what we do: we think about problems in the same way, and we have the same goal of maximizing the time developers spend on work that counts and minimizing the time wasted on the bits that don't. I think that's why we get on well.

Exactly. We love developers, we are developers, and we're trying to empower them to be as effective as possible, to really spend their time innovating and creating rather than on all the extra tasks we all have to do. That's why I'm so excited about what AI is going to bring to this.

So what's it going to change? What's going to be different about AI? You've already had a lot of experience in this area.

It's been really interesting seeing the journey. When I originally thought about what AI would bring to the developer experience, we didn't think it would write code for us, so that has definitely been an interesting change. I think it really changes the complexity we can deal with and the context it can provide, and it can really accelerate us if we use it effectively as a tool. And it's still really early days. Back when I was at Microsoft working on the VS Code team, GitHub was just incubating the Copilot project, so I was working really closely with the Copilot team on how to get it into VS Code.
The first versions of Copilot were comically bad, so unusable that we questioned whether it could actually become something really powerful. But within a few short weeks to a month it started changing radically, it kept getting better, and we've been on that same extreme exponential growth ever since.

Exponential growth, and the uncertainty that comes with it, can be scary. People wonder what's going to happen to their jobs and how they work. How do you see that playing out?

I think we are moving to a future where machines, the AIs, are going to write a lot more of the code, but I don't see that as an existential crisis for developers, because our job is not really to write code. When it comes down to it, our job is to synthesize problems, create solutions, and innovate to solve whatever we're trying to tackle: business problems, customer problems.

Make the world a better place.

Yes. So focus on that; we're really well poised for it. One other thing: LLMs right now, and obviously this will change over time, are really not great at handling the gray area, at understanding the problem space, the nuance around it, the moving requirements. But they're really good at individual tasks, so leveraging them as tools to accelerate what we do will bring much more power to what we can accomplish.

So we can accelerate the iteration process, the trial-and-error process, the prototyping process, all those parts of the journey as well, and we actually end up building better tools in the end, because we have more ability to try different things out fast rather than going down one particular route.

Exactly. Instead of working out how to get to the next little step, we can plow right through those steps and get to the ultimate solution, the vision. And it's really interesting how fast that's evolving.

How do you see that playing out? What's next on the exponential curve?

Predicting the future is always hard, especially with exponential curves; we're not good at seeing those. But I think we need to start transitioning. A lot of AI tools today focus on the individual, on increasing personal productivity, and over time development has become much more of a team sport: code bases are more complex, and the problems we're trying to solve are more complex than ever. What we've mostly seen is help managing that complexity at the individual level, but how do we get teams to function better together?
A higher-functioning team is not just about empowering individuals to work more effectively; it's about working more cohesively together, strengthening the connections between people, being more proactive, sharing context. I think there's going to be a lot of innovation there, and it's something I want to see a lot more of from tool builders. At GitKraken we're spending a lot of time thinking about that problem: how we can get teams to share context better, because communicating better is ultimately the biggest challenge to get over. And I think AI is going to play a big role in being able to handle and see a lot of that context, through Slack conversations, through PRs, through issues, through all these different channels, and to surface it: hey, did you know this was happening? We noticed you didn't have that context. Being much more active and proactive about it.

Absolutely. We've been thinking about much the same things around team collaboration. There has always been an important team role in Docker; the "share" in build, share, run was one of the early team-focused experiences. But the role of the team, and as you say the context within the team, is really important. That's part of where Docker Scout came from for us: thinking about how we bring a data model to the team, to store the kind of data we want to give the team, a richer data plane for getting context in and out. So team collaboration is really key to the future of development. And AI means more members of your team; it's not just a personal thing. You can think of it as new team members with new capabilities, and maybe some things they can't do as well. As we saw from the demos earlier today, bringing the right context to the AI lets it work far more effectively. That's a lot of what we've learned about models: the context is sometimes the limiting factor in size, but it's where you bring the value into AI solutions.

Exactly, and humans are not really different: if you give humans really good context, we can do really incredible things. Context is still the challenge, though: getting the LLMs enough context to really understand some of that space. You've got to get them in the flow, and think of them as part of the team, leveraging them to enhance the context everyone shares. If we're on a team and I can share what I'm thinking about, which PR I'm not going to get to, review times, all of that, and if we can optimize how the LLMs quantify things, set context, and share all of that better, we can move far more effectively than we have in the past.
There's been an industry movement to shift left, to bring things to the developer, to the inner loop. How do you feel that fits with the team collaboration aspect, and how do you see the future of shift left?

The earlier we do things, the earlier we collaborate. Even just on the individual side, having that copilot with you as you work, and it's such a perfect name, really accelerates things. If we can share context and get things shared far earlier, and we're working on tools to make it easier for developers to share work, collaborate on it, and get feedback, even having LLMs in the process to give feedback and tighten up that loop, the quicker we'll be able to move.

We've been talking a bunch about opinionated versus unopinionated tools, and we were talking the other day about the pull request, which was one of the big innovations of GitHub. Somehow people think it's unopinionated, but it actually creates a very opinionated workflow, and we're seeing more pushback and experimentation in workflows, in Git and elsewhere. I think you've got strong opinions about opinionation.

Definitely. Being opinionated, and making the transition from passive to active, is something our tools need to do a lot more. There are great best practices out there, and we can evolve beyond just providing a set of tools that every team has to figure out how to piece together effectively. The PR really doesn't feel opinionated, but it's extremely opinionated about how code gets into your project, how it's accepted and reviewed, and it changed the workflows of how teams work together. There are pluses and minuses to that, and there have been challenges, but one of the benefits LLMs will bring is being able to see a lot more, and to manage, adjust, and tune those opinionated workflows, the suggestions, actions and nudges provided through the tools, adapting in real time to the context of your team while also providing tuning parameters: my team values shipping very quickly; my team values code quality, so we must have no bugs. Being able to slide those sliders to what is effective for your team, and then adapt appropriately, means it's not a one-size-fits-all opinion but adaptive in flow, while still leading teams where you want them to go, to work more effectively together.

Common approaches across the industry, rather than a special design for every company, have a huge ecosystem advantage for tooling, for workflows, for onboarding developers easily. But if you can have these workflows be flexible rather than restrictive or annoying, while still making sure there is a flow, a workflow there, that's really important, I think.
Exactly: being able to nudge, to be more active in that flow. Suggesting changes; a review has been sitting, so the tool nudges you and says, hey, this has been waiting, your teammates are blocked on you; or your teammate is out, or snoozed something and said they can't get to it, so you should assign a new reviewer; or quantifying that this PR will probably take only a few minutes to review while that one may take a long time; maybe even suggesting up front that the person submitting the PR break it down to make it easier on their teammates. There's a lot in that to help the flow. Better teammates.

Absolutely. We've been thinking about that a lot, again with Docker Scout: how to set overall targets for where you want the organization to be, and then deliver that effectively through the flows developers already have, without alerting them while they're doing something else; those kinds of workflows are just terrible.

It's got to be very context-aware.

It's got to be context-aware, and it has to work out which tasks are easy and quick and which are difficult. Again, with pull request review: someone gives you five things to review, and three of them take three minutes, but one takes half a day. You need to know that in advance so you can plan, so you don't pick the wrong one while the rest are all gated behind it.

Exactly. So, to wrap up: as we've seen here today, we really need to embrace LLMs and AI going forward. They're very effective tools, and we're very effective tool users, so take advantage of them, don't be afraid of them; really lean into how we can leverage these tools more effectively. One of the challenges is shifting from thinking your job is to write code to really solving problems, and the more we move into the solving-problems part, the more secure your future and everything we do will be.

Cool. Thank you so much.

Thanks for having me, and we're super excited to have Justin joining us at GitKon in a few weeks. Thanks so much.

Thank you. Well, enjoy day two of DockerCon: so much to learn, so much to discover, so much to play with, so many new toys and shiny things. Really looking forward to seeing you all around. Have a really, really great day! [Applause] [Music]
Info
Channel: Docker
Views: 7,881
Keywords: Docker, devops, app development, software engineering, containers, information technology, Machine learning, AI, MLOps, DevOps, learn docker, what is docker, docker for beginners, docker images, docker container, what is docker container, docker hub, programming, cloud computing, software development life cycle, sdlc, docker tutorial, docker desktop, docker compose, Generative AI, deep learning, decision tree machine learning, justin cormack, justin cormack docker
Id: yPuhGtJT55o
Length: 82min 20sec (4940 seconds)
Published: Wed Nov 01 2023