Hi guys! I'm sure you have all heard of ChatGPT
by now. It has become a buzzword within days of its release, and professionals in all fields,
especially in highly skilled areas like law, medicine, and engineering, are questioning whether
such AI can actually replace them at work. So in this video I want to talk about what
ChatGPT is and how it even popped up, talk a bit about the organization behind GPT
called "OpenAI", which has already created many other machine learning models besides ChatGPT,
and also explain the technology behind all that. And then we'll dive in and actually put ChatGPT
GPT to use for some DevOps related tasks. I really want to see how it can help in generating
configuration code for building DevOps processes or different parts of those processes and how
well it knows different DevOps technologies, but not just some shallow examples or boilerplate
code that I can get from official documentation, but instead also try more fine-tuning and
small optimizations in that configuration code. So we will see all of that. We're also
going to check out an open source command-line tool that is built on top of ChatGPT and was
specifically created for engineers to generate infrastructure as code templates and more and
finally we'll talk about the impact of ChatGPT, the quality and usefulness of such a tool for
engineers and whether it will really replace the engineers and to what extent you should
be concerned. So let's talk about all of that! First of all, what is ChatGPT and why is it
useful? You can think of ChatGPT as something that has all the knowledge from various different
fields including engineering and has a chat interface, where you can ask it to give you some
information based on that knowledge in an easily digestible form. And it will analyze and process
all that knowledge it has, to give you answers in a very human-like manner, as if an actual human
professional was writing back. Now how did this technology even come about? Who created this? ChatGPT was created by a research organization called "OpenAI", and according to the company itself and its mission statement, it is dedicated to developing and using artificial intelligence in a way that benefits the general public and basically democratizes access to artificial intelligence as well. So the story is that its founders, among whom were Elon Musk and Sam Altman, founded OpenAI because they feared that people and organizations would misuse artificial intelligence or be careless about the development and advancement of AI, and they feared, or probably still do, that this would cause chaos and disaster for the world; through OpenAI they wanted to develop and use AI for the benefit of the general public. So that was the idea behind the OpenAI organization, which is a non-profit research organization. And OpenAI actually has made really
impressive developments in the AI field. It is one of the leading organizations in the AI technology
area and it already has many very ambitious projects, including the development of AI for use
in natural language processing, computer vision, robotics, gaming, and so on. And there are some
popular high-profile projects that OpenAI has developed over the past years; one of them is
"DALL-E" for example, which is a neural network, which basically is one specific type of machine
learning model that mimics a human brain, so that's what a neural network is. And DALL-E
also became pretty famous, because it is really impressive and it can create super realistic and
very high quality images and art from a simple text description. Another of the projects is a language generation model that can generate human-like text, called the Generative Pre-trained Transformer, or "GPT" for short, which is exactly the model that the famous ChatGPT is based on. So that's one of several projects that OpenAI has created, and they actually made several improvements to GPT after its initial development: GPT-2, an extension of the GPT model that was trained on an even larger dataset of web pages, has the ability to generate a wider range of texts, including news articles, whole stories, and poems. Then an even more powerful version was developed, GPT-3, which can even produce jokes and puns and has an even wider range of uses, including language translation, question answering, text summarization, and content creation. This powerful GPT-3 model was given a human-friendly user interface, which is a chat, and that's how we got ChatGPT, which OpenAI released and made available to the general public, and it has seen a tremendous explosion in the number of users within just days of its release. Now
you probably have already seen some videos on YouTube of software developers demoing some use cases of how ChatGPT produces really good code in any programming language or framework, or how it even fixes bugs when provided a code snippet with a certain bug. And now, along with the millions of users who have already tried ChatGPT, we want to put it to the test and see how it performs, in our case specifically for DevOps tasks. So the question is: can we get really useful output from it, scripts and configuration code for different DevOps tasks, and basically how good will the code or scripts be that ChatGPT is going to produce? That's exactly what we're gonna see in the next section, so with this let's dive right into
the demo part and try out ChatGPT for DevOps. So let's start by opening the ChatGPT site; as you see, it's on openai.com, so let's click on "Try ChatGPT", and this is basically the URL where it's available: chat.openai.com. If you don't have an account yet, for the first time you have to basically sign up, create your account, and confirm it from your email address. Once you've signed up, you can just go to login and you'll be forwarded to the ChatGPT dashboard, and that's basically how it looks. They may change the UI a little bit in the future, but it's a simple user interface, and that's also one of the reasons for its popularity: it's super simple and user friendly. So this is basically the starting prompt where you can start a conversation and ask your questions to ChatGPT, and that's where we're gonna start
from. I'm gonna do this from the perspective of, let's say, a junior DevOps engineer who has a vague idea of what they're doing but wants to use this tool to get the work done as efficiently as possible: basically use it for research and learning purposes as well as for actually getting some proper output that they can use at work to implement some DevOps tasks. Let's see if ChatGPT can help here. Here you have some additional information about ChatGPT, like examples of how to interact with it and how to structure your questions, as well as the capabilities. One of the capabilities that I think is most valuable is that ChatGPT actually stays in the context of the conversation, so once you ask questions, you can do follow-up questions; it remembers the previous context within that chat, which is super valuable because you don't have to explain everything from scratch with every question. They also point out some of the limitations. One limitation that you have to consider for sure is that the data that ChatGPT has been fed only goes up to 2021 at this point, which means everything that happened afterwards, ChatGPT doesn't know about. So as a first question, let's say I have a JavaScript application with the Node.js framework, I already have the code for the application, and I want to dockerize it; I want to create a Dockerfile for my Node.js application. So I'm going to ask ChatGPT to give me an example Dockerfile that I can use for my Node.js project, a simple instruction: write a Dockerfile for a Node.js application. Let's see what happens. So let's actually see what ChatGPT gave us here.
First of all, it gave us an example Dockerfile for a Node.js application. It based it on a node image with a specific version, which is actually very good practice: having a fixed version of the base image. But it didn't just write us a Dockerfile; it actually gave us an explanation of each step in the Dockerfile. So again, from the perspective of a junior engineer who is researching and doing the job at the same time, this is super helpful, because not only do you get a ready Dockerfile, it also explains to you what is happening on each line, so you can use it to learn in case you don't understand parts of the syntax. It describes all the steps here, like setting the working directory, copying the files, installing dependencies, and so on. And it doesn't even stop there; it actually gives you some additional useful information, like making sure that you have a package.json in your application and that that package.json has an npm start script inside. Once you have the Dockerfile that it provides, you can build the Docker image using this command, giving your image a name, and then it gives you a command to start a container from the Docker image. So basically all the instructions for the next steps are there as well: it didn't just give us what we asked for, it actually gave us more information for the next steps, which is super impressive.
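To make this concrete, here is a minimal sketch of the kind of Dockerfile ChatGPT produced in my case; the Node version, port, and image name are assumptions and may differ in your output:

```dockerfile
# Pin the base image to a specific Node.js version (good practice)
FROM node:14

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests first so the install layer can be cached
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# The app is assumed to listen on port 3000
EXPOSE 3000

# package.json is assumed to define a "start" script
CMD ["npm", "start"]
```

And then, as the follow-up instructions suggested, you would build and run it with something like docker build -t my-node-app . and docker run -p 3000:3000 my-node-app (the image name here is just an example).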
Now let's assume the description of one of the lines here is not sufficient, so you want to understand something in the Dockerfile in more detail. Let's say you want to understand exactly what this WORKDIR directive does in a Dockerfile, so we can ask it to explain exactly what the WORKDIR directive means in a Dockerfile. And this is a really detailed explanation, and what I actually love about this is that it explains the concept, or this directive specifically, with an example: it actually gives you another example Dockerfile and says, look, in this example we are setting the working directory to /app, and then when we execute the npm install and COPY commands, they will actually be executed in that /app directory. It also gives you a comparison to something that you may already know, like the cd command in a shell script, so basically changing into a directory and then executing commands from there. I think that explanation is actually very good: it's in simple language and, most importantly, it gives you examples and even a comparison so that you can understand it better.
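In other words, and this is my own shortened illustration rather than ChatGPT's exact wording, WORKDIR behaves roughly like a cd that applies to all of the following instructions:

```dockerfile
FROM node:14

# Everything after this line runs relative to /app,
# similar to doing "cd /app" in a shell script
WORKDIR /app

# This COPY puts package.json into /app/package.json
COPY package.json .

# This npm install runs inside /app
RUN npm install
```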
Now let's say we want ChatGPT to actually adjust the Dockerfile that it gave us and, instead of npm, use another build and packaging tool called yarn, which is an alternative to npm. Let's say we're using yarn in our project, so we want our Dockerfile to also be using yarn. So I'm going to instruct ChatGPT to use yarn instead of npm, and as you see, it actually replaced npm with yarn and gave us a new Dockerfile with yarn commands inside. Again, it tells you to have a package.json with a yarn start script inside, and the docker build and docker run commands.
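The yarn version it produced looked roughly like this; treat it as a sketch, since details like the yarn.lock line are my assumption:

```dockerfile
FROM node:14
WORKDIR /app
# Copy the dependency manifests used by yarn
COPY package.json yarn.lock ./
# Install dependencies with yarn instead of npm
RUN yarn install
COPY . .
EXPOSE 3000
# package.json is assumed to define a "start" script for yarn
CMD ["yarn", "start"]
```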
Now let's ask it to do even more optimizations in the Dockerfile. Right here on this line it is actually copying everything from the project directory into the Docker image. Let's say in the root directory of the application we actually have lots of other files: you have .gitignore, node_modules, maybe a tests folder, so there is a lot of code that you don't need in the Docker image, and we don't want to copy all of that into the image. So I'm going to ask ChatGPT to optimize this and only copy the relevant application files. Now, obviously ChatGPT doesn't know what I have in the application code, so I'm really curious to see how it actually solves this. I'm going to ask it to only copy the relevant files, or relevant application files, not everything, into the app image. And here we have the output from ChatGPT, and I actually think that it handled the task really well, considering that it doesn't even know what I have in the application. Basically, what it suggested we do is create this .dockerignore file; it even explains that it is similar to a .gitignore file, where you can specify patterns of files or directories, so basically anything that you want to ignore and exclude from ending up in the Docker image. It also gives us really good, realistic examples of excluding node_modules and the test directory if you have them in your application, and creating this .dockerignore file with those contents. It then also adjusted the Dockerfile to exclude whatever is in .dockerignore, though it took the version with npm and not yarn, so at this point you could actually ask it to rewrite the whole thing with yarn again. But I think the result is really good, because using a .dockerignore file is also one of the best practices to keep your images smaller.
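For reference, the suggested .dockerignore was along these lines; the entries are just the realistic examples it mentioned plus whatever else you don't want in the image:

```
# .dockerignore: patterns excluded from the Docker build context
node_modules
test
.git
.gitignore
```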
And now let's do one more optimization. Let's say, again as a junior engineer, you have heard of this new concept in Docker which is a "multi-stage build", but you have no idea how to actually create a multi-stage build or what the syntax for that would look like in a Dockerfile. So you're going to ask ChatGPT to do that for you; we're going to say "use multi-stage build" and let's see what it comes up with. Let's see what we got here. First of all, it gives you a brief explanation of how a multi-stage build works: you use multiple FROMs, each FROM statement starts a new stage in the build process, and you can use the files or output from the previous stage in the next stage using this COPY --from directive. And of course we get an example Dockerfile with two stages: we have this FROM node and then FROM nginx. In the first stage it actually builds the whole application with its dependencies and the code that it's copying into the image, and then in the next stage it takes that built artifact and runs it with an nginx server, exposing nginx on port 80 instead of port 3000 that we had before. Then it gives you an updated command to run the image and bind it to port 80. So I think it did a pretty good job, but again it didn't use yarn, nor the .dockerignore exclusion from before.
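The two-stage Dockerfile it generated had roughly this shape; the nginx stage and the /usr/share/nginx/html path are typical for serving a built artifact and are assumptions on my part:

```dockerfile
# Stage 1: build the application with Node.js
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the built artifact with nginx
FROM nginx:alpine
# Copy only the build output from the previous stage
COPY --from=build /app/build /usr/share/nginx/html
# nginx listens on port 80 instead of the app's port 3000
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The run command then becomes something like docker run -p 80:80 my-node-app.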
Finally, in the examples I also wanted to build an image with a specific tag, so let's actually direct ChatGPT to create the multi-stage build but with those optimizations: adjust the multi-stage build to use yarn, exclude app files from the .dockerignore file when copying them into the Docker image, and provide Docker command examples with a specific image tag of, let's say, 1.0. I have no idea if it's gonna get all of these things right, so let's actually see the result. And I think it actually got all my requests right: it replaced the npm commands with yarn commands, which is good, it also added back the .dockerignore exclusion, and it added the version tag to the Docker command examples, so we have my-node-app:1.0 in the docker build and docker run commands. So generally it was really good at understanding my requests and adjusting the details accordingly. Awesome, so I think it did pretty well with Docker. Now let's see another example with another tool. But before moving on,
I want to give a shout out to the sponsor of this video, "Firefly", which can turn any cloud resource you have into infrastructure as code. Trying to control your cloud? As the cloud continues to grow, it becomes increasingly difficult to see the full picture, and though your guard is always up, the landscape is totally fragmented and surprises lurk around any corner. Why stay in the dark? Light up your cloud with Firefly: analyze and identify any unmanaged resources with just a few clicks; Firefly automatically codifies any unmanaged assets into any infrastructure as code tool, keeping your cloud up to code. Unexpected changes causing your cloud to drift away? Get real-time notifications on any drift and remediate it as it occurs. Obtain a full overview of your entire cloud footprint across multi-cloud, various infrastructure as code tools, and Kubernetes clusters. Prevent system failure, replace manual work with automation to reduce toil, and reclaim control over your cloud with Firefly. So be sure to check them out; you can get started with their free tier offer. They also recently launched a report on the "State of Infrastructure as Code 2023". As usual, I will leave all the relevant links in the video description. And now let's continue with our video. This time I'm going to use Kubernetes,
and I'm actually going to stay in the same chat, because we already have a Dockerfile here and some example Node.js application, so I'm going to use this context to create a Kubernetes deployment file for this application. I'm gonna ask it to create a Kubernetes deployment manifest for this image. Okay, so we got the output from ChatGPT, and it actually looks really impressive for a couple of reasons. First of all, let's go all the way up and see our deployment file. Since I told it to base that Kubernetes manifest file on this Docker image that we created, it actually took that information and reused the same application name as when building the Docker image, but it also reused the image tag 1.0 and the container port from this multi-stage build, where we're exposing port 80. So it used all this information from the Dockerfile to create the deployment manifest file. Now, this is obviously boilerplate code that you can easily grab from the official Kubernetes documentation page, but this can actually be a very convenient way to generate these boilerplate manifest files, because you can reuse some of the context that you have and ask it to readjust it and so on. Another really impressive thing that it did is that it just went ahead and independently gave us an example of a corresponding service manifest file. So this is a Service component, and it used the LoadBalancer type, which is actually the type of service you would typically use in a production environment, so it used the correct type; it has the selector from the labels provided here, and again it used the target port from the deployment file. And then it provided me with kubectl commands for creating or applying the deployment and service files, and even querying the service after it gets created.
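Put together, the generated manifests looked roughly like the following sketch; the names and labels come from the earlier Docker context, but treat the details as assumptions, since your output may vary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: my-node-app:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-node-app
spec:
  # LoadBalancer is the service type typically used to expose apps in production
  type: LoadBalancer
  selector:
    app: my-node-app
  ports:
    - port: 80
      targetPort: 80
```

Applied with kubectl apply -f deployment.yaml -f service.yaml and checked with kubectl get service my-node-app, along the lines of the commands it suggested.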
Note that you will probably get different results for each query or request that you give ChatGPT; it will be something similar, obviously, but some details may vary. For example, you may have gotten the Dockerfile example with yarn as the default instead of npm, so it could be these kinds of differences. Now let's continue with that. I'm gonna ask ChatGPT to actually adjust the boilerplate deployment file (by the way, it also gave us three replicas instead of one, which is pretty cool), so I'm going to ask ChatGPT to adjust this deployment file and add some resource limitations to our container: add resource quotas to the deployment. Awesome, so let's see the output. It basically just
added the resource limits and requests configuration in the deployment file, and it used the most standard default values for each setting, which is pretty good actually. And here is the obvious advantage over just going to the official documentation and getting those examples from there: when you want to add this kind of configuration, first of all you have to go and search for the example syntax, then you have to remember exactly the right location inside your deployment file to insert that configuration. ChatGPT actually just does all of that for you, so you don't have to memorize the syntax of the manifest as well. I think that is actually a really big advantage, especially when you're adding lots of configuration, and when you're working with deployment files that are hundreds of lines long, maybe with multiple containers and their configurations, I can imagine this being super useful. And again, instead of just providing the example, it actually went ahead and also explained what resource limits and requests are: that the limits are used to prevent a single container from using too many resources and causing issues on the host. So I think these details, where ChatGPT just goes one step further and instead of just delivering you the exact result of your query actually gives you even more value, explaining the example conceptually and also giving you some follow-up commands that you may need, so you can just copy the code and execute it in your terminal instead of typing it out.
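The relevant addition sits under the container spec and, in my case, used fairly standard defaults; the exact numbers here are just illustrative:

```yaml
      containers:
        - name: my-node-app
          image: my-node-app:1.0
          resources:
            # requests: the minimum resources the scheduler reserves for the container
            requests:
              cpu: 250m
              memory: 256Mi
            # limits: the maximum the container may use, protecting the node
            limits:
              cpu: 500m
              memory: 512Mi
```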
So I think these details are actually pretty cool. Now I want to try out one more thing: instead of me just saying please add resource quotas or some other specific configuration, I want to give ChatGPT a more generic request so that it can handle the underlying details for me. Let's say I don't know exactly what the most ideal deployment manifest looks like, so I'm gonna ask ChatGPT to take over that task for me. I'm going to ask it to adjust the deployment manifest with production and security best practices. So I'm not giving a specific instruction to insert a configuration; instead I'm saying: whatever the production and security best practices are, please just add them to the manifest file. Let's actually see what it comes up with. All right, let's see the output. First of all, it started out by listing all the things that are best practices and production practices for a deployment file, and this list is actually really good. It says that resource limits and requests should be configured, and liveness and readiness probes, so that if a container isn't working it's automatically detected and restarted, etc.; not to hard-code any passwords or tokens inside, as a security best practice; and also to consider access permissions, so who can do what in the cluster, and so on. However, the deployment file was not actually adjusted, and there are a couple of other things that it could have mentioned as security best practices. So what I'm going to do is repeat my request and see what happens. First I'm going to say: it seems like some best practices are missing, so please adjust the deployment file properly with production and security best practices; without explanations, just provide the example manifest file. Let's see what happens. Okay, so this looks way better than before. There is one thing that I noticed when I was playing around with ChatGPT: sometimes it basically starts giving you the answer and it kind of stops at some point; I think there is sometimes a limitation on the output. That's why I told it to spare me the explanation and just give me the example, so it has enough space for the answer. It could be that it just stops midway, and you can just tell it to keep explaining or continue
with the response, or whatever. So let's actually take a look at the adjusted deployment file. After the resource configuration it actually added the liveness probe and the readiness probe, which are the best-practice configurations if you want to automatically detect whether the container is working, etc. It also added imagePullSecrets for when you're pulling the image from a private repository, which is super nice, because for all this configuration you again have to look up the syntax; I don't think you can remember all these attribute names and key-value pairs by heart, and exactly where they go, so it's really nice that it basically just puts all that together for you. It also added volumes for a secret and a config map, and also added the service account. So basically all the things that it listed above as production and security best practices, like using liveness and readiness probes, resource limits, secrets, roles, etc., it added all this configuration to the deployment file, which is awesome.
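Condensed into one sketch, the additions to the pod template looked roughly like this; probe paths, secret names, and the service account name are assumptions on my part:

```yaml
    spec:
      serviceAccountName: my-node-app-sa
      imagePullSecrets:
        - name: my-registry-secret        # credentials for a private image registry
      containers:
        - name: my-node-app
          image: my-node-app:1.0
          livenessProbe:                  # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:                 # only send traffic once the app is ready
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          volumeMounts:
            - name: app-secret
              mountPath: /etc/secrets     # no hard-coded passwords or tokens in the manifest
              readOnly: true
            - name: app-config
              mountPath: /etc/config
      volumes:
        - name: app-secret
          secret:
            secretName: my-node-app-secret
        - name: app-config
          configMap:
            name: my-node-app-config
```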
Now I'm gonna ask it to do one more configuration and basically just add it on top of this example, and let's see whether it can do that: on top of the new configuration options, also add a securityContext configuration in the deployment. So let's see. And it actually added the securityContext configuration on top of the previous deployment file. This is basically a configuration where we're saying that the container should run as a user that is not root, so any user which is not root; that's also one of the security best practices, not to run containers as root. So even if the container was built to run with the root user, with the securityContext it is actually overriding that configuration to avoid that security risk, and here it also sets runAsNonRoot to true.
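The securityContext it added boils down to something like this; the user ID is an illustrative value:

```yaml
    spec:
      securityContext:
        # Run the container processes as a non-root user,
        # even if the image was built to run as root
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: my-node-app
          image: my-node-app:1.0
```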
So this deployment file actually looks pretty good. And of course, when you start using it, again from a junior engineer's perspective, you don't know exactly how to create the other components which are referenced here; you can keep asking ChatGPT: okay, now how do I create this secret, or how do I create the service account, or configure the volumes, etc. So basically you can put together the rest of the configuration around this deployment file. Again, my general impression is that the output is actually pretty good considering the nuances and details; however, sometimes you actually need to have the knowledge yourself to validate the output, because if you are a junior engineer and you have this more or less vague idea and you don't know all the details, it could be difficult to validate the output, or in some cases to formulate the request properly to get a high-quality output. Now I'm going to try one more thing which is a little bit more complex, and I'm really curious to see how ChatGPT can handle it. I'm actually gonna stay in the same chat to reuse some of the context, and I'm gonna use ChatGPT to build CI/CD pipeline code in Jenkins. So after a couple of queries to ChatGPT we should actually end up with a Jenkinsfile which has the complete CI/CD pipeline code, or at least the main part of it, configured. Note that I want it to reuse the context that it already has from this chat, like the Kubernetes files, the Dockerfile, our Node.js application, and so on. So I'm gonna ask it now to write a Jenkinsfile for the complete CI/CD pipeline for the above Node.js application, including deployment to a Kubernetes cluster. Let's see. All right, now let's see the output. I believe that
the output is not always the same for everybody, so you may actually be seeing some completely different results; it's actually interesting to compare and see how many different options or versions ChatGPT comes up with for the same requests. So let's actually check out our CI/CD pipeline. First of all, it actually reused yarn instead of npm for the build and test stages, which is pretty good, so it's building the application and it's running the tests, considering there are some tests in the application. Then it has separate "build Docker image" and "push Docker image" stages, so I guess in some instances these stages will be put together instead of kept separate. And this is also very interesting: it automatically decided to use the BUILD_NUMBER environment variable from Jenkins as the image tag. This is actually very good, because it makes sure that a unique image with a unique tag is generated every time the build runs, so it's pretty good that we don't have a hard-coded image tag like 1.0. And then this is another interesting part: in the deploy-to-Kubernetes stage it actually knows that kubectl needs to use the kubeconfig to connect to the cluster and to authenticate with it. This kubeconfig file will include all the credentials to connect to the cluster, which is sensitive information, and because of that it assumed that it should come from Jenkins credentials. I think this part is really impressive: it automatically knows that this is sensitive or secret information and that it should be in a Jenkins credential, then automatically comes up with some credential ID that you can now use to create that credential, gives you the syntax to read the kubeconfig file from the Jenkins credential, and basically just executes the command with that configuration. So I think the result is actually not bad.
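For orientation, the generated pipeline had roughly this shape; stage names, the credential ID, and the kubectl command are reconstructed from memory and should be treated as a sketch, not the exact output:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'yarn install && yarn build' }
        }
        stage('Test') {
            steps { sh 'yarn test' }
        }
        stage('Build Docker Image') {
            // BUILD_NUMBER gives every build a unique image tag
            steps { sh "docker build -t my-node-app:${env.BUILD_NUMBER} ." }
        }
        stage('Push Docker Image') {
            steps { sh "docker push my-node-app:${env.BUILD_NUMBER}" }
        }
        stage('Deploy to Kubernetes') {
            steps {
                // The kubeconfig comes from a Jenkins credential, not from the repo
                withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                    sh "kubectl set image deployment/my-node-app my-node-app=my-node-app:${env.BUILD_NUMBER}"
                }
            }
        }
    }
}
```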
There are just a couple of things missing here: first of all, a "docker login" command to log into the repository before we can actually push the image there, which should happen first; and also, this is only updating the image in the deployment in the cluster, but it is not applying the whole deployment file or service file. So again, we can give it some additional instructions to adjust this pipeline with all these additional steps. I'm going to ask it to adjust the Jenkinsfile to have one stage for building and pushing the Docker image, including logging in to the Docker repository, which is on Docker Hub, and in the final stage apply the deployment and service files to the cluster, but keep the kubeconfig parameter. Let's see. Okay, so the result does not look exactly like what I was looking for, which could also mean that my prompt, my request, wasn't properly formulated. It adjusted the deploy-to-Kubernetes part with the right configuration, so it's applying the deployment and service file changes and it kept this kubeconfig file, but it basically just put everything together in one stage. So I'm gonna ask it to change the stages, because this obviously doesn't look very good: keep the build and test stages as they were initially, but create a separate stage for building the Docker image and pushing it to the Docker repository, and before pushing the image, make sure to log in to the Docker repository first. Let's see. Okay, now this looks better: we still have those
build and test stages, and here it is building the image, then logging in to Docker Hub, again using the credential, assuming that because it's a password it should get it from Jenkins credentials, and using it here. This is also actually a good practice: using the --password-stdin option instead of passing the password as a flag directly like the username. It could have also gotten the username from credentials like the password, but it didn't, and it also didn't add the Kubernetes deployment from the previous example. So I'm going to ask it to add the deploy-to-Kubernetes stage as in the previous example, also read the Docker Hub username as a credential, just like the Docker Hub password, but call them DOCKER_USER and DOCKER_PWD respectively, and also use the credentials function instead of withCredentials to read both of these values. So I actually put a couple of directives to adjust some details in the Jenkinsfile, and I'm really
curious to see what it comes up with now. Okay, so build and test look fine, and I think it got most of my instructions right. First of all, it changed the variable names to DOCKER_USER and DOCKER_PWD, which is great, and now it's reading the user from the credential as well, so it has this Docker Hub credential and it reads both values from it. Again we have this docker login command and then docker push, and it added the Kubernetes deployment stage to the pipeline code as well. One thing that it didn't do is replace withCredentials with the credentials function.
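What I was after with the credentials function is roughly this pattern, shown here with a hypothetical credential ID; note that for a single username/password credential Jenkins derives the _USR and _PSW variable names automatically:

```groovy
pipeline {
    agent any
    environment {
        // credentials() exposes DOCKER_CREDS_USR and DOCKER_CREDS_PSW automatically
        DOCKER_CREDS = credentials('docker-hub')
    }
    stages {
        stage('Push Docker Image') {
            steps {
                // --password-stdin keeps the password off the command line
                sh 'echo $DOCKER_CREDS_PSW | docker login -u $DOCKER_CREDS_USR --password-stdin'
                sh "docker push my-node-app:${env.BUILD_NUMBER}"
            }
        }
    }
}
```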
So this is kind of the base Jenkinsfile, or CI/CD pipeline, that you can then build on top of, and I think the result is actually not bad. Again, I assume that different people may get different results for the same request or query, so your Jenkinsfile may actually look a little bit different. But of course you would have to do some optimizations here. For example, the deployment file that we let ChatGPT generate for us has this hard-coded image tag, 1.0, inside, so of course you would have to set that dynamically, or adjust it to be my-node-app:${BUILD_NUMBER} or whatever the value is, instead of the hard-coded 1.0. That could be one of the optimizations. You could also add deployment to multiple environments, like development, testing, production, etc., so you can use this as a foundation to extend. I'm going to do one final thing here and ask ChatGPT to add another stage for a Slack notification about the pipeline build status: add a step to a notify stage to notify the team through a Slack channel about the build status. So this is an example that it gave us: a notify-Slack stage with a step that basically executes the slackSend function with a success message, and it also tells us how the Slack Notification plugin should be configured, with a link, so that's actually pretty good. And this is what I mentioned, where sometimes just in the middle of a reply it just stops, so it kind of gave us half of that Jenkinsfile, but it suggested that we put that as a final step after deploy-to-Kubernetes. One thing that I'm missing here is the failure notification: this one only sends a success message, but it should also send a message when the build failed. So I'm going to instruct it to consider both cases: the Slack notification should be sent either for failure or success, and should always execute after the pipeline, or after the build,
is finished, as the last step. Let's see. Okay, so this time it actually provided better results than previously. First of all, it's using a post block, which is executed after all stages have completed, regardless of success or failure, so that's what we need. Then it basically checks whether the result was SUCCESS, in which case it sends "build succeeded", otherwise "build failed". It could have used the success and failure blocks, which would have been a little bit cleaner and easier.
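That cleaner variant with dedicated success and failure blocks would look roughly like this; the channel name is an assumption, and slackSend comes from the Slack Notification plugin:

```groovy
    post {
        success {
            slackSend(channel: '#builds', color: 'good',
                      message: "Build ${env.BUILD_NUMBER} succeeded")
        }
        failure {
            slackSend(channel: '#builds', color: 'danger',
                      message: "Build ${env.BUILD_NUMBER} failed")
        }
    }
```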
Still, this looks better than the previous example. And then again it stopped just midway, so I can actually tell it to continue with the response, and it is going to give us the rest of the Jenkinsfile with the post block added. So it provided us with the Jenkinsfile with this Slack notification step at the end; however, I actually forgot to put that whole thing in a separate stage after the deploy-to-Kubernetes stage. So my overall impression for this specific task, building the CI/CD pipeline, is that it wasn't actually as good: you definitely need a lot of knowledge yourself to build a CI/CD pipeline, because you can't rely 100% on the results that ChatGPT gives you. However, it is definitely helpful in building the base syntax and configuration that you can then optimize. I think the most value you can get here is when you actually know what the pipeline should end up with but you don't have the exact syntax in mind, or you don't know exactly which plugins are available for a specific task; you can actually put together pretty good pipeline code with specific requests, so it could be valuable in those scenarios. And finally, as a very last step, I actually want to try out one more thing, which is taking this whole Jenkinsfile, and I'm actually going to leave
out this Slack notification part, so the Jenkinsfile without this last step, and I'm going to ask ChatGPT to give me a GitLab CI equivalent of this Jenkinsfile. I'm gonna paste it in, and I think I am actually missing the final curly brace here, so I'm going to add it; there you go. So basically it should give me the CI/CD pipeline configured in the Jenkinsfile, but for GitLab CI, so with GitLab CI syntax. I'm going to execute, and let's see what it gives us. All right, I think this time ChatGPT actually
did a really good job of converting our Jenkinsfile to a GitLab .gitlab-ci.yml file. It gives us some of the immediate differences as an explanation here, and the file actually looks pretty good: we have the stages that we had in the Jenkinsfile. However, it mixed up the stages, or the configuration of the stages, a little bit: it basically put yarn install and yarn build in the build-and-push stage, so those two stages were basically ignored. But some of the configuration is pretty good; for example, it is using before_script correctly, where it's logging into Docker before it builds and pushes the image, and it also detected that it needed an execution environment with Docker in it to execute those commands. In the deploy stage it again uses before_script to configure the kubeconfig location, in this .kube folder. So again, you will definitely need to do some adjustments and optimizations here, but as a base configuration file, especially considering that it was generated or mapped from the Jenkinsfile directly, I think it's a pretty good base configuration to build on.
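As a rough sketch of that shape, with the build and test jobs split back out the way I would want them, a .gitlab-ci.yml could look like this; the images, variable names, and kubeconfig handling are my own assumptions, not the exact generated file:

```yaml
stages:
  - build
  - test
  - build-image
  - deploy

build:
  stage: build
  image: node:14
  script:
    - yarn install
    - yarn build

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

build-image:
  stage: build-image
  image: docker:latest
  services:
    - docker:dind                  # Docker-in-Docker so docker commands can run
  before_script:
    - echo "$DOCKER_PWD" | docker login -u "$DOCKER_USER" --password-stdin
  script:
    - docker build -t my-node-app:$CI_PIPELINE_IID .
    - docker push my-node-app:$CI_PIPELINE_IID

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]               # use a plain shell instead of the kubectl entrypoint
  before_script:
    # Write the kubeconfig from a CI variable into the default location
    - mkdir -p ~/.kube
    - echo "$KUBECONFIG_CONTENT" > ~/.kube/config
  script:
    - kubectl apply -f deployment.yaml -f service.yaml
```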
So these were some of the examples that I thought would be realistic for using ChatGPT for DevOps tasks, especially from the perspective of someone who is learning and doesn't know these technologies very well, as examples of how they can use ChatGPT to make their work more efficient. There is definitely some room for improvement in terms of accuracy, because you also have to be able to validate the output and can't rely on it 100% when it doesn't give you accurate results. But for a little more experienced engineers, I think this is a really good way to become more productive and save a lot of time, especially in generating this base configuration or boilerplate configuration that you can then optimize and build on top of. Since you are also able to validate the result, I think it can save you a lot of time, and it can be a lot of help in scripting and writing configuration as code, or infrastructure as code, and all this automation code, so you don't have to memorize the syntax and configuration and all the details. So I think this could be a really helpful tool for that. Now, as I mentioned, ChatGPT is one of the projects
of OpenAI, and OpenAI actually provides an API that anyone can build on top of. So there are many companies and individual developers out there who create tools based on OpenAI's API: they reuse all these resources that OpenAI provides, all these trained models basically, and they optimize on top of that, like providing a better UI and a more user-friendly experience, or tuning the model further for a specific use case. For example, you could have a tool based on OpenAI's technology but specifically for answering legal questions, or a tool that lets you do some specific engineering tasks, and so on. So the idea is that instead of having a general-purpose tool that does pretty much everything, you have a tool that is geared toward a specific use case or set of tasks, and it does that one thing really well. As a perfect example of that, Firefly, the company who sponsored this video, actually created an open source CLI tool based on the ChatGPT model that specifically works for infrastructure as code generation, and it's called aiac. Basically, it's an open source command-line tool that lets you generate infrastructure as code templates, scripts, configuration code, utilities, queries, whatever, with a simple command-line command: basically all the tasks that we just did with ChatGPT. So let's
actually see that in action and how useful it can be when working on DevOps tasks. You can install it with a simple brew command, or even run it as a Docker container if you want; I have already installed it. You also need to generate an API key on the OpenAI platform itself and provide that API key through an environment variable when executing the aiac commands, in order to authenticate with OpenAI. It's actually a very simple, straightforward setup; it probably took me two minutes to install and set the whole thing up, and I'll provide the link to the guide in the video description. Once you're all set up, we can go ahead and use it to generate some configuration files for us. So I just executed a simple brew install command and then set the environment variable for the OpenAI API key to the API key that I just generated on their site. Once you have all of that set up, we're ready to go and actually generate some infrastructure as code scripts, manifests, any DevOps configuration files. Just like in our previous example, let's ask aiac to generate a Dockerfile for a Node.js application: the command for that is "aiac get", and then we say "dockerfile for nodejs application". Let's see what we get.
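Before looking at the result, my setup and this first query boiled down to the following commands; the Homebrew formula name and the environment variable name are what I used at the time, so double-check them against the aiac documentation:

```sh
# Install the aiac CLI (Firefly's open source tool)
brew install gofireflyio/aiac/aiac

# Authenticate against OpenAI with your own API key
export OPENAI_API_KEY="sk-..."

# Ask aiac to generate a Dockerfile for a Node.js application
aiac get dockerfile for a nodejs application
```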
So this is our Dockerfile: it uses npm, it basically installs all the dependencies, then copies all the files and starts the application using an npm script. And now this is actually a very useful part, where it asks you whether to regenerate the result, so you can ask it to give you a different example. We can press R for retry, and it's gonna try to generate a code example again. This time it gave us a slightly different configuration: for example, it took an Alpine version of the base image, which is probably going to be smaller in size, it also used a different work directory, and it is now also copying package-lock.json, and so on. So it actually gave us a different code example. And a very convenient thing you can do with it is save the results: whatever was output in the console, you can save it directly into a file. So I'm going to press S for save, and it's going to ask which file it should save it to, so we can tell it to create a Dockerfile and save the output there. Now we should have a Dockerfile that was created with that code example, so that's how you can use the
tool. Now let's try another example, where we ask aiac to generate an example Terraform script for creating an EC2 instance. Again "aiac get", this time "terraform for ec2 instance", or let's say for two EC2 instances. So we got this pretty basic Terraform script defining these two resources. Again we can do retry and see what other template it comes up with, and this time it gave us a slightly different example, with the AWS provider configuration for providing your AWS credentials and defining your region, and also, instead of having two separate resources, it basically just added a count attribute here to avoid some code redundancy.
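That second version was essentially this pattern; the region, AMI ID, and instance type are placeholders, not the exact values from my run:

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  # count creates two identical instances without duplicating the resource block
  count         = 2
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance-${count.index}"
  }
}
```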
Again, if you are unhappy with the results, you can retry as many times as you want, and once you're happy with the configuration, you can save it directly into a file. So I'm going to press S, and let's say I want to save it into a main.tf file; if we check main.tf, we should have that code example in the file. So that's basically how you can use this tool to help you generate some boilerplate code: give it commands for different technologies, for Docker, Terraform, Ansible, Kubernetes manifest files, whatever we need in our DevOps tasks, and use it as your small CLI DevOps assistant. Apart from the fact that you can use it on the command line, I find it convenient that you don't get lengthy text explanations with every output. ChatGPT explains the examples, which is great for learning and understanding the output, and you could actually tell ChatGPT not to give you an explanation, just the code; but here you just get the code snippet that you asked for, without any text around it, and you can then directly save it into a file, which is just a convenient thing. Now, you can go ahead and install and play around with this tool yourself if you want; as always, I will leave all the relevant links in the video description below, so make sure to check that out. So overall, what do I think about ChatGPT? Will it
replace engineers, or are engineering jobs threatened, and is everything I said in my latest video about getting into IT still relevant? I remember that with the first advancements of AI, people were saying that AI can do certain things better than humans and replace the boring, repetitive, tedious, and less creative tasks that humans don't want to do anyway, like memorizing stuff, calculating, researching, and analyzing tons of data, so humans would be free to do more thought-provoking and creative tasks. But now we're seeing AI doing those very human tasks pretty well too: AI can do exactly those creative, complex tasks, like creating digital art, images, and videos, often better than humans, and I mentioned the DALL-E project, one of OpenAI's models, which can do exactly that; also other creative things like troubleshooting an issue, fixing code, writing up a legal document, or writing a whole creative story or article. However, one thing that hasn't changed yet is that AI like ChatGPT still needs to be used by humans, and that's the whole point: using AI so humans can be more productive. So the fact that AI can do certain tasks better than humans does not necessarily mean that you need fewer engineers; it means that the engineers will become more efficient, so the same number of engineers can do more stuff faster, which accelerates the growth and speed of development, which obviously every company wants. Now, I believe
what is actually threatened by ChatGPT is the pretty outdated educational system throughout the world. The problem is that the modern educational system, even higher education like universities, focuses more on teaching people to memorize stuff, things which humans are weaker at than a computer: learning things from books, research results, things that have happened and that other people have done, like lawyers memorizing legal texts. They focus less on teaching creative and analytical independent thinking, or even teaching how to use AI tools to do more creative stuff. So professions like tax advisors, lawyers, graphic designers, especially ones doing more or less standard work, can very well be replaced by a better, cheaper, and faster AI. In those fields we may end up with only the top players, who still generate value over AI by doing more complex work, and they will be using AI to do their job at a higher level; that's what I personally think a logical development may look like for these professions. And this means we need to start working at a higher level: critical thinking, problem solving, and using AI to solve actual problems and generate value. Instead of simply carrying out the tasks that AI can do better than us, we need to be the ones envisioning and planning what needs to be done.
Now, in terms of engineers, I believe that AI will not be able to fill the deficit that still exists for different engineering roles, because the IT industry is the fastest-developing one, with more new fields being added and each field itself expanding and encompassing more skills. Think about some of the IT professions like blockchain developer, machine learning engineer, or data scientist, and how new these professions are compared to some traditional professions; they didn't even exist decades ago and weren't even realistic career path options. So maybe some engineering roles will replace or outgrow others, but if there is one profession or skill set that will grow in demand, it's engineering. Engineers play a crucial role in the design, development, and implementation of new technologies and systems, and their skills and knowledge will continue to be in demand as long as there are new challenges and problems to solve. So my latest video on getting into IT is probably even more relevant now than ever with the development of AI, considering that engineering jobs will become even more in demand. That being said, I
also believe that engineers who do not learn new skills, don't grow professionally, don't adapt to technological changes, and kind of stay in their comfort zone just doing the same tasks will be replaced or automated by AI at some point in the future. But I think that's really the exceptional case, considering that IT projects are actually very dynamic and you always have some kind of incentive to grow and develop your skills as an engineer. So as long as you as a developer grow your skills and knowledge at a normal speed, which often happens naturally when you work as an engineer, I think you're going to be more than fine. And even in the case where a specific engineering role or job may become automated through AI, having a base foundation in IT will actually help you transition to any other IT field way more easily, compared to people who are just getting into IT without an IT background. We see that with system administrators, who are increasingly transitioning to cloud engineering or DevOps engineering; they're just adapting, and it's of course way easier for them to transition into those fields having the backgrounds that they have. So overall, engineers are needed now more than ever given the speed of development in the tech world, but you need to be ready to learn new things and adapt to the changes in tech. And one of those new things you will need to learn as an engineer may actually be the skill of using AI tools, things like prompt engineering, which basically means preparing and formulating your requests in the best way to get the most optimal output from the AI tool. In that sense, AI should be considered an additional tool in the toolset of an engineer to do their job. So that's my take on the whole thing. I would be interested to know what your thoughts on AI are, and have you actually used any AI tool, maybe ChatGPT, at your work as an engineer already, and what were the results? Share them in the comments, because I'm sure it will be interesting for other viewers as well. And with that, thank you for watching and see you in the next video.