Hello everyone, this is Alonzo. Want to know how to get your first cloud job? Please register for our webinar. We'll teach you everything you need to know and answer your questions along the way. Hope to see you there!
My name is Richard, I'm cloud hired. I am cloud hired, yes, come and join and get cloud hired. I'm cloud hired. I'm cloud hired. Hey, Go Cloud Architect family, I'm cloud hired. Oh guys, I'm cloud hired. I'm cloud hired thanks to Go Cloud Architects, it worked for me. Now I'm cloud hired because of Go Cloud Architects' program. I am cloud hired. I am cloud hired. Thank you, Mike, and the Go Cloud team.
Welcome back, everyone. We're here for day three of the completely free AWS Certified Solutions Architect Associate 2023 course.
My name is Michael Gibbs, and I'm the founder and CEO of Go Cloud Careers, and I'll be your instructor throughout this week. I have my master producer here. It's day two; which day did I call it? You said day three, you were too excited. It's day two. Apologies, everyone. There's been a lot going on in our world right now, so even I get my days confused periodically. So, day two of the AWS Solutions Architect certification course. You know, some of the things we'll cover here will help you with AWS Solutions Architect interview questions. This is a complete AWS full course tutorial, free AWS training, and a free AWS certification course online. So with this free AWS course we're going to help you pass the AWS Certified Solutions Architect Associate exam. Just so you know, it's the SAA-C03 now; it used to be the SAA-C02, but this is the more modern exam, the SAA-C03. And we're going to have a lot of fun here. Now, I want to make sure that you all know some things we're going to do to help you. I want you all to sign up for the "how to become the ultimate cloud architect" or "how to get your first cloud architect job" webinar. It is tomorrow evening, and we will tell you everything you need to do to get your first cloud architect job; certifications are maybe three to 5% of what you need to do.
And I want you to all get these elite cloud architect or AWS Solution Architect jobs,
or Azure Solutions Architect jobs or Google Solutions Architect jobs. I want you to know
that you've got the best cloud computing career because that's what we're all about. So join
us tomorrow on the completely free, AWS Certified, I'm sorry, on the completely free "how to get your first cloud job as a cloud architect" webinar. It will be well worth your time, and it will be on a platform where we can speak live and talk to each other, so we can
make sure we can answer any questions you have to help you build the best cloud computing
career. While we're at it, I want you to download the completely free AWS Certified Solutions Architect Associate and Professional ebook and labs. You know, we're going to be
focused heavily on the concepts here. Why? Because you get hired based on the concepts
If you're a cloud architect or a solutions architect, guess what: you design, present, and sell. You don't even touch the technology. But it's more than that. If you're a cloud engineer, you're going to be performance tuning, and the challenge is knowing what to do, not how to do it. How to do it is nothing; you can go straight to the AWS website, and they've got complete step-by-step instructions. So in the labs, which you can download, you
can watch on your own time, you can practice those as well. But guess what, we're going
to focus heavy, heavy, heavy on the concepts because that's what it's going to take to
win the interview. And that's also what it's going to take to be able to know how to do
any of these jobs. I've interviewed a thousand AWS certified people who took the courses out there, which is why we do ours free, and while they all knew how to configure, they didn't understand what they were doing or why they were doing it, which meant they were unemployable. And they weren't getting the best jobs, because we're all about getting you all cloud hired. So make sure that you attend the completely free "how to get your first cloud job" webinar and get the free additional AWS resources, because I want you all hired,
passing your certifications and having the absolute best career. If you miss day one.
Don't worry about it. You can catch day one on our YouTube channel; each day stands alone. Go back and watch it tonight and enjoy day one. Today we're going to cover day two; I know I called it day three, but it's really day two today. I was all excited about
all the things we're doing. Because let's face it, that's just me, I love doing these
things. Many people ask us which AWS certifications to get. As a rule, the AWS Certified Solutions Architect Associate is the starting point. Now, which certifications you should get is
completely dependent upon the career you desire. But the AWS Certified Solution Architect Associate
is a basic intro to cloud computing. And that's what we're going to do. So let's start talking
about AWS Solutions Architect training, because this is an AWS cloud computing full
course at least as it pertains to passing the Certified Solution Architect Associate
exam. So yesterday, we talked about the orchestration of the cloud, getting our data to the cloud, and storage. Today, we're going to begin by talking about computing on the cloud. But before we do that, can you all give me a hashtag that says AWS Solutions Architect Associate Certification in the chat box? That way, I know you're awake, alert, and oriented. I'm a medical person, and we medical people like to know that people know where they're at. That's why when somebody falls down, we ask: do you know what time it is, do you know who the president is, do you know where you're located? So I like to know that you're all awake, alert, and oriented; in medicine we would call that oriented times three. That was my old fun career, internal medicine, before I went into tech 25 years ago, and let me tell you, I've never looked back. I love tech; tech is the greatest thing. So for all you guys that are putting Cisco, Cisco over there, I love that name; I spent about a decade at Cisco. AWS Solutions Architect certification, fantastic. Azure Solutions Architect certification, I love it. I'm seeing all these AWS Solutions Architect certification hashtags out there, so I know you're awake, I know you're here, and I know you are ready to go. Love it. Fantastic. And if you're a nurse, that's wonderful, wonderful, wonderful.
I was a nurse and then a nurse practitioner, and I was a firefighter paramedic even before
that. And healthcare people become great architects, because they're used to asking
people questions, and they're used to communicating with people. And for the nurse that's here: if you've ever had to sell your patient on taking a medication, which I know you have, or on following healthy lifestyle guidance, guess what? We do that with technology all the time
as architects. So let's have some fun and let's get into the content. A cloud, or cloud computing, is nothing more than renting space in somebody else's data center. That's it. It's somebody else's data center, rented space. And again, what is involved in these things? It's routers and switches and servers and storage, physical load balancers, firewalls, intrusion detection and intrusion prevention systems, and cabling. That's it. That's what the data center is, and guess what the cloud is: it's renting space in that. So when we talk about computing, guess what, we're going to be talking about the same thing, because cloud computing is nothing new. The first cloud I worked on was in 1996, and it has changed very little, even though the marketing people say otherwise. So, renting space in somebody else's data center. IKEA, I am thrilled to see you here, along with Lady Guitar. So let's
talk about computing. In the data center, we've got these physical servers, everybody, okay? Physical servers. And in the cloud, they still have physical servers. Now, in our data center, typically speaking, we take these servers and we put VMware on them and virtualize the servers. In today's modern world, we either use Nutanix, which is one of my favorite hybrid cloud solutions, or we use OpenStack, which is more commonly used than Nutanix. They're all great private clouds. And what do these things do? They enable us to take our servers, virtualize them, use them to full capacity, and scale up and scale down, just like the cloud. And when you're on the cloud, guess what? You still need servers. Even serverless uses servers; it's a marketing term, and no matter what, you're on servers. So in the data center, we call it a virtual machine. And guess what we're going to be talking about today on the cloud? Virtual machines. Nothing's new. So if you've used a VMware virtual machine, or a Microsoft Hyper-V virtual machine, or a Citrix virtual machine, or a KVM or QEMU virtual machine, guess what: we're going to be using the same identical stuff with a new name on the cloud. Like the song says: meet the new boss, same as the old boss. It's the same stuff. So AWS is going to call your virtual machines EC2 instances, because that's what they paid their marketing people to do. Google is going to call them Compute Engine instances. And Azure and Oracle are going to call them virtual machines, which is really what they are. It's a little like, you know, buying a glass of water: they call it water, not a hydration system, whereas AWS has to throw in the term elastic and come up with all kinds of funny names for everything. So when you pick virtual machines on the cloud, how do you size them? The same
way you've done it for the last two decades in the data center. You size it based upon CPU cores. You size it based upon DRAM. You size it based upon storage capacity and performance, which is typically going to be block storage, like we talked about yesterday, unless it's a dedicated server. And we size them based upon networking performance. That's it. So if you need to figure out what you need: how many cores do you need, and how much DRAM do you need? Now, I want to make it very clear: bare metal server performance, like in the data center, and cloud server or virtual machine performance are not the same. For example, take my favorite servers right now, which use these beautiful AMD EPYC processors. If we have a server that, say, has 128 cores and four terabytes of DRAM, that's 128 physical cores, physical little CPU chips on the CPU. Now, when we deal with these servers, they can do something called hyperthreading, which is where a core can split into multiple cores. If you've got a car that can drive at 90 miles an hour and you split it into two cars, each car can drive 45 miles an hour; same thing. So in the data center, if you've got 128 physical cores, they split into 256 virtual cores. When you buy a 128-core server, say from Dell or IBM, you're getting 128 physical cores. When you buy a 128-core server on the cloud, you're getting 128 virtual cores. So it's basically 50% of the performance of your actual server. Keep that in the back of your mind. Now, the reality is, in many cases those virtual cores aren't all being used, so it's closer to, say, 75% of the performance of the actual server. But you've got to keep that in mind: physical cores in the server versus virtual cores in the cloud. So when they're selling you virtual cores, they are not exactly the same as physical cores, and you're going to have to test and get the performance that you actually need.
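If you want to see exactly how many vCPUs, physical cores, and threads per core an instance type gives you before you size anything, you can pull that straight from the AWS CLI. This is just a quick sketch; the instance type m5.2xlarge here is only an example, so swap in whatever you're evaluating.

# Show vCPU, core, threads-per-core, and memory details for an instance type (m5.2xlarge is a placeholder).
aws ec2 describe-instance-types \
  --instance-types m5.2xlarge \
  --query "InstanceTypes[0].{vCPUs:VCpuInfo.DefaultVCpus,Cores:VCpuInfo.DefaultCores,ThreadsPerCore:VCpuInfo.DefaultThreadsPerCore,MemoryMiB:MemoryInfo.SizeInMiB}"

Remember that the vCPU number counts hyperthreads, not physical cores, which is exactly the roughly-50% difference we just talked about.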
Now, the marketing department decided to take something that is basically simple and efficient and come up with a bunch of silly names. Look, here's the reality: if I need a server, I'm going to basically see which has the right cores, which has the right DRAM, and which has the right network performance. I don't care about any of these silly letters; I'm going to google it. But you know, AWS has pre-made servers for specific pre-made uses, which may or may not match your needs. They basically begin with ARM-based workloads; ARM-based instances are great for web servers, they're super low power draw, but they're not the highest-performance thing either. They've got your compute optimized, which you can work with, for example. They've got your G-series servers, which have GPUs in them; basically, use them for machine learning if you're going to build your own machine learning environments using, say, PyTorch, TensorFlow, or any of the machine learning tools you might use. You've got your I-series for high-speed storage. You've got your M5 for general purpose; you can use it for databases. Your M6, again, general purpose, but more for application servers or gaming servers. You've got your R-series, which are memory optimized, for when you need lots of memory, like a cache, for example. You've got your T3, which are basically burstable; I think they can squeeze a little more capacity out when needed, so realistically speaking, a test environment. You've got your X1s, which are a really low price for DRAM when you need huge in-memory databases; for example, you need four or six terabytes of RAM. These are the systems you'd actually use. So kind of keep that in the back of your mind. They pre-made these, and they may or may not be exactly what you're looking for. But really, you're going to be sizing, no matter what these things say, based upon CPU, DRAM, and network performance.
Now, realistically speaking, your traditional virtual machines on AWS support Linux and Windows, whatever you're going to stick on them, whether it be Red Hat or Ubuntu. Their Amazon Linux is an offshoot of Red Hat Linux, just like Oracle Linux is an offshoot of Red Hat Linux. The most common operating system on the cloud is not Amazon Linux; it's Ubuntu Linux, which is what most people use. But you can use any one you want, whichever your applications are going to perform better on. Typically, machine learning applications, for example, are better on Ubuntu; for stability, things are often better on Red Hat. But it's up to you and your systems administrators to determine what's the best operating system for you. For the most part, you can put the application on many Linux servers, but somebody closer to the Linux side, the Linux engineer, is going to help you select that. Now, AWS also has an EC2 instance that runs the Mac operating system. They call it mission critical, but it's not mission critical, and here's the reason why: it runs on a Mac Mini, and a Mac Mini is not a mission-critical system. It does not have a Xeon-type CPU or an EPYC-type CPU, it does not support error-correcting RAM, and it doesn't have any fault tolerance in it whatsoever. So it's not mission critical. But those Mac EC2 instances are great for, say, an application developer that needs to compile code for the Mac and doesn't normally use a Mac. Now, you can use pre-built virtual machines like many people do, or guess
what, you could use a custom virtual machine, you can create it just like you would in the
data center. Pre-built virtual machines are available from Amazon as machine images. A machine image is basically an image of a server, no different than the VMware images of servers that you've been using for decades. Azure has their own images, Oracle has their own images, Google has their own images. And typically, your machine image is going to need a compute system to run on, and it's going to need some block storage to store your data. Now, like I said, you can build them, you can use a stock operating system and build on top of it, or you can buy one. So let's say you need some real security; you're not going to be using AWS WAF, and we'll talk about that when we get to the security section. You're going to be using an industrial-grade firewall if it matters, say something from Palo Alto, or from Cisco, or from Fortinet, or from Check Point, and you're going to be getting that from the marketplace. It's a pre-built virtual machine with an optimized, fully hardened operating system, and it comes with the vendor's firewall services pre-installed. We can also take one of our virtual machines in the data center, convert it to an Amazon virtual machine image, and relaunch it in the cloud. That's what we're talking about: basically, we're just uploading our servers. And just like we do in the data center, where VMware gives you an environment to take a physical server, a bare metal server, and turn it into an image that you can then run on the VMware server, we do the same thing on the cloud, no different, and there are tools to do that. So let's talk about the Amazon machine image. As I mentioned, it has an operating system, it's going to have launch permissions, and it's going to have a block device mapping, which basically says which block storage goes with it. When you've got this image of a system, it's a single file that contains everything on the hard drive. It's really cool. For many of you that are a little older, like me: if you've used Symantec or Norton Ghost, Symantec Ghost, we could take a whole Windows system, copy it to an image, and then push that to 100 different computers that were configured identically. That's an image; same thing. The image is a snapshot of a machine. We can stick that in different regions, and we can stick it in different clouds if we want to.
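If you want to see what that looks like in practice, here's a minimal sketch using the AWS CLI: create an image from a running instance, then copy it to another region. The instance ID, image IDs, region names, and image names here are placeholder assumptions, not values from this course.

# Create an Amazon machine image from an existing instance (placeholder ID and name).
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-server-golden-v1"

# Copy that image to a second region; copy-image runs against the destination region (placeholder values).
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0123456789abcdef0 --region us-west-2 --name "web-server-golden-v1-copy"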
Now, AWS would tell you to take that image and stick it in a different region for disaster recovery. It's insanity, and I'm going to tell you why it's insanity: you don't back up your stuff to the same place, and a single cloud is the same place, even if it's a different region. You back it up to another environment. If I was worried about my bank going bankrupt, like Silicon Valley Bank, which, you know, kind of had some financial issues, I wouldn't put my money back in Silicon Valley Bank; I'd use a different bank. So if you're going to do some disaster recovery, and we'll talk about that, don't stick it in the same cloud. That's as ridiculous as being worried about your bank, putting your extra money in the same bank, and hoping the bank's okay. It just doesn't make any sense. Now, when you're dealing with virtual machines, what we're really talking about
is as follows: you've got your operating system. Now, if we wanted to set up a virtual machine to configure itself, like we've done forever, we'd write a script, typically a bash shell script, or on Windows we'd write a PowerShell script. Systems admins have been doing it forever. We can do the same thing on AWS. Let's say we've got auto scaling and we want to configure our servers as they come up. We can basically write a little script, a mini bash shell script, that AWS calls a bootstrap script; remember bootstrap scripts for your exams. And realistically speaking, we can say: update the operating system. So let's say it's an Ubuntu system: sudo apt-get update, sudo apt-get upgrade. We could have it immediately do that, and then we could have it install our web environment. I'm an architect, and architects don't touch the technology, but if I remember right, on an Ubuntu system it's sudo apt install apache2, Apache. We could set that up to install the web server, for example. Basically, they're just simple scripts.
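As a rough sketch of what that bootstrap (user data) script could look like on an Ubuntu instance; the exact packages and the web page text here are just illustrative assumptions:

#!/bin/bash
# Bootstrap / user data script: runs once when the instance first boots.
apt-get update -y
apt-get upgrade -y
# Install and start the Apache web server (the package is apache2 on Ubuntu).
apt-get install -y apache2
systemctl enable --now apache2
# Drop in a placeholder home page so we can see the server is up.
echo "Hello from a bootstrapped web server" > /var/www/html/index.html

You'd paste something like this into the user data field when you launch the instance, and every server that auto scaling brings up configures itself the same way.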
Now, we're going to talk about the way you rent your systems. The cloud is like a hotel: you rent a room at a hotel, you rent space in the cloud in somebody else's data center. That's it. So how do you rent it? Well, there are a lot of ways you can rent it, and we're going to talk about the renting options. The first is on demand. What does that mean, and when would I use it? I started a new website for my cat, Cindy, and I have absolutely no idea how many hits per day she's going to get. If I knew, I would specify it and I'd get a cheaper rate. But if I don't know, I can stick it on an on-demand server and pay by the second, and what will ultimately happen is, if I need capacity, I'll add other servers. Now, it's convenient not to know what I'm going to pay for, so guess what: I pay extra, because I don't know. So kind of keep that in the back of your mind. An on-demand instance is as follows: you purchase it, and you don't exactly know how much you're going to use, so you go on demand. On demand is very useful, and even if you do know what your capacity is, you're probably still going to use on demand. I could also specify five servers and use on demand for additional capacity. So with on demand, you're not 100% sure what your needs are or where you might scale. You pay more, but you only pay for what you use.
The next purchasing or renting option, if you want to call it that, is something called a reserved instance. What is that? Well, it's as follows: I know I need 10 servers with, say, 24 cores and 128 gigs of RAM. I tell AWS I'm going to use 10 servers with 24 cores and 128 gigs of RAM, and guess what, I tell AWS I'm going to buy it for one year, or for three years. The longer I commit to it, the cheaper the price they'll give me, because I've committed to it. Then again, if you're going to commit to something, it's really no different than calling Dell and buying it for your own data center; but there's that. When you tell them you're going to buy their stuff for a long period of time, it enables AWS to know how much capacity they're going to need for their systems, and they'll know whether to buy new servers to support your needs. That's the way that works. So: on demand, pay by the second, you pay the highest price. Reserved, you reserve it consistently, full time, which is what we're talking about; you pick a duration from one to three years, and the longer you commit to buying somebody else's stuff, the cheaper the rate you get. And the next thing we'll talk about is scheduled
reserved instances. So let's say for example, I know that I'm going to be running a big
batch computing job, and it's going to be every Friday, every Saturday and Sunday, and
it's going to be for 48 hours straight. I can tell AWS and schedule this capacity, and I'm going to prepay, or at least commit to paying (not necessarily prepay) for, say, three years, and I'll get a cheaper rate than if I used on demand. So: on demand, pay by the second, highest price. One of the lower prices is going to be reserving it for one to three years. Scheduled reserved, we're going to pay more than if we reserved it constantly for three years, but we're still going to get a discount on our rental price. Now, the cheapest option is something called
the spot instance; Google calls it a preemptible instance. What's a spot instance? AWS usually has extra capacity, and you can bid in an auction-like manner on this extra capacity. If your bid wins, you get cheap access to compute power. Sounds great, right? There's always a caveat. Always a caveat. If you're using a spot instance and somebody outbids you on that instance, your system gets shut down, whatever you're working on stops, and you're out of luck. So spot instances, which are basically virtual machines that you bid on, are great if you've got something that's not super important and you've got systems that are tolerant of being turned on and shut down, but don't put anything that matters on them. So: on demand, highest price. Standard reserved instances, where you reserve for one to three years, are one of the lower pricing options for long-term, consistent use. Scheduled reserved: I'm going to reserve it every Saturday and Sunday, or something like that, and you get a discount for it. Spot instances are the cheapest, but do you really want to put your systems on something that can get shut down because somebody else outbids you, and you're out of luck and you're offline? Probably not. So maybe good for experimentation.
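Just to make the spot idea concrete, here's a hedged sketch of launching a spot instance with the AWS CLI; the AMI ID, instance type, and key name are placeholder assumptions, not values from this course.

# Request a spot instance by adding market options to a normal launch (placeholder IDs and names).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key \
  --instance-market-options 'MarketType=spot'

If AWS needs that capacity back, or your price is outbid, the instance can be interrupted, which is exactly why it only suits work that tolerates being shut down.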
Now, we'll talk about tenancy options. Typically speaking, when you're dealing with a cloud computing environment, there's something called oversubscription, and your stuff is subject to it. What's oversubscription? Just like your internet service provider: your internet service provider does not have enough capacity for everybody to use their stuff completely, 100%, at the same time. AWS won't really talk about this, but they're a service provider, and all service providers do this, generally speaking. So if I've got 120 cores on a server, I might sell 156 cores, because we assume most people aren't using them all at the same time. Now, if everybody uses their stuff at the same time, there are performance constraints. I'm sure all of you have experienced oversubscription in your life, when we all got stuck working from home during COVID. I mean, I've been working from home for decades, but when we all got forced to work from home during COVID, and all of a sudden everybody's in their house, and the kids are watching YouTube videos, and somebody else is playing games, and somebody else is watching Netflix, and you're trying to work, and your internet came to a screeching halt, it's because your internet service provider did not have the capacity for all the people that would be home at those times. So anytime you're dealing with cloud computing, it's a service provider: know they're going to be oversubscribed on their networking and on their compute.
But let's talk a little bit more about shared tenancy. This is standard: you rent some compute space from your cloud provider, and your stuff is on there, your competitor's stuff is on there, or somebody else's systems; all on the same server. That's shared tenancy, and it's the standard unless you do something about it. Now, the next thing is something called a dedicated instance. This is basically where you say: Go Cloud Careers is reserving this entire server, and all of our virtual machines will be on it. We can guarantee that we're not oversubscribed this way, and we can guarantee, for security purposes, that nobody else is on our server. Is there, generally speaking, a security risk of having other people on the same server? Not really; the hypervisors are pretty darn secure. But if we want to know the whole server is ours, we can do so. Now, what's the next option? The next option, if you really need performance, is to get a bare metal server. This is going to give you the same performance that you had in the data center. Hey, wait, I can get data-center-like performance in the cloud? That's how you do it, with a bare metal server. And on your bare metal server, you can do whatever you need. Maybe you're running an application that needs access to the actual MAC address on the Ethernet card, or a serial number, or you need to stick in a physical security key; those kinds of things. That's the bare metal server: when you need access to the physical hardware. And guess what, if you've got a staff that's fully trained on Nutanix or VMware and you don't want them to have to learn any of this AWS stuff, you can purchase bare metal servers and run your stuff directly on them. Then you don't even have to train your people on AWS; AWS is basically transparent to them. You're just using their stuff, and it feels like Nutanix or it feels like VMware. For the most part, it minimizes the learning curve. So now we know a little bit about the tenancy options, and we know a lot about the purchasing options.
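To tie the tenancy options back to something concrete, here's a hedged sketch of how you'd ask for them from the AWS CLI; the instance type, AMI ID, and availability zone are placeholder assumptions.

# Launch an instance with dedicated tenancy, so your virtual machines don't share the physical server (placeholder IDs).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --placement Tenancy=dedicated

# Or allocate a whole dedicated host, which you control more like your own physical server.
aws ec2 allocate-hosts \
  --instance-type m5.large \
  --availability-zone us-east-1a \
  --quantity 1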
Now, there's more to cover on these virtual machines. Chris, how long have I been speaking? Because we may need to take a few questions. Yeah, it's time to take some questions. Let's take some questions. All right. And I'll quickly talk about the DevOps-versus-architect question for about one minute at some point, even though it's not related to the concepts. All right, so before we take some questions, I want to ask everybody: if you like what you're hearing and seeing, hit that like button, hit that subscribe button, hit that notification bell button, so you don't miss the next two sessions that we've got. I see we've got 344 people watching but apparently only 104 of you like it, so if you're enjoying it, make sure to hit that like button. So let's get to some of these questions, and then we'll get back to the content after Mike has finished with them. Why don't you go ahead and start with that DevOps-versus-architect question. I'm going to deal with the DevOps one, as well as the Linux and Python question, at the same time; that came from Butser. DevOps is a
career for software developers, because you must be a great programmer first, and DevOps engineers automate software release cycles. They get involved in tools like Jenkins and Spinnaker, and they are 100% about automation. They are techies, all day long, automating software releases. Cloud architects design, present, and sell technology; we don't touch Linux, we don't touch Python, and we don't do DevOps. It has zero relation to our career. It is completely somebody else's career, kind of like the difference between an airplane pilot and an airplane mechanic. An architect is a business executive who designs, presents, and sells a technology solution, and there's a list of everything you need to know to be an architect; we're going to talk about all of it tomorrow, on the completely free "how to get your first cloud job" webinar. DevOps is a completely different career. Linux administration is completely different. Architects are not allowed to touch the technology, whether they work for AWS, Azure, Google, or Oracle, or they work for Accenture, Capgemini, or Deloitte. Architects design it, present it, and sell it. Cloud engineers go and build it. DevOps engineers go and build and automate things. Once it's finished, it goes on to a maintenance team, the SysOps people. And once it's done that, if anything breaks, they call the technical support center, which is a different team. So now let's get back to the content. What's the difference
between a bare metal, physical, and virtual server? A physical server is a bare metal server, meaning you call Dell, you call IBM, you call HP, and they ship you a server; that's bare metal. A virtual server is what you get after you take that physical server and install a hypervisor: the AWS Nitro hypervisor, the VMware hypervisor, Nutanix has a beautiful hypervisor, or KVM, which is the hypervisor I'm pretty sure is used in the OpenStack cloud. Then you chop that server into multiple little virtual machines, and each one is a logical image. In fact, oh, actually, I thought I had it in here, but for some reason maybe I don't have a picture. That's such a good question that I'm actually going to draw it out for you; give me a minute. Okay, so let me do this. I'll go back to my whiteboard, because I love this question and I want you guys to understand it. So let's go to, no, this is not it, okay, the tenancy options are here; let me create my slide, which is really what I wanted to do. Okay, so here we go.
Because this is a great question. We got a physical server. So let's say this is our
server hardware. This is our server. Next thing we're going to do is we're going to
install a thin layer of software called the hypervisor. And the hypervisor is what's involved in chopping up the server into little mini servers. Here, what we're going to be doing is creating virtual machines, all on the same server. And what happens is, each virtual machine is going to have its operating system, and then it's going to have its application and its dependencies. So we can have one of these on the server, and typically speaking we'll have multiple: we can have another one, and another one, and another one, and another one. So that's typically what we're dealing with. We've got our server; on top of that, we've got a hypervisor; and then we've got all of our virtual machines, and one virtual machine could be Windows, one virtual machine could be Red Hat, one virtual machine could be Ubuntu. So what we're really dealing with is taking one system and chopping it up into multiple smaller systems. A virtual server is any one of those things we're dealing with; any one of them is a virtual server. The physical server is the
thing that you actually buy. Bare Metal is the physical server that has nothing on it.
Great, great, great question. How often do spot instances get shut down? It's based upon utilization, bidding, and capacity, and it can change over a single day. Is it worth the risk? It depends on what you're doing. If I'm doing a test, it might be worth the risk. If I want to test a thousand routers running as virtual software and stress them for a period of an hour, yeah, it might be worth the risk. Am I going to put something important on it? Of course I'm not. So everything with architecture, everything in business, is based upon business requirements, 100%. There's no best tech ever; there's what works. And yes, there are entry-level cloud architect roles; I get people hired every day with zero background whatsoever. Are there any good use cases for spot instances? Yeah, if it doesn't matter: if you're looking for cheap compute capacity and it doesn't matter, yeah. But for anything real, I wouldn't be using it. Test environments are beautiful. So, a dedicated instance is different than
a dedicated host. A dedicated host is a bare metal system that you can do anything you
want with: install VMware ESXi, install Ubuntu Linux directly on it, have physical access to the hardware, and be able to use all the cores at maximum capacity, including physical and virtual. A dedicated instance is basically running the AWS hypervisor, and you create all your virtual machines, which are specifically AWS virtual machines, on there. So kind of keep that in the back of your mind; there is a difference. Can you use VMware virtual machines on AWS, and is that a problem with Google Cloud? Well, what you can do is create your virtual machine, which is what I would do, and then have that converted into an AWS virtual machine image. You could have another one converted into an Azure virtual machine image and another one for Google, and then you can run it on three different clouds. So yes, you can convert one to the other, but I would create my own virtual machine the original way, like in VMware, and then move it. Is VMware a hybrid cloud? Running VMware and connecting it to a public cloud is a hybrid cloud, just like running Nutanix and connecting it to a public cloud is a hybrid cloud, just like running OpenStack and connecting it to a cloud is a hybrid cloud.
What are the use cases for a dedicated host? Dedicated host: I want 100% total control, I want to put VMware on my system so I don't have to deal with any of these AWS management consoles, and I want to do it my original way. That's a dedicated host. Or there are certain critical business applications that require access to the physical hardware and look for the physical hardware at boot-up for licensing purposes; those must go onto dedicated hosts. A dedicated instance would probably be a good case if, say, I'm the US government and I don't want the Chinese government's systems, or the Russian systems, or the Greek systems, or the Israeli systems, on the same physical server as mine. That's when you use a dedicated instance. Good question. We will discuss NAT gateways and NAT instances,
And we will cover that when we get to that content. What are some fundamental skills
needed to have a better understanding of this course? Networking and the data center: if you don't understand the network and the data center, you will never understand the cloud. Please join us tomorrow on the completely free "how to get your first cloud job" webinar, and we will give you a 100% complete list of every single skill you need to know to be employable as a cloud architect. So let's get back to the content. If you can, give me a hashtag, and the hashtag could be AWS Certified Solutions Architect Associate. Okay, so let's get back to the content.
Now, in the AWS world, they tell you that you secure your virtual machine access with a security group. I'm going to tell you right now, that's great if you want to get hacked, but if it really matters you're going to need to do much more: you're going to need host-based firewalls and host-based intrusion detection, and you're going to have to be removing unnecessary packages from the operating system and closing unnecessary ports. But in AWS certification world, which is very artificial in nature and contains about 5% of the knowledge you need to build any good career, they say you secure your EC2 instance by using a security group, which is basically like a host-based firewall. That doesn't mean you shouldn't use your own host-based firewall too; all high-security environments do. So what does it look like? We're going to talk about the concept of a security group, and we'll talk about it much, much more when we get to the security section, but I just want to briefly touch on it. Now, when you set up your virtual machines, and if you download our free labs (we gave you the link earlier in the video), you'll be setting up security groups. Basically, what it is, is a firewall in front of your traffic before it gets inside of your virtual machines. Keep that in the back of your mind, and we'll discuss it in much, much more depth when we actually get to the security section.
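As a quick illustration of what a security group rule looks like when you set it up yourself, here's a hedged AWS CLI sketch; the VPC ID, group name, and the idea of allowing HTTPS from anywhere are placeholder assumptions for the example.

# Create a security group in a VPC (placeholder VPC ID and name).
aws ec2 create-security-group \
  --group-name web-sg \
  --description "Allow inbound HTTPS" \
  --vpc-id vpc-0123456789abcdef0

# Add an inbound rule: allow TCP 443 from anywhere (placeholder group ID).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0

Anything not explicitly allowed stays blocked, which is why it behaves like a basic firewall sitting in front of the instance.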
So how do you give an IP address to a system? Well, when you set up your VPC, and we'll talk about that much more later, you're setting up your virtual private data center, because that's what your VPC is. They call it a virtual private cloud, but it's a virtual private data center, not a virtual private network. What you're dealing with is that you have to create your own IP address base and CIDR range, and inside of that, every single virtual machine that you have is going to be given an IP address on the subnet that you create, and it's going to receive that address via the Dynamic Host Configuration Protocol. Now, if it's inside of your VPC and it's internal, you're going to be using private IP address space. And if you need to connect it to the internet, you're going to need a global or public IP address, which of course the world calls a public IP address and AWS marketing people named an elastic IP address; I don't know where they come up with these marketing names. So keep that in the back of your mind. Now, you can also get an IPv6 address. All IPv6 addresses are public; we don't have any kind of RFC 1918 private IP addressing space like we normally would, so keep that in the back of your mind. If your instances are automatically assigned an IPv6 address and you don't need IPv6, shut it off. The more addresses you have, the more things are open, and the more the world can hack you. So only use what you need.
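If you do need one of those public (elastic) IP addresses, this is roughly what it looks like from the CLI; the instance ID and allocation ID are placeholders.

# Allocate a public (elastic) IP address from AWS.
aws ec2 allocate-address --domain vpc

# Attach it to an instance, using the AllocationId returned above (placeholder IDs).
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0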
Now, how do you manage these systems? Well, how would you manage any virtual machine? You can secure shell into them, SSH, just like you would any other Linux machine, or a router or switch or firewall. You could use the AWS Management Console, which is a web-browser-based way to do things; it is super easy, basically self-explanatory, click, click, click, click and it's done, and if you don't know, you can just google the instructions. Having said that, it is slow versus SSH, which is much faster if you know the commands. And great question there from George: do I have anything against the marketing team? No, but there are millions and millions of people that are 100% confused about what something is because the marketing people made up funny names. If it's a Windows system, you can manage it via the Remote Desktop Protocol, RDP. And you can do a lot of the management via the software development kit. And here's how you can really set these things up: the whole world uses Terraform. They'll have one DevOps engineer deploy thousands of these at the same time, in many cases, and they can do it all via infrastructure as code. So that's how you can set up these things.
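Day to day, managing one of these virtual machines mostly just means connecting to it. A minimal sketch, assuming a Linux instance launched from an Ubuntu image with a key pair called my-key and a placeholder address:

# SSH to a Linux instance using the key pair chosen at launch (placeholder key file and IP address).
ssh -i my-key.pem ubuntu@203.0.113.10

# For a Windows instance you would instead open a Remote Desktop (RDP) session to port 3389 on the instance's address.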
Now we're also going to talk about Outposts. What is an Outpost? It's fairly new. It's a fully managed service where your virtual machines, your EC2 instances, run on an AWS-supplied appliance: a physical appliance that's shipped to the customer and plugged into the customer's data center. Why? Because the latency of going to the cloud is high versus running a virtual machine in your own data center. So you can order a server directly from AWS and stick it in your data center, and that's called an Outpost. I'm going to tell you right now, you could buy that server from Dell for far, far, far less than it would ever cost you to rent it from AWS; at least 400% less. But it's up to you and how you want to do it; it's three to four hundred percent cheaper to buy your own server. If you buy your own server, though, you're going to have to have some knowledge of how to set up the virtualization. If you buy this AWS Outpost, the cool thing is you can just click a few buttons on the AWS Management Console and set up your servers, and it's fully managed by AWS; you don't have to think about patching your hypervisor and things like that. So it's the convenience factor. In fact, when you go to a convenience store, like a 7-Eleven in the US or a Walmart in the US, you pay a lot more than you would in the supermarket, right? For convenience. That's what we're talking about here: they do all the work for you, they manage it, and because of that you pay more, but you get great performance, close to buying your own server. Okay, now we're going to get into databases. So to make
sure I know you're here, give me a hashtag that says databases. I see a hashtag that says databases. We don't use acronyms; as architects, spell it out: databases. Your CEO doesn't know what you mean with an acronym, and your hiring manager doesn't know what you mean. The reason we never use acronyms in technology (some people do, but you never should) is that they can all mean ten different things. Somebody says VM, and I don't know if it's a voicemail, vulnerability management, or a virtual machine. And that's the point: there are millions of things that all use the same abbreviation, so lack of clarity in communication causes errors. I want you to have a great career, so avoid acronyms whenever possible. So we're going to talk about databases. Even DB, to me, could mean decibel. So keep that in the back of your mind; that's why we never use acronyms, because they all mean different things to different people. So what is a database? A database is an application
that enables us to store large amounts of information, large amounts. It facilitates the sorting, calculating, reporting, and sharing of information, and it is a critical component of modern applications. Now, when we talk about these databases, I'm going to cover all of them, and I'm going to tell you which ones you should probably never use in real life. Why? Because they will make multicloud impossible, and remember, 98% of organizations use multicloud. So I will cover them all so you can pass the AWS exam, and then I'm going to tell you don't use this when it comes to things like DynamoDB and Amazon Aurora and a few other ones. But we're going to cover them all, and then I'll tell you why you should or shouldn't use certain ones. Now, when we're dealing with databases, for the most part we're dealing with the same ones that exist everywhere. We've got relational databases like Oracle, we've got NoSQL databases like MongoDB or Apache Cassandra, and we've got data warehouses, which are things like Postgres. We'll talk a little bit about creating data lakes; I'll show you the AWS way, and if we have time, I'll show you the real way. But we're going to first talk about relational databases. This is the most common database that we deal with in business, and it provides information and data that's related to each other. Cindy the cat eats Science Diet chicken. So Cindy the cat goes and buys Science Diet chicken, and we've got that in the database. Mike buys 500 pounds of Science Diet chicken for Cindy the cat, because it's 30% off. Okay, great. So that's what we're really talking about.
For my team: lots of people are having a hard time finding the webinar tomorrow, so please pop the link in there so people can reach it. And we have a great cat, Cindy, by the way; hopefully she'll pop in. So, the reason organizations use relational databases is to find information that's related to each other: 20% off yields this much in sales, 30% off yields this much in sales, and that enables the business to make better business decisions by finding information that is related to each other. Because remember, no business buys technology because it's cool; they buy it to improve their business. So with a relational database, we've got a lot of structure. Basically, we have rows and we have columns, kind of like an Excel spreadsheet, exactly that way. Now, when you're dealing with relational databases,
they're what's called atomic. What does that mean? They use this thing called the ACID model; you could find a certification question on this. You should know that the transactions are all or nothing, meaning Cindy the cat's order got placed or it didn't get placed; that's it. They are consistent: the second I purchase Cindy's cat food, the database knows it, and all the read replicas and so on know about it at the same time. It's consistent. It's isolated: I buy fresh shrimp for Cindy, and that's a different entry than when I bought her Science Diet. And Cindy gets her shrimp every single day, believe me; she's a very happy cat. And durable, meaning once an order or a transaction goes into that database, it is not lost. So they follow the atomic, consistent, isolated, and durable model, otherwise known as ACID. So when you look at a database, the key is that relational databases show the relationships between variables. That's why businesses use them: you've got your order ID, your customer ID, your amount, your email address, the person's name, and everything that's related to each other. So transactions, things like that: you purchase something, and it's all connected. So now you know what we're talking about: information that is related to each other. Now, when you deal with Amazon, you've got two options for your databases.
If you really want full control, don't use any of their managed databases at all: set up a virtual machine and install the database yourself, just like you've done everywhere else. If you don't feel like setting it up, you can take the easy way out and use their managed offerings. Now, the relational databases available on Amazon are Amazon Aurora, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, and Postgres, which is typically used as a data warehouse.
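For the managed route, creating one of these engines through the Relational Database Service looks roughly like this from the CLI; the identifier, engine choice, size, and credentials are all placeholder assumptions for the sketch.

# Create a small managed MySQL database instance (all names, sizes, and credentials are placeholders).
aws rds create-db-instance \
  --db-instance-identifier demo-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me-please'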
We're going to begin with Amazon Aurora, and I'm going to tell you right now: you probably should never touch this, but it's going to be on your certification exam. If you use a proprietary database and you use three clouds, you're going to have trouble, and 98% of customers use more than one cloud, for good reason: a single cloud is a single point of failure, no matter how many regions and availability zones you use, because a control plane failure, a network failure, or a hacking event can take down an entire cloud. We've seen AWS go down globally for seven hours, which they called a power failure, although I've never seen a data center power failure in 25 years. So, Amazon Aurora is proprietary, meaning it's going to cause problems if you deal with anybody else. Warning, warning, warning. It is a fully managed relational database. They say it's MySQL- and Postgres-compatible, which it is, but try to get your information in and out: you're going to be using tools. Now, what is good about the Amazon Aurora database is that it gives you some of the enterprise-grade features you would get in an Oracle-type database or a paid database, and it's relatively inexpensive. So there is a place for it: if you only had a small business that you never thought was going to grow, and it could tolerate an hour or a day of downtime, this might be fine. But I don't know any business like that, at least not the ones I work with. And Amazon will tell you it's five times faster than standard MySQL (yeah, MySQL is kind of a really slow database) and three times faster than Postgres. Now, it's kind of a SaaS application, a software-as-a-service application, meaning it's serverless and there are no servers for you to manage. And when you don't have any servers to manage, it's like going to McDonald's and getting a hamburger: you've got no control over it, because it's managed by somebody else, versus having your grandmother, who's a chef, make you the perfect hamburger. Kind of keep it that way. So now we're going to get into MySQL.
It is an extremely common relational database. It's open source, it's been around for decades.
Oracle owns it now, even though it's free, and it's used in a wide variety of applications. You've probably heard of the LAMP stack: Linux, Apache, MySQL, PHP. That's what we're really talking about. Now we can talk about Postgres. Postgres is an exceptional data warehouse; it's also considered a relational database, with very enhanced features that are huge functionality improvements over MySQL. And that's one where you can either use their setup, which is basically going to be on an EC2 instance, otherwise known as a virtual machine, and guess what, it's going to be using block storage; or you can just set it up yourself, whatever you want. MariaDB: now, this is another really exceptional relational database. It's open source, which means it's free, and it works everywhere. It was created by the people that created MySQL, but it's got a lot more additional features and functionality. Now we're going to get into a paid database, Microsoft SQL Server; a lot of the stuff out there runs on it. AWS supports SQL Server 2008, 2012, and 2014, and this basically allows organizations that have Windows workloads dependent upon Microsoft SQL Server to use them. Now, Microsoft has very different clustering and failover options than most databases, and there are four versions of it: Express, Web, Enterprise, and Standard. Look, you can use any of these, and if you needed something different, you could always build a virtual machine and install
it. Now we're going to go to the king of all relational databases, the Oracle database. Where it matters in business, for the most part, they're using the Oracle database, and it's one of the most popular relational databases in the world. It has, for the most part, one of the most extensive feature sets and functionalities, and it's developed, licensed, and managed by Oracle. The AWS Relational Database Service supports Standard Edition One, Enterprise, and Standard, and each of these versions has different performance, flexibility, and scalability options. There are two versions of licensing supported by AWS for the Oracle database. License included: basically, in this version, the database is licensed to AWS and you're using their license, and you can use Standard Edition One or Standard Edition Two. Or you can bring your own license; this assumes you have a license you bought. You bring your license, AWS hosts your database, and now you've got a lot more licensing flexibility: Standard, Enterprise, Standard Edition One, Standard Edition Two. Now, in a bit we're
going to talk about tuning the performance of these things with read replicas and caching; we're going to have lots of fun with that. But before we do: we've talked about the relational databases that are supported, and now we're going to talk about NoSQL databases. A NoSQL database is not new; they're from the 1970s, if I remember correctly, from IBM. And what NoSQL means is "not only SQL." So we talked about a relational database having very strict columns and rows, with data that's related to each other: great for transactions. But what if you need a little more flexibility? See, anytime you've got tight constraints (this must be this, this must be that), you start limiting scalability. So the NoSQL database was designed to give you a lot of flexibility in the way you store your information. You can store structured data, like transactions, and unstructured data, like where you stopped in a Netflix movie. Oh, by the way, there's this new show called The Night Agent; I was watching it the other day on Netflix, and you know, it's pretty interesting. Every time you push pause and you come back to it hours later, it takes you to the same spot. That's because they're using a NoSQL database; I believe it's Apache Cassandra these days, meaning I'm 99% sure they're using Apache Cassandra to store your place. That way it stores your information and you can pull it back; you can't easily do that kind of thing with a relational database. Video game people: you actually play a game, right, and you go back to the same game where you stopped it; they're using a NoSQL database, because it's very flexible. And basically, what happens is we've got these pairs, IDs and values, and that's how the information is retrieved. So AWS has their own managed NoSQL database
called DynamoDB, something that I will never use, ever, ever, ever; not because it's a bad database (it's an exceptionally good database), but use DynamoDB and guess what? Now I've got a problem with Google, Oracle, Azure, Nutanix, and OpenStack. I can't have that for my business; I can't architect single points of failure. So this is something that's headed right for the trash can. AWS invented it, and if you use it, you're stuck, you're handcuffed to AWS. When they raise your rates, you're out of luck; when they have an outage, you're done, and there's nothing you can do about it. So DynamoDB: trash can. Go with something like MongoDB or Apache Cassandra if you can. It's not that it's a bad database; it's that I don't believe in architecting single points of failure into environments. But we will talk about DynamoDB, because there are going to be lots of questions about it on your exam, because they want to put you in their environment. All cloud providers, all vendors, want to put you in an environment where you're exclusive to their stuff and it gets really, really hard to leave. Like the Hotel California: you can check out any time you like, but you can never leave. I'm not going to have a career as a rock star, but that's another story. So let's talk about it. AWS has a fully managed serverless database called DynamoDB. They say it's highly available:
as long as AWS works, it's going to work for you. It's serverless, which means there's no management of the servers; there are still servers, but they manage the operating system, your storage, and the security of it, and hopefully they do a good job of it. And it stores your information on SSD storage for better performance. Now, the good news is it's got low, millisecond latency, it encrypts all your data by default, it can be backed up with little or no performance impact, and it can be set up for global, cross-region replication. Now, when I say it's proprietary, that doesn't mean you couldn't convert your information and move to a second cloud. The problem is going to be when you want to synchronize your data between the Azure cloud and the AWS cloud: you can't do it with DynamoDB. You also can't do it with Google Cloud Bigtable, and you can't do it with any of the proprietary databases. So you really need to understand that. So, it can be set up for cross-region replication; again, great, but if the cloud goes down, it doesn't matter how many regions
you're in. Now, because we're dealing with a no SQL database, we're dealing with name
Now, because we're dealing with a NoSQL database, we're dealing with name/value pairs. And we've got what's called the primary index, which is basically your primary key setup. But we can also set up secondary indexes, which give applications access to different query patterns. DynamoDB secondary indexes can be either global or local. Local indexes have the same partition key as the base table; global indexes can span across all database partitions. Now, there are going to be some limitations, for example the data stored under one partition key can't go above 10 gigabytes when you use local secondary indexes, but that's pretty big.
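Here's a hedged sketch of how those indexes can be declared with the Python SDK at table-creation time; the table, attribute names, and projections are placeholders, not anything prescribed by the exam.

```python
import boto3

boto3.client("dynamodb").create_table(
    TableName="game_state",                       # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
        {"AttributeName": "last_played", "AttributeType": "S"},
        {"AttributeName": "leaderboard", "AttributeType": "S"},
        {"AttributeName": "score", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "game_id", "KeyType": "RANGE"},
    ],
    # Local secondary index: same partition key as the base table.
    LocalSecondaryIndexes=[{
        "IndexName": "by_last_played",
        "KeySchema": [
            {"AttributeName": "player_id", "KeyType": "HASH"},
            {"AttributeName": "last_played", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # Global secondary index: its own keys, spans all partitions.
    GlobalSecondaryIndexes=[{
        "IndexName": "by_leaderboard",
        "KeySchema": [
            {"AttributeName": "leaderboard", "KeyType": "HASH"},
            {"AttributeName": "score", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```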
And to increase scalability, DynamoDB is eventually consistent. What does that mean? It means that if I write information to the database, other parts of it may not have access to the most up-to-date information for a second or two. So does it matter? It depends. If it's a bank, and I purchased a million dollars of Cisco stock, and then I'm going to sell it a second later for an eighth of a dollar more, well, yeah, it had better be immediately consistent, and this isn't going to work. But if it's something like where you stopped in a Netflix video, does it matter if it's inconsistent for a period of five seconds? Of course it doesn't. So as you increase the scalability, you become eventually consistent versus instantly consistent. Now, of course, you could configure DynamoDB, should you be using this thing, to be strongly consistent, meaning instantly consistent, but that'll knock down your scalability.
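As a rough illustration of that trade-off, and assuming the hypothetical table sketched above, the Python SDK lets you ask for a strongly consistent read on a per-request basis:

```python
import boto3

table = boto3.resource("dynamodb").Table("game_state")  # hypothetical table
key = {"player_id": "alice", "game_id": "g1"}

# Default: eventually consistent (cheaper, more scalable).
eventual = table.get_item(Key=key)

# Strongly consistent: always reflects the latest committed write,
# at the cost of extra read capacity and lower scalability.
strong = table.get_item(Key=key, ConsistentRead=True)
```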
So everything with architecture is going to be a choice. Everything you do affects something else; it's like throwing a pebble into a river or a lake, and you notice the ripples reverberate outward. That's why architecture and engineering are different: engineering is focused on the tech, architecture is focused on the big picture, because you've got to be able to see how everything fits together. Keep that in the back of your mind.
Now with DynamoDB, you need to understand that you provision the capacity; you have to tell it how much you're going to need before you use it. You provision your read capacity and your write capacity ahead of time, and that way there's sufficient capacity for your needs. Now, this is really scary: you could set up DynamoDB for auto scaling. Normally, auto scaling adds capacity when you need it and removes capacity when you don't. Pretty cool, exciting, and that's a big reason we're in the cloud; if it wasn't for auto scaling, for the most part, the cloud would just be a more expensive, lower-performance data center. But auto scaling is really exciting. Now, DynamoDB auto scaling is about the worst I've ever seen in my entire life: it scales up and doesn't scale back down. Why is this so bad? Let's say you had a period of five minutes where you needed to scale up, and then your capacity requirement dropped back down for the rest of the year. What's going to happen is you're going to pay for the rest of the year for the peak capacity that you needed for five minutes, which is something you don't ever want to do. So with DynamoDB, which I don't recommend you use because it's proprietary and it's going to lock you into a single cloud, which no customer wants: if you've got to use it, provision it ahead of time and don't rely on auto scaling, because it scales up and not down.
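If you do have to use it, provisioning capacity up front looks roughly like this with the Python SDK; the numbers are placeholders you would size to your own peak.

```python
import boto3

boto3.client("dynamodb").update_table(
    TableName="game_state",            # hypothetical table
    BillingMode="PROVISIONED",         # fixed, pre-provisioned capacity
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,      # size these to your expected peak
        "WriteCapacityUnits": 200,
    },
)
```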
Now, in order to make this look more attractive to people, they offered a new option, because people are not going to use something that doesn't work in multi-cloud, and 98% of organizations are using multi-cloud, so AWS had to do something. They created the ability to create an Infrequent Access table class, which gives you lower storage costs for infrequently accessed data, but there's roughly a 25% premium to save and retrieve that data. And DynamoDB, as always, is priced upon throughput; on-demand capacity, as we're talking about, is going to be available at a higher cost than fixed, provisioned capacity. So what are some use cases that AWS will tell you for DynamoDB? They're the same as any other NoSQL database: when near-unlimited scalability is required, when low latency is required, because these things scale very well and the latencies are low, all NoSQL databases. When you've got to store a large amount of stuff from Internet of Things devices that are all over the world; I think Eva, who's here, wrote a beautiful article on edge computing, where you've got all these Internet of Things devices coming in. That's what we're talking about; DynamoDB, or any NoSQL database, would be great for that. Game player state, where somebody's in a video game, leaderboards, that kind of stuff. Netflix movies, though Netflix uses Apache Cassandra these days, but keep that in the back of your mind. A huge number of financial transactions, e-commerce shopping carts, inventory and tracking. That's what we're going to use this for.
Now we're going to get into data warehousing, and for everybody in the entire world, that means Postgres. Keep that in the back of your mind. AWS also has their own one, which, again, I'm going to strongly recommend you don't use because it's proprietary. So we'll talk about data warehousing. Data warehousing is where you store large amounts of data. Why do you store humongous amounts of data? The same reason we do anything with technology: to improve business performance. Keep that in the back of your mind. So we take all this information and, for example, we stick it in a relational database, and then we can run a business intelligence tool, and the business intelligence tool can help us visualize the data. We can then take this data out, prep it, load it, create our data lakes, things like that. So, again, I'm going to show you the AWS proprietary way to do this, which is not what I'm recommending. Here, you take data and you can store it, for example, in an object storage bucket, which, again, we're going to use on all clouds, we love this so far. Then you're going to have to map, reduce, and normalize your data. The rest of the world uses a Python Spark script, and I'm going to recommend you create a Python Spark script, or at least your database people do. Why? Because you can take that Python Spark script and use the same script on the Azure cloud and on the Google Cloud at the same time. And then you're going to stick your information into your data warehouse, which I recommend be Postgres, not Amazon Redshift. And then from there, you can look at your data with a visualization tool like Microsoft Power BI or Amazon QuickSight.
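Here's a minimal PySpark sketch of that extract-normalize-load flow, assuming a made-up bucket, warehouse endpoint, and table, and assuming the Postgres JDBC driver and the S3 connector are available to Spark; the same script runs unchanged on any cloud because it's plain Spark.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("etl-to-warehouse").getOrCreate()

# Extract: raw order data sitting in object storage (path is a placeholder).
raw = spark.read.json("s3a://my-raw-bucket/orders/")

# Transform / normalize: keep the columns we need, fix types, drop duplicates.
orders = (raw
          .select(col("order_id"),
                  col("customer_id"),
                  col("amount").cast("double"),
                  to_date(col("ordered_at")).alias("order_date"))
          .dropDuplicates(["order_id"]))

# Load: write into the data warehouse (PostgreSQL here) over JDBC.
(orders.write
       .format("jdbc")
       .option("url", "jdbc:postgresql://warehouse.example.com:5432/dw")
       .option("dbtable", "public.orders")
       .option("user", "etl_user")
       .option("password", "change-me")
       .mode("append")
       .save())
```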
So now we're going to talk about Redshift, which, again, is something I don't recommend you ever use. I recommend you use Postgres or another data warehouse; I would never use a proprietary anything, because it's going to hurt you when it comes to multi-cloud. So, Amazon Redshift is an AWS proprietary managed data warehouse solution. It works just like any other data warehouse would: it's going to help you find actionable information, you can use it for business analytics, and you can use Redshift Spectrum to provide real-time insights into your business. We'll talk a little bit more about this proprietary data warehousing database. AWS will tell you it's fast, powerful, and fully managed. It can do petabyte-scale warehousing, as can any Postgres. It's based upon Postgres, which is good because you can do SQL queries and it works with your applications, but just use Postgres and don't even deal with this, and then you can use the same thing on multiple clouds. Now, when you're dealing with Amazon Redshift, the primary architecture is built upon clusters of compute nodes. You're going to have a primary node that's considered the leader node, and the compute nodes are going to support the leader node. And your queries are always going to be directed to the leader node.
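Because it's Postgres-based, you can talk to that leader node with an ordinary Postgres driver; this is just a sketch with a placeholder endpoint, credentials, and table, and the exact same code works against a plain Postgres warehouse, which is the point.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # leader node endpoint
    port=5439,
    dbname="dw",
    user="analyst",
    password="change-me",
)
with conn, conn.cursor() as cur:
    # The leader node receives the SQL, plans it, and farms the work
    # out to the compute nodes.
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date;")
    for row in cur.fetchall():
        print(row)
```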
What we're going to do, because I know I've been going long, is briefly mention data lakes in the context of what AWS would consider for your exam, which is typically different than what we do in real life, and then we're going to open up some questions before we get into storage and things like that. What is a data lake? A data lake is a repository where you store structured and unstructured data; it's typically in object storage. The reason organizations create data lakes is because we want a location to hold and process a large amount of data, and it doesn't require you to structure the data as you would in a database. Can one of the people with the blue wrenches find the data lake presentation, where we had Praveen, a really wonderful big data architect and cloud architect, speak for about two hours on how to create a data lake and what a data lake is, and pop it into the chat box for everybody that's here? Because we don't have the multiple hours to cover that, and I wish we did, but I've not personally built an incredible number of big data architectures. I am an enterprise architect, a cloud architect, a network architect, and I'd rather you hear from someone with 20 years of big data experience on this, because you'll get better information. So that's why we create data lakes: we store large amounts of information, it's cataloged information, and that way you can query the data and you can look for it. This is what AWS wants you to see as a data lake: you're going to have your data sources, which are typically going to be a NoSQL database, your data warehouse and your relational databases; you're going to typically have someone write a Python Spark script for your data transformation; and then you'll create your data lake. And there are several steps in the process of creating the data lake, normalizing data, analyzing data, but that's basically what we're talking about. Okay, let's go to some questions, and
then, because I know I've been speaking for approximately 30 minutes, we'll get back to the content after that and have all kinds of fun. At least I'm having fun; I hope you guys are having fun. How do you make a relational database highly available? Well, Abhishek, one is none, two is one, and three is greater than two. We're going to talk about that when we talk about database performance tuning, but never have one, never have a single database. And realistically speaking, if it really needs to be highly available, put it in multiple clouds, not a single cloud. Great question, and we'll get to more of that later. What DB options do you have to save MongoDB data to AWS? Well, if I were you, I would stick MongoDB servers inside of AWS, and that's all you'd have to do. Do read replicas act like a backup? Read replicas are for something else; they can partially help with backup, but they're really designed for something else, and we'll talk about that more when we talk about high availability. How do you ensure high availability and data durability for databases in AWS? Don't just stick them in AWS. Look, we've seen global outages with AWS, recent global outages with all the cloud providers. If you use a single cloud provider, I promise you, you will see an outage, and it doesn't matter how many regions you use; all tech fails, all service providers fail. In network architecture and enterprise architecture we've been taught for the last 25 years to never put all your eggs in one basket; in finance they say diversify your portfolio. We always add redundancy. No matter how many availability zones and regions you use in a single cloud, you are architecting a single point of failure if you stick it in one cloud. Take your database and stick it in multiple clouds; now you've got a truly highly available database. Can I explain the advantage of Redshift? None. Don't use it; instead use Postgres. Anytime you use proprietary, for every one benefit you gain, you gain a whole lot of problems long term.
I wouldn't recommend you use it. Can you convert DynamoDB to another database type? Yes, you can, but that means you can't use multiple clouds at the same time. So you could be with AWS, AWS has another outage, and they talk about something like a power failure, and you don't believe it's a power failure; now you want to go to Azure, or they raise your rates and you want to go to Azure, and now you've got a problem. But Sarah, if you want to run Azure and AWS at the same time for high availability purposes, you can't be converting data back and forth between them; you need to use the same database. Trash DynamoDB and use MongoDB or Apache Cassandra across your clouds and you'll never have a problem. Do you have to match storage sizing with databases?
Yes, absolutely. That's the way we've done it for the last 20 years. Does AWS support Cassandra? Yes, they do. Do they do it without having to provision a minimum underlying infrastructure? Yes, they do; they have managed Keyspaces, Mark. I don't recommend using serverless; I recommend your architects design something and your engineers build it whenever possible. You have no control when somebody else provisions and manages it, no control whatsoever. You go to McDonald's, and they say, would you like fries with that, and your hamburger comes out consistently, identically, in a good-enough manner every single time. I haven't been to McDonald's in 10 years, but you know, there is that. If I go to Morton's and ask them to make me a hamburger, I can select prime rib, filet mignon, or New York strip to grind that hamburger from, and they can make it medium, or medium well, or whatever I want. When you use a managed service, you don't get any of that; it's the "do you want fries with that" model. Now, are managed services good? Well, they're often a little easier to manage, but there's a trade-off; you're going to give up something, like performance tunability. So the customers that I work with don't use as many managed services; they run their own. Remember, if you use managed services it's a little cheaper and a little easier to manage, and you don't need as many expensive people working on it, but you give up control. Is that a good thing? Sometimes yes, sometimes no. If you have a team of inept people and you don't want to train them, great, use a managed service. If you have good people and you're not worried about their training, you can do many more things. So the business requirements will always determine what you should use.
Can I explain the Redshift primary node? Absolutely. Basically speaking, Redshift is built around clusters of compute nodes. You've got the primary node, which is called the leader node, and every other node is called a compute node and supports the leader node. All the SQL queries go straight to the leader node. Good question. How do you normalize data in the data warehouse? Well, once it's in the data warehouse, it's normalized. Typically speaking, when you go from one database to another database, you're going to have to map and reduce it. Now, AWS of course has its own proprietary Elastic MapReduce, and we'll talk about what that's based on, but what organizations really do is write a PySpark script; PySpark was really created for normalizing data. And you can set that same PySpark script up, or at least your data people will, we architects don't touch it, and they can run that same script on Azure and Google, or Nutanix, or OpenStack. And that's why, for us, we like to use standards, and we try to avoid proprietary things whenever possible. How do you synchronize a database between multiple clouds? The same way you synchronize between multiple data centers: you've got IP connectivity, that's it. As long as the network works, all this stuff works. Is it possible to run multiple databases in the same database instance? I don't know exactly what you mean by that. You can partition the database, which is sort of like that, but not exactly; I'm not completely sure I understand what you mean. Can you virtualize the entire data center? Yeah, that's what a cloud is; every data center has been virtualized for the last 20 years, for the most part. And the second half of that was: if you've virtualized the data center and it goes down, you're done. And if you've got 10 data centers connected by the same network, or 100 data centers connected by the same network, and the network goes down, guess what? Everything you're dealing with goes down, which is why you should never use a single cloud. Same problem. On DynamoDB not scaling down, you mention the fact that its maximum number of capacity decreases in a day is 27; I don't know exactly what you're referring to, but according to the AWS documentation it scales up in terms of throughput and capacity and doesn't scale back down. Is there a difference between managed and fully managed? Fully managed means you've got no control whatsoever; managed generally means you have limited control. Hypervisors not only split the physical machine into multiple logical machines, they also combine multiple physical machines into one logical machine, is this correct? Generally speaking, no. They combine multiple physical machines into a compute pool that you can pull from, just like a cloud provider would. Good question.
So before we go back to the content, let me give you a hashtag, because I want to know you're awake and learning. Why don't we change it up; the next hashtag is hashtag free AWS course. That way I know you're awake, alert and oriented. And while I'm waiting for that: if you've got a VMware vSphere environment, one of these environments that takes all your servers and adds them to a pool, or if you've got a Nutanix cloud or an OpenStack cloud, it's going to do the same thing and add them to a physical pool. That's what cloud software does. It's different than a hypervisor; it's the control plane that manages it all, and AWS has their own control plane too. So it's the control plane; every cloud has a control plane, and if that control plane goes down, you lose everything. And yes, a shout out to all the people behind the scenes, whether it be my Chief Operating Officer Christopher Johnson, my Chief Marketing Officer Alonzo, whether it be Leo, who's back there, whether it be Child, who's back there, whether it be Tyrone or Eddie or Anson. There's lots of people here, and there are some volunteers, like Doyle, who we're always thrilled to have here. Thank you all for your participation. And I know I'm missing people; AJ is one of them, a really great guy who's also helping. So let's talk a little bit about where you're going to store your data.
In a database, right? Realistically speaking, you're going to have three options. Option one is you put it on provisioned IOPS, which is AWS's fastest storage, and which is still fairly slow storage compared to the other storage options I showed you yesterday. Your next option is general purpose SSD, which is again slower than that. And your last option is magnetic storage. So realistically speaking, if performance matters, you're going to be on provisioned IOPS. You may get away with general purpose SSD in really small environments, but you're probably not going to use magnetic storage for any kind of database where latency matters. Okay, there's one more database I want to talk about before we move on, and it's the quantum ledger database. Amazon has a quantum ledger database
that is fully managed and serverless. It automatically scales with the application; being serverless, it eliminates the need to worry about provisioning server capacity. It uses tables and indexes to query stored historical data. And unlike traditional databases, it's built for immutable record keeping, an audit log; unlike relational databases, the AWS quantum ledger database does not permit update or delete operations. So if you've got something like a database in a highly regulated industry, you can use the quantum ledger database. But again, there are other ledger databases in the industry that do this that are not proprietary, so we recommend those.
Now, let's talk about database optimization. We're going to talk about Amazon database optimizations, but these are the same things you would do with any database. Either way, we'll talk about backups, automated backups, database snapshots and encryption. And anytime we're dealing with optimizing our databases, we're going to be talking about scalability: we'll talk about read replicas, we'll talk about caching, queuing, multi-AZ, and realistically speaking, think multi-cloud, which is not part of any AWS certification because it's part of reality. Backups: if it matters, you back it up, right? Now, if you use one of the AWS managed database services, like MySQL or Oracle, they do some really good things for you. I like full control, but I have no problem using the Oracle managed database service on AWS. What they do is they back up your data in a really nice way: the entire database is backed up onto an image, and you can retain this backup from one day to 35 days. The backups happen at the same time each day, in a predefined window, which is really great. And during the backup, you've got to know this, because it's pulling data off of the drive, the database may be temporarily unavailable. And while it's pulling the data off of the drive to back it up, the performance of the database, if it is available at all, may be severely degraded. Keep this in the back of your mind. So when it gets backed up, it's going to be in
the form of a DB snapshot. Now, you can also make your own snapshots; you know me, I love control. DB snapshots are a point-in-time copy of the entire storage volume, like an old-fashioned Ghost image. It's 100% of it: it's got the operating system, all the patches you put on there, all the dependencies, your applications, and of course all your data, so you back up the whole thing, which is really cool. And you can relaunch that thing almost instantly, should you have a problem, in another region, another availability zone, etcetera. And when you make a DB snapshot, it's available until you delete it.
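For illustration, taking and restoring one of those manual snapshots from the Python SDK might look like this; the instance and snapshot identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Take a manual snapshot; unlike automated backups, it's kept until you delete it.
rds.create_db_snapshot(
    DBInstanceIdentifier="orders-db",               # hypothetical instance
    DBSnapshotIdentifier="orders-db-2023-06-01",
)

# Restoring always creates a brand-new instance with a new endpoint (new DNS name),
# so applications have to be repointed afterwards.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="orders-db-restored",
    DBSnapshotIdentifier="orders-db-2023-06-01",
)
```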
So what does it really look like? You've got the relational database, and you can create a snapshot of it. Now, when you restore a database from a snapshot, you're going to get an identical new virtual machine with one exception: it's going to come up with a new IP address, which means the old DNS name that you have is no longer going to work; it's going to have a new DNS address too. So you may have to update your DNS records if your systems use DNS to point to the new system, and people typically use DNS. And if you use the IP address of the system, which by comparison we don't recommend for lots of reasons, then you're going to have to update the IP address mapping in your application servers, which are going to the database. And when you restore it, you take your snapshot image, and poof, you've got a new database with a new IP address, just like I described, and a new DNS record. Now, if you're going to store your data on a database, or any hard drive, and the hard drive is lost or stolen, your information is compromised, because people can read it. Do you want that? No.
So Amazon supports encryption at rest for all your databases. What does this really mean in practical terms? It means all the data stored on your server is encrypted. Effectively, what's going on is the EBS volume, the block storage of the virtual hard drive, is encrypted. This is enabled by using the Key Management Service, which makes it really easy to control the keys; we talked about the Key Management Service and a lot of these things yesterday. AWS also supports transparent data encryption, and transparent data encryption is typically used with Oracle and Microsoft SQL Server databases. You can set up transparent data encryption with the CloudHSM hardware module; it's like a hardware key-management device, and we'll talk more about that later. Transparent data encryption is really kind of cool: it encrypts the data on demand and decrypts the data on demand. So when you store the data with transparent data encryption, it is encrypted; when you pull the data, it is decrypted. And the CloudHSM is a hardware device for storage and management of your encryption keys.
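As a rough sketch, turning on encryption at rest when you launch a managed database instance looks like this with the Python SDK; the identifiers, sizes, and key alias are placeholders.

```python
import boto3

boto3.client("rds").create_db_instance(
    DBInstanceIdentifier="orders-db",      # hypothetical instance
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",
    StorageEncrypted=True,                 # encrypt the underlying EBS volumes
    KmsKeyId="alias/my-db-key",            # Key Management Service key to use
)
```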
AWS also supports encryption in transit. What does this mean? It means that your data is encrypted on the way to the database to be stored. And how does this work? Well, basically it uses the TLS protocol and SSL certificates, and you use a certificate to assist with the authentication of the endpoints and to protect your data. It works basically the same way as when you go to a website, you see the little lock, and it's using SSL-based encryption. That's really what we're talking about.
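In practice that just means forcing TLS on the client connection and verifying the server certificate; here's a sketch with a placeholder endpoint and a locally stored copy of the CA bundle.

```python
import psycopg2

conn = psycopg2.connect(
    host="orders-db.abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    port=5432,
    dbname="orders",
    user="app_user",
    password="change-me",
    sslmode="verify-full",                     # require TLS and verify the endpoint
    sslrootcert="/etc/ssl/rds-ca-bundle.pem",  # CA bundle downloaded beforehand
)
```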
Now, databases have become really mission-critical applications. So how do we improve the scalability of these things? Well, the simplest method is to scale up, meaning we're on a server that's got eight cores and 32 gigs of RAM and we bump it up to a 64-core server with a terabyte of RAM, or a 192-core server with six terabytes of RAM, and so on. But at some point, I promise you, you're going to run out of capacity; no matter what you do, you will run out. So when you run out of capacity, you're going to have to add capacity. Now, with some databases, like Apache Cassandra for example, you can write to all the database servers at the same time, and I'm 90% sure Oracle's database allows that as well, but most do not. So we'll talk about how we're going to deal with this, because it's not like you can just auto scale a database the same way you can auto scale a web server. There's lots of trickiness here, and at some point you're going to have to start getting creative. Now, with NoSQL databases, this is simple. With Apache Cassandra, you just add servers and it writes to them all at the same time. With DynamoDB, which we're not recommending, you can basically partition the database, and it chops, or shards, the database into partitions, and the application will have the intelligence to route your traffic to the correct shard.
Now, with relational databases, it gets a little more complicated. What we do is we add read replicas. Okay, what's a read replica? A read replica is a read-only copy of the data; it's available for MariaDB as well. Read only, so what does this really mean? I'll give you an example. There are all the blue wrenches in this YouTube chat box. For today, I am the primary, master database, meaning I'm out here providing the information. And many people are asking questions about things that I covered and they need a little clarification. And it's usually the little blue wrenches answering, whether it be my Chief Operating Officer Chris, who's over in Tampa, Florida, or Forca, who's over there in Cameroon, answering questions, or Child, who's over in Dallas, Texas, answering questions, or Alonzo, who's over there in Katy, Texas, answering questions, or Eddie, over in Cameroon, answering questions. They're kind of like read replicas: they basically enable me to focus on what I'm teaching, and they help answer things. Keep that in the back of your mind. So here's the way we use read replicas: the database has something called write capacity and read capacity, and a primary database does both. A read replica is a read-only copy of the instance. What happens is we take the read load off of the primary server and push it onto some other servers, so the primary server can focus only on writing, and the read replicas can handle the read traffic. And remember, if you've got a server that has to read and write, and you remove all the reading so it only has to write, it can scale further. And why are we doing this? Because, as I mentioned previously, there's only a certain amount of server cores and DRAM and disk performance you're going to be able to get, so we're going to have to get past that, and that's how we're going to do it: it's going to reduce the load. So what does it look like architecturally? Here's what it looks like: a basic three-tier application. We've got our web servers, which can auto scale; we've got our app servers, which can auto scale; we've got our main, or master, database; and we've got the read databases. And what happens is you point your queries at the read databases, and that frees up the write database to do more. So you use read replicas when there's read activity; if it's all write activity, read replicas aren't going to do anything. When query traffic, meaning people trying to read, read, read, is slowing things down, you need a read replica. If you've got four times the read traffic versus the write traffic, add four read replicas, because they're adding extra read capacity. Now, while we're at it, I want to make it clear: read replicas are used for performance. They do not aid in availability or disaster recovery for the most part.
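Creating one of those read-only copies on a managed service is nearly a one-liner with the Python SDK; the identifiers are placeholders, and your application still has to point its read traffic at the replica's own endpoint.

```python
import boto3

boto3.client("rds").create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",   # the new read-only copy
    SourceDBInstanceIdentifier="orders-db",       # the primary doing the writes
)
```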
Now, the next thing we're going to talk about is database caching. What is caching? Caching is a service that takes frequently accessed information and puts it in memory. Caching works by taking a request and temporarily storing the result of that request. Now, why would we use caching? Let's say we're going to use a NoSQL database with caching right now, and the reason we're going to use a NoSQL database is Taylor Swift gets a brand new cat, and she posts a new photo of her cat to Instagram, Facebook, Twitter, TikTok, and LinkedIn, and everybody's pulling the information up. Pretend the cat photo is stored in the database. Now the read replica can answer the cat question, and then the cache can keep re-answering it, offloading the read replicas. So the read replicas offload the write, or primary, database, and the caching can offload the read replica. Now, if all the requests are different — one request is for Taylor Swift's cat, another is for Katy Perry's cat, another is for Chris's cool cat Sunny, and the next request is for my super awesome princess cat, Cindy (by the way, I got her for my wife, but that's neither here nor there) — then the caching is not going to help. In fact, if you add caching in an environment where the requests are all different, it will slow things down. Caching is used for frequently accessed information. Now, typically speaking, organizations have been using caches forever. There are two caches that are typically used in business: for the most part, businesses use Redis caches, and they can also use Memcached. Now, of course, in the cloud you could set up your own servers with your own DRAM and set up your own Redis cache, or your own Memcached cache, or you can use the fully managed AWS cache; there's no reason not to use these things, it's pretty simple to set up, and you can basically use a premade Redis cache. Now, people use Redis because it's got the most robust feature set of the caches that are typically used, and you can migrate Redis workloads to ElastiCache, or you could set up your own cache, which in certain cases makes sense; it's based on the business requirements. Now, if you need something simple, simple, simple, you can use ElastiCache for Memcached; its thing is simplicity, and ElastiCache for Memcached is compatible with Memcached. So these caches aren't that different from each other, but if it matters, you're going to be using Redis. I told you how caching sort of helps; I'll show it to you visually. If the requests keep coming in for information that's on the database, the cache can store that in memory and provide the answers, so that you don't have to keep hitting the database.
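Here's a minimal cache-aside sketch of that picture, assuming a Redis endpoint (your own server or a managed one) and a query_database() helper that is just a stand-in for the real read against the replica.

```python
import json
import redis

cache = redis.Redis(host="my-cache.example.com", port=6379)  # placeholder endpoint

def get_cat_photo(photo_id: str) -> dict:
    cached = cache.get(f"photo:{photo_id}")
    if cached:                               # cache hit: answered from memory
        return json.loads(cached)
    record = query_database(photo_id)        # cache miss: hypothetical read-replica query
    cache.setex(f"photo:{photo_id}", 300, json.dumps(record))  # keep for 5 minutes
    return record
```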
Now, let's talk about database queuing, and if you want, I'll actually architect these things together for you to put it into context. What is queuing? For the Americans it's a complicated concept; for the English, or anybody that follows the colonial English language, it's simple, because queuing means putting everything in a line. You go to board a plane in the UK, and they say, please form a queue. So queuing is really a means to schedule the delivery of your data. It's used in lots of applications, and why are we using queuing? Test question here for the AWS Solution Architect Associate: why do you use the queue? To decouple the traffic destined for the database from your application servers. It's used for application decoupling; you may see a test question on that, and most likely you will. Queuing is used to decouple. Here's the way queuing really works: you've got a sender who's sending a message, and they stick the message in the queue. When the receiver is ready to receive the message, guess what happens? It gets pulled from the queue, and then it's removed from the queue. So I want you to think about this. If I'm sending messages as fast as I can and the receiver is not ready, they're going to be lost and dropped. But in this particular environment, I can dump all those messages into the queue, and by dumping them into the queue, they go into a holding pattern. So I've got a million messages, I stick them in the queue, and when the system is ready, it will drain them. So queuing promotes scalability, it enables you to reduce your CPU sizing, and I'll show you why, and generally speaking, queuing lowers your cost. Now, most businesses use Apache Kafka as a queue, and you can set up a virtual machine and use Apache Kafka on all your clouds at the same time and use that same beautiful Apache Kafka as your queuing system. Or you could use the proprietary option, which I don't recommend, AWS SQS, the Simple Queue Service; by doing so, it's a pre-managed queue for you. But if you're going to use three clouds, you're going to have three different proprietary queuing systems, which might not be the simplest, most elegant thing; if you use Apache Kafka, which works on all clouds and all data centers at the same time, you're going to simplify things. So again, SQS is another service that I wouldn't be architecting in anywhere, unless it didn't matter, because everything I do is multi-cloud, just like 98% of organizations. So when it comes down to it, there are two options. There's the standard version, which is a simple queue: basically, messages come in and out as fast as they can, and there's no guarantee of the order of the messages. And if you need to guarantee the order of message delivery, what you could choose to do is set up a FIFO queue, a first-in, first-out queue: message one goes in, then message two goes in, and they come out in that order. But it's going to slow things down. Why? Because what if message one was 1,500 bytes, message two was 1,500 bytes, but messages three, four, and five were 64 bytes? Messages three, four, and five would most likely get there before message two, so when you enforce first-in, first-out, you're going to slow the system down. But it's up to you; some application and business requirements require you to do so.
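For illustration only, here's what the two SQS options look like from the Python SDK; the queue names and message bodies are placeholders, and a self-managed Apache Kafka producer would play the same role in a multi-cloud design.

```python
import boto3

sqs = boto3.client("sqs")

# Standard queue: best-effort ordering, highest throughput.
std_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]
sqs.send_message(QueueUrl=std_url, MessageBody='{"order_id": 1}')

# FIFO queue: strict first-in/first-out ordering, at the cost of throughput.
fifo_url = sqs.create_queue(
    QueueName="orders-queue.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]
sqs.send_message(
    QueueUrl=fifo_url,
    MessageBody='{"order_id": 2}',
    MessageGroupId="orders",       # ordering is enforced within a message group
)

# The consumer drains the queue when it's ready, then deletes each message.
for m in sqs.receive_message(QueueUrl=std_url, MaxNumberOfMessages=10).get("Messages", []):
    sqs.delete_message(QueueUrl=std_url, ReceiptHandle=m["ReceiptHandle"])
```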
So the reason we're using these queuing systems is as follows. Let's pretend we're looking at the CPU performance of a database in your typical three-tier environment. I'm using proprietary technologies here just because it's an AWS class when I draw these pictures: you've got your web server coming in, you've got your app server coming in, it uses this proprietary queue, and then data gets written into the database, proprietary DynamoDB. Now, if this was a regular database without a queue, what we're going to see is the CPU spikes, then drops to nothing, spikes again, then drops to nothing. But by using a queuing system, we can smooth that out. By using queuing, effectively what we're doing is, realistically speaking, smoothing it out, so that we pop stuff into the queue and we take it out on a consistent basis. It's kind of like driving at 40 miles an hour in your car, versus two miles an hour, then 100 miles an hour, then two miles an hour, then 100 miles an hour; which do you think is going to be more efficient for fuel and your car's longevity? That's why we use queuing. So queuing helps remove write contention, and it keeps messages from being lost. And what it also allows is you can use the depth of the messages in the queue to trigger auto scaling of different servers, which we love.
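One hedged way to wire that up is a target-tracking policy on the queue-depth metric, so the consumer fleet grows and shrinks with the backlog; the group name, queue name, and target value here are all placeholders.

```python
import boto3

boto3.client("autoscaling").put_scaling_policy(
    AutoScalingGroupName="order-workers",          # hypothetical consumer fleet
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # aim to keep roughly this many messages waiting
    },
)
```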
So when should you use a queue? When you want to increase scalability because there's a lot of write requests. So: caching reduces read load, read replicas take the read load off of the primary database server, and queuing reduces the write load and keeps you from losing critical messages. So let me whiteboard this out, so you can see how we would scale the database. So wake up, everybody, wake up, wake up, wake up; this is bonus content, this is actual architecture information, which goes way above anything that's covered in the course. So it's bonus, bonus, bonus, make sure you're awake. So what does this really look like? Let's say here is a web server, or a group of servers. I go, I think, in this direction; some people think in the other direction, both work, it depends on which country you're from. Let's say you've got your web servers, and you've got your app servers. On the way into the database, we put a queue. (And: "Mike, we don't see your screen." Oh, thank you for that.) So we set up a web server and an app server, and then we put a queue here. This could be SQS, it could be Apache Kafka, it really doesn't matter; we're going to use a queue. By using the queue, we can increase the scalability on the way into the primary database, and I'm going to show just the one to keep life simple. This is going to reduce the write load, because the messages coming from the web server and the app server, instead of all being thrown at the database and potentially lost, go through the queue. Now, the primary database reads and writes, reads and writes. So if we want to offload all the read work from the database over here, we're going to add some read replicas, and now we're going to point all the read traffic to the read replicas, and the read replicas do the answering. But what if we want to reduce the load on the read replicas? Well, realistically speaking, we could stick a cache here. Where I'm putting the cache architecturally is debatable; I'm using this placement for simplicity and elegance, to make it clear what we're using it for. The cache can reduce the load on the read replicas that are doing all the read work. So that's why we're using this: the queue reduces the write load on the primary database, and the cache and the read replicas reduce the read load on the system. I think now's a good, natural time to pause and answer some questions, because I want to make sure everybody is having a great learning experience. All right, give me a few seconds to find
the questions. Can you speak to how this integrates with an instance? Cloud, I'd love to answer your question, but part of it is missing: you say "how this integrates", but how what integrates? If you can help me, I'll be thrilled to answer. You may have asked it at a particular point during the presentation, but I don't know what I was saying at that exact time. Chris, let's go to the next question, and I'm thrilled to answer that one if Cloud provides the context. Any options to mitigate the degraded
performance during a backup? Nope. It's just part of the system the way AWS has designed
it. Can you pull out your data of a proprietary
system, Lady Godiva? Absolutely. How difficult is it? Well, anytime you put stuff into a proprietary system and then pull it out, there are going to be challenges; there's going to be cost, there's going to be development cost to do these things. But yes,
you can do it. Will information be lost in the process? Yes. Is it going to be a perfect
migration? The answer is no. So it's best just to avoid proprietary in the first place.
What's the difference between a web server and an app server? You go to www.gocloudcareers.com: you've got a web server. Now, when you sign in as a student of our cloud architect career development program, which teaches you how to be a real architect, gets you hired as an architect, changes your life forever and gives you a great salary, our content is housed behind the application server, which defines the logic: are you allowed to get in, what are your permissions, what are your rights? That's the application server; that's where the content-delivery logic lives. And then, of course, your information would be in a database. So I like to think of it in layers: the web server presents it to you, your business logic runs on the application server, and the database stores your information. I hope that helps, because it's a great question. So I'm guessing that this comment that Clyde
just put in is a clarification. I still don't know what that means; does anybody on my team know what it means? So, "how this integrates to an instance": they're asking about how an app integrates with an instance. Cloud, I really don't know what you mean, but if you're asking about how an app works with a virtual machine, then what we're talking about is somebody writes code and the code sits on the operating system, just like anything you would run on your own computer. How does the app talk to a database? Over an application programming interface; that's what APIs are for. Okay. Does Apache Kafka have the same level of compatibility and integration with other services as something like SQS? Apache Kafka is the industry standard, and it's used everywhere. It's nothing to set up and use; everybody uses it on all clouds and all data centers.
What's the difference between a standard queue and first-in, first-out? With a standard queue, stuff goes in the queue and, as fast as it can be drained, it gets drained, with no guarantee of ordered delivery. With first-in, first-out, the number one message goes in and the number one message goes out, the number two message goes in and the number two message goes out, the number three message goes in and the number three message goes out. Can the apps be accessed across clouds? Any apps
can be accessed anywhere in the world, across any cloud that you want, as long as you've
got IP connectivity and you set it up right. I can't really take unrelated questions, but I will do this one again: the difference between a DevOps engineer and a cloud architect (and please join us tomorrow for the how to get your first cloud job webinar). A cloud architect is a business executive. I'm going to say it again: it is a business executive who is at least 50 to 80% business and 20 to 50% tech, who designs, presents, and sells the solution. We never touch the tech: we are not writing a lot of code, we are not doing a lot of configuration, we are not typing into systems. Ever. A DevOps engineer is a software engineer first, who then automates software release cycles in order to make software development more agile. DevOps and architecture have nothing to do with each other; they're like a parrot and a lion, that's how different they are. Can FIFO queues
lose data? Any queue can lose data if the queue crashes. But as a rule, it doesn't matter whether it's FIFO or standard, unless the application or the database for some reason requires ordered delivery of messages. Is a load balancer needed for an app server? Sometimes. If you need more than one server for redundancy or performance purposes, which is generally yes, you definitely might want to use a load balancer. Okay, let's wrap this up and go back to the content. We're
going to talk about extraction, transformation, and loading tools. I'm not a database professional; I'm an architect and have been for decades, but there are times when you need to get data out of one database and put it into another database, and what we use for this is extraction, transformation, and loading (ETL) tools. AWS has their own, called Glue, which is okay, but it's proprietary, and there are other industrial-grade ETL tools that you can use across all your clouds. If you've got a lot of databases, because each database has its own strengths and weaknesses, and you want to pull information from one database to another database, you will use an extraction, transformation, and loading tool. Amazon has their own proprietary branded one called AWS Glue. I don't really use proprietary anything when I don't have to, but they have one. It's a fully managed, serverless tool, and according to Amazon it's real, real simple: you just point Glue to your data and it'll automatically take care of everything you need to worry about. According to AWS, that's what they're going to tell you. It discovers the data and stores the metadata in a catalog, and after the data is cataloged, it's searchable and queryable; the data can be queried or stuck into another database. So what are we really talking about from an architecture perspective? Let's see what it looks like. There we go: we've got data, for example, in object storage, and then it's going to stick it anywhere we need, in whichever database we want: Athena, Redshift, EMR. Again, MapReduce is basically to take stuff in and out, and again, that's Python Spark, not Amazon EMR, in most cases. And then you could visualize the data with QuickSight, Power BI, Tableau, or some other data visualization tool.
Now, if you wanted to take your data from your own database and then use one of these proprietary AWS databases, you could use something called the Schema Conversion Tool. What that does is take the data from your database and massage it so it sort of fits into an AWS proprietary database. Now, no schema conversion is going to be perfect, so if you use this, it's going to take some work and development effort from your database team, but you can use it, and basically it helps you migrate your database to a format that's compatible with your target database. And if you started out with a really bad database choice and you outgrow it, this is a great way to get your information into a better database, like Oracle, for example, or MariaDB, or some other non-proprietary database; you could also use it to move into a proprietary database, such as Amazon Aurora, but then you're stuck on a vendor's proprietary system. And if you're already on an Oracle database, there's no reason to use this tool in the first place. It helps you get where you're looking to go. So a schema conversion tool is really used to migrate between heterogeneous databases, to convert the schema, and it helps with data warehouse application code and SQL procedures; it's a nice tool to help you actually get to your goal, and you'd use this type of tool to move from one database to another.
Now let's talk a little bit about high availability database design, at least as it pertains to your certification. In our cloud architect career development program we go much, much deeper, because we have the time; we spend between 500 and 700 hours training a cloud architect, because that's really what it takes to get your first job as an architect. We've got 15 hours here, or less, so we've got to focus on what we can actually teach in this period of time, which is the exam, and I'm throwing in as many bonus nuggets of wisdom as I possibly can, because I want you to have the best. So, assuming a single cloud can be highly available, which we don't believe, but assuming you're going to put it in a single cloud per your exam, we're going to talk about how they recommend you design a high availability database on AWS. As a reminder from yesterday, AWS designs things in regions and availability zones. Regions are a large geographic area, such as part of Europe, or half of the US continent, by comparison. (Can you guys all still hear me? Chris, can you still see me? Because the YouTube feed I'm using as a monitor just went out. Yes? Okay.) So, according to AWS, if you want high availability, you can
put your databases into two availability zones, in two different data centers, and if one data center fails, you've got a backup data center. Of course, if the cloud provider's network fails, if they get hacked, or the control plane goes down, you lose everything; but this is an AWS certification exam, so you've got to know the AWS principles. So I'm going to tell you that a high availability database design uses a multi-AZ environment. What happens is the database copies itself into another availability zone, another data center. Now, by keeping a copy of your database in another availability zone, you do not get increased performance; it just holds the information and synchronizes it all the time. And should your primary database go down, or your data center go down, the backup in the other data center, otherwise known as an availability zone, will take over. So a message goes to the database — say you bought the cat some new cat food, and that transaction is stored in the database — and it gets copied to the database in the next availability zone. Keep that in the back of your mind, and you're good to go; it's copied synchronously. What does this look like architecturally? You've got the same thing in two different data centers: your same web servers, app servers, and database servers. And what happens is your data is written to the master in Availability Zone A, and it gets copied to the standby in Availability Zone B. So that's really what we're talking about with regards to creating a high availability, if you want to call it that, database architecture in a single cloud. So let's say you've got your database in availability zone one and your database in availability zone two. What's going to cause your database to fail over from data center one to data center two? Well, if the primary database fails, meaning the server, guess what, it's going to shift to availability zone two. If the entire data center goes out, like a power failure, where both power companies fail, and the generators and backup generators and battery backups fail, then poof, your information is going to go to the next one; more likely, you have a network outage in the data center. If you change the database instance server type, poof, it's going to fail over. If you want to do maintenance, like patching or upgrading the database, it'll fail over to the backup while you're doing the maintenance. Or if you issue a manual failover — I'm going to reboot this, reboot with failover — you reboot it and the backup one takes over, and that way you don't lose anything.
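For the exam's single-cloud picture, enabling that synchronous standby is a single flag when you create the instance; here's a sketch with placeholder identifiers and sizes.

```python
import boto3

boto3.client("rds").create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",
    MultiAZ=True,   # keep a synchronously replicated standby in a second AZ
)
```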
Okay, we are now going to get to my favorite content, which is networking, but I'm going to pause for five minutes first to make sure we don't have any other questions related to databases. Then we're going to get to the fun stuff, which is networking, where we're going to get a little geeky, I'm sure. That's okay, it's my favorite. All right, give me just a second here. You caught me off guard with the short question slot there. Yeah. All right, here we go. Can the same load balancer be used for the
app server and web server in the cloud there? Your app server and web servers are going
to be on different subnets, so you're going to use a different load balancer. The database queue you mentioned — is it something that comes as part of the AWS database feature set, or is it an architectural concept? It's both an architectural concept and a compute concept. Either you use a server running something like a Kafka queue, or you use the SQS service, but it is actual functionality; it's an architectural concept and a compute concept at the same time. So, one would use the schema conversion tool if I want to go from my data center's Oracle server to Amazon Aurora — which I would never do — then I would use the schema conversion tool; would I use Glue? No. But for your exam, if I want to pull data from one of my databases, like Redshift, and pop it into Aurora, I might use Glue, because it kind of normalizes your data. And that's a great question, sir. So you're saying that ensuring high availability
means multiple AZs? I don't call that high availability, but it is according to your exam. High availability in multiple regions? Guess what, still not high availability, because you've got a single cloud; one hacker comes in and knocks down the cloud, and guess what, you've lost everything. There's a big BGP problem on the AWS cloud, like when the Google Cloud was taken down by a BGP problem, or Facebook was taken down for most of a day by a BGP problem — at least half a day, or eight hours, something crazy like that — and poof, the whole cloud goes down. So I don't consider a single cloud ever to be a high availability system, I'm just telling you. What do multiple regions do? They put your stuff halfway in the US and halfway in Europe, but if the cloud goes down, guess what, you're still done. So that's not really a high availability, high performance environment. Is a web server the same thing as a client server? No; a client could be talking to a web server, an application server, or a file server. A web server is a type of server that serves web pages. Okay, we'll get back to this. So, Chris or
Alonzo, wherever you guys are in the chat box, if you can help me: there was a song when I was younger that says "y'all having a good time", and then they go t, t, t, t, t, t; I don't remember the song. But if you're all having a good time, and we're getting back to this, please give me a hashtag that says AWS Solutions Architect Associate, so I know you're awake, alert and oriented. And we talked about how high availability is never in a single cloud; it's like putting all your money in a single basket and then hoping that you don't get stolen from, or the basket doesn't catch fire, or you don't drop it, or you don't lose it, or nobody breaks in on you. So no, a single cloud is never high availability. So we're going to now get into some basic, basic, basic networking. And again, if you guys didn't hear the hashtag, if you can put hashtag AWS Solution Architect Associate — we're not fans of any kind of acronyms around here, keep that in the back of your mind. So let's do a basic networking review. No basic networking review would start without the OSI
model. The OSI or the open systems interconnect model is a model that network engineers, network
architects, cloud engineers, cloud architects, and anybody who works in tech needs to know. Why? Because, you know, you've heard me be very unhappy with AWS marketing terms, or Azure marketing terms, or Google marketing terms. Why? Because they add complexity. Imagine a doctor trying to work in an environment where the same pill has 50 different names; how many people would die when doctor one, doctor two, and doctor three don't even know they're giving the same patients the same medications? People would just die. In IT, anything that matters depends upon clear language, and we must speak the same terminology if we want to be taken seriously. If I walk into a Chief Information Officer's office and talk about S3 and EC2, I will be fired, or they'll politely thank me for the nice presentation and I will be escorted out of the room by security, because the Chief Information Officer is not going to understand the gibberish that I'm talking about. Now, if I talk to the Chief Information Officer about his virtual machines and his object storage, now we're on the same page. So the OSI model is the standard language that everyone uses in the networking world. It's a means to get rid of garbage marketing terms so we can always communicate. If I speak to somebody in Tel Aviv and I talk about a layer one problem in networking, guess what, he or she is going to know what it is. If I speak to Tyrone in South Africa and I say we've got a layer two problem, Tyrone's going to know what it is. We must have precision language when we're dealing with precision anything. The network is the heart of the cloud; it's not the software, it's the network. The network goes down, the cloud goes down, and that's why organizations that don't think about the network pay a big price in terms of outages. And anybody that knows networking knows you can't use a single service provider, because we've never been allowed to for decades. So let's talk about it, and I'm going to whiteboard
it out for you because I want you to see it. I'll share my screen here. Layer one is the physical cable between you and your switch or your router, whether it be copper wire or fiber optic. On a wire we're sending electrons; on fiber we're sending light. So layer one is the physical layer: cable, cable. Layer two is the data link layer. That's the actual hardware we're using. It's your Wi-Fi card, it's your Ethernet card; hardware, hardware, whether it's a serial interface, a WAN interface, an ISDN interface, or an Ethernet card. So layer one, wire; layer two, the physical card with a hard-coded address, a hardware address. If you go to your computer, it has a MAC address, and that's a layer two address. Layer one physical wire, layer two data link hardware. Now next, we'll move up to the network layer, where we've got a logical address, your IP address. 192.168.1.3, that is an IP address. Layer one wire, layer two hardware card, layer three logical addressing. You can't really change the MAC address on your computer, but you can change its IP address. Now let's talk about transport. The next layer, layer four, is transport. Do I send my data in a reliable fashion, meaning TCP? Am I sending UDP traffic for real-time traffic such as voice or video? Or am I sending a test message like a ping to a Windows computer: ping Alonzo's computer, an ICMP echo, and he sends me back an ICMP echo reply. Now realistically, when it comes to network engineers and network architects, that's it: layer one wire, layer two card, layer three logical address or IP address, layer four TCP, UDP or ICMP. But we're going to cover the rest. At layer five, the session layer, we're dealing with something called the socket, which really controls the connection. At layer six we're talking about presentation of data, but there is some networking that occurs here; encryption, for example, occurs at layer six. And the applications are what you use: you go to your web browser, that is a layer seven application; think HTTP, DNS. So layer one wire, layer two card, layer three logical address, layer four protocol, TCP, UDP, ICMP, layer five session, layer six presentation and encryption. Layer seven
is the application itself. Now when it comes to networking, whether it's in the data center or whether it's in the cloud, it's completely irrelevant: everything needs an address. Why do you need an address? You need to be able to communicate with the system. Let's say, for example, I wanted to send Eva a letter and thank her for some of the really great blogs that she collaborated with me on. By the way, on our website there are some really great cloud architect interview question blogs that she worked on very closely with me, and she wrote a beautiful article on edge computing that I recommend everybody read. So how are you going to know how to find it? Well, you've got to know to go to the Go Cloud Careers website. Otherwise, you're not gonna see these great articles.
If I want to send a letter to my mother, I need to know her address. How else would the post office know how to send my letter there? Well, when it comes to message delivery in computers, we need an IP address. It's basically no different than the address of your house. And every address on your network must be unique. How else would the mail system work? Take 123 Main Street: there's a 123 Main Street in Philadelphia, in Newtown Bucks County, in Bensalem Bucks County, in Levittown Bucks County. Every city in the world has a 123 Main Street, okay? So what's different, what makes it unique? The postal code, the zip code. Same thing with IP addressing: every device on a system that's going to talk to another device must be unique. Now, inside of your system you can use private addresses, but they must be unique inside of your organization. And anything on the external internet also needs to be unique. Now when we deal with IP addressing, we're going to deal with two versions, IPv4 and IPv6. IPv4 is the 32-bit address that we've been using for as long as I can remember; I've been using it since the late 80s. And IPv6 was actually invented a long time ago, but we're only now starting to use IPv6 addresses; the world still hasn't fully adopted it
30 or 40 years later. So when you pick addresses for your VPC, inside of your VPC you're going to use private addresses. If you want to know private addressing, if you truly want to understand it, I recommend you read, and you should all read, the Internet Engineering Task Force RFC (Request for Comments) 1918. That is the specification for private IP addresses that all network architects like me use. Inside of it, they specified the addresses to use internally, because we don't have enough public IP addresses. Organizations should use the 10.0.0.0/8 address space; the 172.16.0.0 through 172.31.0.0/16 range, which can also be summarized or aggregated as 172.16.0.0/12; and the 192.168.0.0/16 address space. These are private IP addresses to be used inside your organization, and they are not globally routable. I'm going to mention this right now just because we're going to talk about classless inter-domain routing in a little bit. We used to have these things 30 or 40 years ago called IP classes, which basically meant that every single network used a specified subnet mask. So let's say we had the 1.0.0.0/8 network.
We had about 16 million addresses on that one network. But here's the problem: every card on our router needs to be on a different subnet. So if we used four different slash eights, or Class A addresses, we'd be burning through over 64 million IP addresses. Now the Class B address space had a /16 and went from 128.0.0.0 all the way to 191.255.255.255. Guess what? If you do that, each subnet uses 65,536 addresses, 65,534 of them usable. So if you had four different subnets, one per router card, you would burn through about 260,000 addresses; again, it would be ridiculous. Now a Class C address, which had a /24, went from 192.0.0.0 all the way to 223.255.255.255, and a /24 gives you 254 usable addresses. And of course there's the Class D address space, used for IP multicast, which is 224.0.0.0 through 239.255.255.255. Nobody uses IP classes anymore. We've been using classless inter-domain routing ever since, and that really just means subnetting. In modern times addressing is classless, and routers are going to build a
map of the network. And here's what that means: routers are going to have a table in them, and they're going to say, to reach the 192.168.1.0/30 subnet, take interface X; to reach the 192.168.1.4/30 subnet, take interface Y; to reach the 192.168.1.8/30 subnet, take interface Z. That's what we're talking about, and that's why we're talking about subnetting: because we have to optimize our IP address space and we can't waste it. Every interface needs its own subnet. So let's say, for example, we used a single Class C IP address, 192.168.1.0/24, which gives us 254 hosts. And we had one subnet, which was 192.168.1.0/28. Then the next subnet would be 192.168.1.16/28. And then the next subnet would be 192.168.1.32/28. And I did a free subnetting webinar; somebody from my team can post the free subnetting webinar inside of this chat box to help people get to their goals, because I can't cover the four hours of that webinar over here. But I'll also give a couple of examples here. I decided to create /28s out of that same IP address range I showed you, and you can see the different subnets. Actually, do you want me to do a subnetting webinar? If you want me to do a subnetting webinar, do two things: type hashtag subnetting webinar, and also Chris will do a poll to see if we've got enough of you; we'll probably do something anyway. Yeah, I'll put a poll in the chat box for everybody if you want a subnetting webinar. So let's see how many people vote in the poll and how many hashtag subnetting webinars we get, because if you want it, I'll do it. I don't know when it will fit in the schedule, but I'll find a way to do it. And as you can see, I subnetted that /24 into multiple smaller /28 subnets. Now, subnetting is taking a big network and chopping it down into little networks, so of course we've got to be able to do the opposite, right? We've got to be able to take multiple small networks and bring them back into a single big network.
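If you want to play with this on your own machine, here's a minimal sketch using Python's standard ipaddress module. The specific networks are just the examples from the whiteboard, and this is purely an illustration, not an AWS tool.

```python
import ipaddress

# RFC 1918 check: is an address private?
print(ipaddress.ip_address("10.1.2.3").is_private)   # True
print(ipaddress.ip_address("8.8.8.8").is_private)    # False

# Subnetting: chop 192.168.1.0/24 into /28s (16 addresses each).
network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=28):
    print(subnet)   # 192.168.1.0/28, 192.168.1.16/28, 192.168.1.32/28, ...

# Supernetting (route summarization): collapse four /24s into one summary route.
routes = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(routes)))     # [192.168.0.0/22]
```

You can sanity-check any summary the same way before you hand a route to a provider; if collapse_addresses can't merge the blocks cleanly, it simply returns more than one prefix.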
Why would we do that? Let's go back to what I just showed you: see how we have all these /28 subnets? Now, if you've got a Direct Connect connection from your data center to your VPC, you can only advertise 100 routes to AWS. Could you imagine? We're already using all these routes over here. But we can summarize them into a single route, because when we only have 100 routes, we've got to get real, real creative. So supernetting, which is done for route summarization to reduce the memory load and the CPU load on the routers, is the exact opposite of subnetting. And it is absolutely critical any time we're dealing with routes. You're only going to hear this here; you'll barely even see it on the AWS Advanced Networking exam, because it's considered so basic. But you need to know how to do this as an architect, because if you get it wrong, the whole system falls apart. So supernets take many small subnets and combine them into one giant subnet, and it's really done for the routers. Here's an example. We've got 192.168.0.0/24, 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24, and we summarize those into 192.168.0.0/22. This is classless inter-domain routing: subnet down, supernet up. And it's all related to the traffic engineering that your network architects, your cloud network architects and your cloud network engineers need to be careful of, and believe me, without the cloud networking people, everything will fall apart. The next thing we're going to deal with is IPv6 addresses.
Again, it is a newer form of IP addressing, where new means the last 20 to 30 years, and people are only starting to use it. Like everything in tech, it moves super slow. We all think tech moves fast because the marketing vendors keep changing the names of the same old things, but the things that I worked on in 1996 are the same things I work on today. Of course, now it's better, faster, cheaper and more reliable, but the fundamentals haven't changed. So IPv6 addresses are just a newer form of addressing, and every interface, as I mentioned previously, can be assigned an IPv6 global address. Where do we typically use IPv6 addresses? Mobile phones now have IPv6 addresses. IPv4 is a 32-bit binary address, meaning ones and zeros, 32 of them. IPv6 uses a 128-bit address written in hexadecimal: binary is 0 and 1, hexadecimal is 0 through 9 plus Alpha, Bravo, Charlie, Delta, Echo, Foxtrot. So we're comparing 2 to the 128th power against 2 to the 32nd power. See the difference? We're talking huge scalability. Chris, the next thing that we want to cover is the virtual private cloud, otherwise known as the virtual private data center. Where am I on time? I've been going for about 20 minutes. Okay, let me take a few minutes of questions
before we get to VPC. Good question: what's the difference between a high availability failover site and a disaster recovery site? A failover site is where you have a data center over here and a data center over there, and if this one fails, everything goes to the other one. That's high availability failover. A disaster recovery site is a complete or partial backup of your systems, either ready to go for failover or in a state where it's just stored and not ready to go. So for example, if I'm using physical data centers and I've got three of them, one in New York, one in New Jersey and one in Philadelphia, I've got three data centers in close proximity. If a nuclear bomb were to hit, or a massive earthquake covered that small area, or a massive hurricane came through and all those environments went down, I'm done. By comparison to a high availability failover site, a disaster recovery site is typically 1,000-plus miles away, and ideally it's owned and controlled by different people. Now AWS would say, hey, you can have your stuff in US East and US West and back it up to Europe. But when the AWS cloud goes down, you've got nothing, something my grandmother would call bupkis, for those of you that know what I mean. By comparison, if I was using the AWS cloud and the Azure cloud for high availability, I might also store all my data and all my virtual machine images in Google, and that way, if AWS has a failure, I've got a backup cloud. Backing up within the same cloud will be on your exam, but in the real world you should never, ever, ever do your disaster recovery in the same cloud as your systems, because that's like putting all your
eggs in one basket all over again. So I hope I answered your question. What's the main difference between TCP and UDP? Huge difference. TCP is reliable. So Chad, I send you a message and you say, got it. I send you message two and, Chad, you say, got it. And since you got it, let me speed it up: I send you messages three and four, and you say, got them, and then I'm going to send you some more. I send you messages five, six, seven, eight, and you say, got it, Mike. And then I send you nine, ten, eleven and twelve, and you don't respond. So I'm going to resend you nine, ten, eleven and twelve until you respond, and of course I'm going to slow down and listen for your acknowledgement after your last one. By comparison, UDP is me sending you data as fast as I can, and I don't care whether you receive it or not. So if I'm going to send you something really critical that I need acknowledgement on, it's going to be sent via TCP; TCP is typically used for files, for example. Now what if it's voice? What if I said to you, my cat Cindy is beautiful, and you lost the word "is" and heard "my cat Cindy beautiful"? That's voice, that's like your cell phone, for example, or video on Netflix: it pixelates for a second and then goes back to normal instead of waiting for TCP retransmits. And if I said to you, "beautiful is my cat Cindy," because we lost words in the message, you'd still be able to interpret it. So for voice and video it's always UDP. For reliable transport of anything mission critical, like a file, it's always done via TCP. Quick question: does a good failover site run at the same time as the original site, and should the data be synchronized and identical? Yes. Good thinking.
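Going back to the TCP versus UDP answer for a second, here's a tiny Python sketch using the standard socket module. It isn't part of the course material, just an illustration; the addresses are placeholders, and the TCP connect is wrapped in a try block because nothing is listening on that port.

```python
import socket

# UDP: fire and forget. sendto() returns as soon as the datagram leaves;
# nobody acknowledges it, and if it's lost, it's simply gone.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"my cat Cindy is beautiful", ("127.0.0.1", 5005))
udp.close()

# TCP: connection-oriented and reliable. The kernel performs the handshake,
# acknowledges segments, and retransmits anything the other side didn't get.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 5006))   # fails fast if no listener is there
    tcp.sendall(b"critical file contents")
except OSError as err:
    print("TCP needs a listening peer before any data moves:", err)
finally:
    tcp.close()
```

The point is that reliability lives in the transport: with SOCK_STREAM the operating system does the acknowledgements and retransmissions for you, and with SOCK_DGRAM it does not.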
Kira and Charles, what are the limitations of IPv4? Realistically speaking, the number of IP addresses, two to the 32nd power; that's the major problem. Otherwise it's fine. What are some challenges associated with transitioning from IPv4 to IPv6? Re-addressing your systems, setting up the routing protocols and rebuilding the routing tables. That doesn't sound like much, but I'm going to tell you, CCIEs like me have been cleaning up massive mistakes in people's IP addressing schemes for decades. What typically happens is they have somebody like a sysadmin or a programmer who thinks they know IP addressing. And it's not that these people aren't smart: programmers know programming, sysadmins know systems administration. But the person that sets up the IP addressing plan needs to be the best network architect in the entire building, because it takes that much networking capability to do these things well. They need to be the most senior person, because IP addresses are kind of like the roads in a city: if the roads are designed poorly, there are traffic jams all the time. So the re-addressing is the main challenge, plus resetting up the interior gateway protocols, such as OSPF or Intermediate System to Intermediate System, and the exterior gateway protocols, such as BGP, to deal with a new IPv6 address family. Not that it's anything difficult for somebody like me, because I've been dealing with this forever, but it's something that's going to take the help of a strong network engineer at minimum, or better, a great network architect. Good question. How do you create an IP address? I don't know what you mean by that. How do you assign an IP address? That's typically done via DHCP, or Dynamic Host Configuration Protocol. Okay, I'm gonna get back to the content. But
if you want a subnetting webinar, please vote for the subnetting webinar. So vote for the subnetting webinar, and drop hashtag AWS Solution Architect Associate as well; you know, click on that and vote so we know you're with us. So, you know, the AWS VPC section is pretty deep, because we're now dealing with your virtual private data center, which is really all a cloud is. What I'm going to do is cover as much as I can today; we may go over a little bit, because I'm trying to avoid Saturday for everybody, and I want to give you as much as I can even if we go a little longer. So let's talk about the components of the VPC: its routing, the routers that connect to the internet called internet gateways, egress-only internet gateways, NAT instances and NAT gateways, elastic IP addresses, VPC endpoints, VPC peering, access control lists, specifically network access control lists, and security groups. To begin, let's talk about routing
tables. How do you get your traffic to its destination? The routers do it. So how do the routers do it? Typically speaking, they run a little protocol, they all talk to each other, they tell each other their routes, and they build a map of the network. And when it comes to routing, there are two ways you can build your routing tables. Option one, you manually tell it. So for any of you that are around 50, or, I'm not quite 50, but any of you between 40 and 65: perhaps you remember when you wanted to go to your friend Billy Bob's house, or Julie's house, or Sarah's house, and you didn't know where they lived, and they gave you their address. You picked up a physical paper map, looked at it, and wrote down on a piece of paper: take I-95 north for 22 miles, get off at the exit, take Route 22 east for three miles, then make a right onto Main Street, and her house is 123 Main Street. You wrote it down on paper. Guess what? That was great, until, poof, we're trying to drive to our friend Julie's house and the road is blocked by police officers. Now we don't know how to get there; they point you to some detour, you get lost, and four hours later you arrive, all frustrated. Then came the invention of the GPS: recalculating, and you get to the destination. So with routers we've got the same two options: we can manually configure static routes, or we can use a dynamic routing protocol, and the one we care about here is called BGP. Make sure you follow me on LinkedIn, by the way; I'm going to release some unbelievable free BGP training real soon, so follow me on LinkedIn if you want to get it. Realistically speaking, anything that matters is going to use dynamic routing protocols, and the routers are going to build a map. Here's what the map is going to look like. It's going to say: hey, to reach the 172.16.1.0 subnet, it's right here, I'm already on that subnet, so it's local. To reach the 192.168.0.0 subnet, use this interface, the peering connection pcx-123456. To reach 192.168.1.0/24, which is more specific than the previous one, use the pcx-654321 interface. And if I want to go to the internet, notice we have what's called the default route, all zeros, 0.0.0.0/0, which says: if you don't know where to go, go here, to the internet gateway, igw-123456.
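That "most specific route wins" behavior (and the all-zeros default) is easy to see in code. Here's a minimal sketch of longest-prefix-match lookup using Python's ipaddress module; the route targets are just the made-up names from the whiteboard, and real routers use far more efficient data structures than a list scan.

```python
import ipaddress

# A toy route table: (prefix, where to send the packet)
routes = [
    (ipaddress.ip_network("172.16.1.0/24"),  "local"),
    (ipaddress.ip_network("192.168.0.0/16"), "pcx-123456"),
    (ipaddress.ip_network("192.168.1.0/24"), "pcx-654321"),   # more specific
    (ipaddress.ip_network("0.0.0.0/0"),      "igw-123456"),   # default route
]

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Of all prefixes that contain the address, pick the longest (most specific).
    matches = [(net, target) for net, target in routes if addr in net]
    best = max(matches, key=lambda item: item[0].prefixlen)
    return best[1]

print(next_hop("192.168.1.57"))   # pcx-654321 (the /24 beats the /16)
print(next_hop("192.168.9.9"))    # pcx-123456
print(next_hop("8.8.8.8"))        # igw-123456 (default route)
```

Whether it's a router in your data center or the route table attached to a VPC subnet, the selection rule is the same: the most specific matching prefix wins, and 0.0.0.0/0 only catches whatever nothing else matched.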
Kind of keep that in the back of your mind. I use phonetic alphabets constantly, because, again, I'm all about precision language, a language that's used in every country in the world to make things simple. Now, what does it look like in the enterprise? I want you guys to have some more knowledge here. Typically speaking, we're going to have what's called an interior gateway protocol that's optimized for speed. And, you know, there have been lots of interior gateway routing protocols over the years: there was RIP, there was RIPv2, there was IGRP, EIGRP, OSPF and Intermediate System to Intermediate System. In today's world it's either OSPF, which is what most enterprises use, or, for the global internet service providers, OSPF or IS-IS. These are dynamic routing protocols. The organizations themselves will internally run their own interior gateway protocol that's optimized for speed, and they'll run what's called an exterior gateway protocol, which is what's used to connect to external entities. So: interior gateway protocol locally, exterior gateway protocol between organizations. When an organization connects their data center to the cloud, inside of their data center they're running an interior gateway protocol like OSPF, and when they connect to the cloud provider to exchange routing information, they're using BGP, because that's really the only exterior gateway protocol we use in modern times. So that's the way
that kind of looks. And I'm only going to briefly graze over BGP, because it would take me at least four hours to do a BGP workshop, and I've got a really beautiful document coming for you; please follow me on LinkedIn, you don't want to miss this document. So when you're connecting to AWS, over either a Direct Connect connection or potentially even a VPN, you've got to find a way to exchange routing information. Why? Your data center won't be able to reach the cloud if you don't have the routes, and the cloud won't be able to reach your users if it doesn't have their routes. So you need to get the routing working, and it's going to be via BGP. AWS, as well as all cloud providers, supports connecting to them via BGP, because your organization and the cloud provider are exterior to each other. We use BGP because it's incredibly tunable and highly scalable. We can do all kinds of traffic engineering: take this link to go here, take that link there, load share back and forth. BGP is amazing, it's beautiful, I've got over 10,000 hours of experience with it, and I love it, and I use it for everything as the
old network architect that I am. Test question for you: you might see it, you'll definitely see it on the AWS Advanced Networking exam, which I don't recommend you take because it's too basic, and you'll likely see it on either the Certified Solutions Architect Associate or Professional, probably the Professional, but you may see it on either one. BGP uses TCP port 179. I'm gonna say it again: BGP uses TCP port 179. Why do I say this to you? Well, chances are you're gonna have a firewall, right? Or an access list somewhere for security. If you need to connect and you've got a firewall between you and AWS, and you don't allow TCP port 179, guess what? The BGP connection will never come up, and no traffic will get anywhere.
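Just as an illustration of what "allow TCP 179" looks like in practice: if the filter happened to be an AWS security group, a rule like the following (written with boto3, with made-up IDs and a made-up peer range) would open it; on most real deployments the equivalent rule lives on your on-premises firewall or router ACL instead.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical security group protecting the VPN/router instance that peers with AWS.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",           # placeholder ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 179,                       # BGP listens on TCP 179
        "ToPort": 179,
        "IpRanges": [{
            "CidrIp": "169.254.255.0/30",      # placeholder: the BGP peer's address only
            "Description": "Allow BGP session from the peer router",
        }],
    }],
)
```

The important habit is the narrow source range: you open port 179 to the specific peer you configured, not to the world.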
Also, with BGP, you will require an autonomous system number to identify your organization when you connect, and you must use BGP with Direct Connect connections. There's a tremendous number of tuning options, and the document that I'm releasing will explain every last one of them. AWS also supports the no-export community, which is great if you don't want to become a transit network; that's way beyond the scope here, it's also far beyond the AWS Advanced Networking exam, we're getting into CCIE concepts which I can't cover in this short period of time. But what it means is this: if I tell Chris information about my routes, and Chris tells Alonzo information about my routes, Alonzo can reach all of my routes through Chris; that's called transit. If I tell Chris, Chris, do not tell Alonzo about any of these routes, then Alonzo can't reach me through Chris. So the no-export community means: don't tell your neighbors downstream or upstream any routes that you've learned from this provider, and AWS supports it. AWS's implementation is a very basic BGP implementation, but they do support weight, local preference, AS path prepending and that kind of specific routing information. And they only let you advertise 100 routes, and 100 routes is nothing. I worked on networks years ago that had 20 and 30 thousand routes, and that was back when routers were small, with like a couple hundred megahertz CPUs in them; kind of put that into context, megahertz. So it's all about using the right
IP addresses. Now let's talk about an internet gateway. What's a gateway, everybody? It's just a router. A gateway is a router. So if you want to connect to the internet, you need a router that connects to the internet, right? AWS calls the routers that connect to the internet "internet gateways." A good, logical term, truth be told. And AWS will tell you there are no bandwidth constraints or performance limitations on your internet gateway. There are always performance limitations, but for the most part it gives you the speed you need. And it's just a router that connects to the internet. Here's how you create one; it's very simple. You basically go to the management console or the CLI and you attach an internet gateway to your VPC, which is your virtual private data center. You create a default route, which was that 0.0.0.0/0 route, and you send all unknown traffic to that internet gateway. You need a public IP address on the internet gateway, as well as on any systems that need to be reachable from the internet.
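Here's roughly what those steps look like if you script them with boto3 instead of clicking through the console; a minimal sketch with placeholder IDs, no error handling, and it assumes the VPC and route table already exist.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the internet gateway (the router that faces the internet).
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]

# 2. Attach it to your VPC (your virtual private data center).
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")

# 3. Add the default route: anything without a more specific route
#    goes to the internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # placeholder route table ID
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw,
)
```

After that, any instance in a subnet using that route table, and holding a public IP, is reachable from the internet in both directions, which is exactly why Mike keeps saying to put your firewalls and IDS/IPS in front of it.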
Now, an internet gateway means the systems that are behind it with a public address are reachable from the internet. An internet gateway is reachable inbound and outbound, from the internet, both ways. Now, what do you guys think this means? It means hackers can find you. Hackers can find you. So an internet gateway means you're available to be hacked. So you have to have the right next-generation firewalls, intrusion detection and intrusion prevention systems, et cetera, and set up your demilitarized zones intelligently. None of that is covered in the AWS Certified Solutions Architect Associate or Professional, but if you're going to be an architect, you need to know these things; we teach it, obviously, in our cloud architect career development program. So just understand: internet gateway means reachable in and out. What does it look like architecturally? According to AWS, it looks very simple. You've got your virtual private data center,
which you can see here, and you've got some virtual machines. Let's say they're all behind a load balancer, behind the router. The public address on the load balancer would be an IP address like 3.3.3.3. It's a public address, which is routable from the internet. All the systems behind it fall in the CIDR range of 172.16.0.0/16, and if they don't know where to send something, they look at the routing table. So just look at this: you can see two subnets over here, 172.16.0.0/24, which the routing table shows as local in the upper right-hand corner, and 172.16.3.0/24, which is also local because it's inside of our environment. And there you go, that's your default route, 0.0.0.0/0, which says if you don't know where to go, go to the internet gateway. That's all we're talking about; it's the same thing we've always done with routers and internet connections, AWS just calls them internet gateways. Ingress means traffic coming in, egress means traffic going out. Traffic comes in, you can be hacked, so
secure accordingly. Now let's talk about something called an egress-only internet gateway. Egress means it allows your traffic to go out, but it doesn't allow any traffic to come in. So an egress-only internet gateway is designed for your systems that use IPv6, and it allows your systems to go out to the internet, maybe update their operating system, download patches, et cetera, but it doesn't allow external traffic in. So it's much more secure than using an internet gateway. You should still be using firewalls and things behind it anyway, but keep that in mind.
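For completeness, here's what attaching one looks like in boto3; again just a sketch with placeholder IDs, and note that because egress-only internet gateways are IPv6-only, the default route is ::/0 rather than 0.0.0.0/0.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the egress-only internet gateway for a VPC that has an IPv6 CIDR.
eigw = ec2.create_egress_only_internet_gateway(
    VpcId="vpc-0123456789abcdef0"           # placeholder VPC ID
)["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Send all outbound IPv6 traffic through it. There is no inbound path:
# connections initiated from the internet are simply not allowed in.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # placeholder route table ID
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw,
)
```

So your IPv6 hosts can reach out for patches and updates, but nothing on the internet can open a connection back to them through this path.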
Egress-only internet gateways are stateful. What does this mean? So here's the thing. I'm sitting behind my phone, and my phone just became a firewall. And I want to go to the cindy.com website to see photos of the beautiful Cindy the cat doing cat things: jumping up, sleeping, whatever it is she needs to do, because she has all kinds of fun every day. She looks like this, by the way. But my phone is the firewall, and I want to go to the cindy.com website. I type www.cindy.com into my browser, DNS tells me the IPv4 or IPv6 address of the website, and my traffic hits my default gateway, goes through my firewall and out to the internet. Now when I sent my traffic through the firewall, the firewall paid attention to me, and it says: Mike Gibbs is going to www.cindy.com. So my request goes through the firewall, it pierces the firewall, it goes out over the internet to the cindy.com website. cindy.com says, here's the photo of me you requested, Mike, or Daddy, whatever you want to use, it goes back through the firewall, and it comes back to me. Why is her traffic allowed through the firewall? Because the firewall has a table that says: Mike Gibbs went to www.cindy.com, cindy.com is answering Mike Gibbs, so allow the traffic. Now, conversely, we have a hacker on the internet who wants to get to Mike Gibbs's computer that's behind the firewall. He or she sends their traffic to the firewall, and the firewall says: denied. The next hacker comes in, denied; the next hacker tries to come in, denied. And why is that? Because the firewall doesn't know about that connection; it has no state for it. The state is merely the firewall tracking and remembering the connections it saw me open. That's what stateful means. So egress-only internet gateways allow your traffic out to the internet and the replies back in, but they do not allow inbound connectivity to me, and that's typically used for your hosts to go out and get patches.
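The "table" Mike describes is just connection tracking. Here's a toy sketch of the idea in Python, purely illustrative; a real firewall also tracks ports, protocols, sequence numbers and timeouts.

```python
# Toy stateful filter: outbound requests create state; inbound traffic is only
# allowed if it matches state we already recorded.
state = set()

def outbound(src, dst):
    state.add((src, dst))          # remember: src opened a connection to dst
    print(f"allow out: {src} -> {dst}")

def inbound(src, dst):
    # A reply is allowed only if dst previously initiated a connection to src.
    if (dst, src) in state:
        print(f"allow in:  {src} -> {dst} (reply to an existing connection)")
    else:
        print(f"deny in:   {src} -> {dst} (no state, connection refused)")

outbound("mikes-phone", "cindy.com")      # Mike browses out through the firewall
inbound("cindy.com", "mikes-phone")       # the cat photo comes back: allowed
inbound("hacker.example", "mikes-phone")  # unsolicited inbound: denied
```

That is all "stateful" means here: replies to conversations you started get back in, and everything else bounces off.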
Now, NAT instances. Well, realistically speaking, a NAT instance is something that, in the old days, you would have stuck behind your internet gateway to translate your private addresses into public addresses. Now, the reality is AWS has a better service called the NAT gateway, and we'll get to it, but you might still need to create your own version of a NAT instance, even if it's not an AWS NAT instance. NAT translates one address into another. Why might you need to do this? Company A addressed their systems out of the 10.0.0.0/8 address space. Company A is a big, strong, powerful company and it just bought Company B, and guess what: Company B is also using 10.0.0.0/8. Now, I told you your address space needs to be unique between systems that talk to each other. So if Company A wants to talk to Company B and they use the same IP addresses, they've got a problem; it doesn't work. So we might use NAT, or network address translation, to translate those addresses into different addresses so the systems can talk to each other. And there are multiple forms of NAT: one-to-one NAT, one-to-many NAT, static NAT, dynamic NAT, PAT. Most of those aren't even covered in the AWS Advanced Networking exam, but you may need to do them, so it's critical information for the cloud architect or the cloud network architect, and that's why we have training that goes way beyond this; here we're focused on the certification in order to pass the exam. So, a NAT instance is available as an AMI, meaning an image that you can use from AWS, or you could create your own NAT instance if you needed to do NAT. But if you're going to use a NAT instance to connect to the internet, you have to put it in a public subnet, and it's going to have a default route to the internet gateway. All your hosts in the private subnet will basically have a default route to the NAT instance, and the NAT instance will have a default route to the internet gateway. It's going to look like this: your hosts are in a private subnet, they want to go to the internet, they have a route that says go to the NAT instance, and the NAT instance sends your traffic to the internet gateway, which sends your traffic out
there to the internet. Now, that was getting pretty complicated, so AWS decided to come up with a simpler solution, and they came up with the NAT gateway, which is a fully managed service. The NAT gateway connects you to the internet and translates your private addresses into a public address. NAT gateways do something called PAT, or port address translation, also known as NAT overload, where they translate a tremendous number of addresses into a single address, and they do that by using the IP address plus a port number to keep each inside host separate. If you want to know more about that, we've got a free CCNA course that you can actually see on our website; my team can pop a link to it. So, a NAT gateway is redundant inside an availability zone, and if you've got two availability zones, guess what, you need two NAT gateways. You put it in a public subnet, it gets a public IP automatically assigned to it for the life of the gateway, and you basically just give all your systems a route to the NAT gateway. It's kind of like a NAT instance and an internet gateway all in one, without you having to think about the security concerns to the same degree as you would with a NAT instance, and it basically provides full internet access for the systems behind it. So your systems are here, they attach to the NAT gateway, and it provides internet access and NAT, network address translation, at the same time.
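A rough boto3 sketch of that setup, with placeholder IDs: the NAT gateway sits in a public subnet, borrows an elastic IP, and the private subnet's route table points its default route at it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Borrow a public (elastic) IP address for the NAT gateway.
eip = ec2.allocate_address(Domain="vpc")["AllocationId"]

# 2. Create the NAT gateway in a PUBLIC subnet and wait until it's ready.
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0aaaa1111bbbb2222",      # placeholder public subnet
    AllocationId=eip,
)["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw])

# 3. Point the PRIVATE subnet's route table at the NAT gateway, so hosts with
#    only private addresses can reach out (and get replies) but can't be
#    reached from the internet.
ec2.create_route(
    RouteTableId="rtb-0cccc3333dddd4444",     # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw,
)
```

And remember the point from the talk: the NAT gateway is redundant within one availability zone only, so if you run workloads in two AZs you create one NAT gateway per AZ.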
Now, on your computer, if you want to connect it to the internet, you've got to plug it into the network, right? The Ethernet card on your computer. Or you could use Wi-Fi. Quick tip here: if it matters, you don't use wireless, ever. You're not going to see a data center built on wireless; they wire things up. If you go see a concert, chances are most of the stuff behind the scenes is wired; certain things will be wireless where they have no choice, but most everything is wired. Why? Because wires are more reliable than wireless. So every computer needs to be plugged into the network, and what it uses is called a network interface. When we deal with AWS, their marketing folks call it an elastic network interface. An elastic network interface is just an Ethernet port, or rather a virtual Ethernet port. And by default, when you turn on your system, it comes with one network interface. Now, there are times where you might want to put a system on two different subnets at the same time, and you can basically give it two interfaces, one on each subnet. You'll see a lot of people teaching you to do this with a bastion host, which is one of the worst security things you could potentially do (the Azure Bastion service is a different story). The way most people do it, I'm not even going to cover; just don't make a bastion host. We've got a video on why you shouldn't create bastion hosts, at least the way they teach it in the certification. People say it can be done intelligently; actually, there's no intelligent way to do it. What happens with a bastion host is you stick a host on the public internet with two network cards, and it's got a back door into your private systems. You can SSH to this thing, wide open on the internet, and then you can back-door into your systems; but so can any hacker. But there are legitimate business reasons for multiple interfaces. Maybe you've got users on two different subnets and, for maximum performance, you don't want to route between subnets on your router. Or you can create a private management network and manage things over the management
network. There are lots of reasons you might need a multihomed server. Now, if you need a public address, like on a web server, or on a load balancer if there are multiple web servers behind the load balancer, you're going to need a public IP address. So what do you think a public IP address is called in AWS? An elastic IP address, because the marketing teams love the word elastic. So what is an elastic IP address? It's a public address that you borrow from AWS, and you keep it as long as you need it; when you're finished with it, you return it to AWS's global pool, and it gets given to another customer when they're ready. An elastic IP address can be a single public address mapped to a single private address, or it can be a public address that's mapped to many private addresses, as with NAT overload, otherwise known as port address translation, on a NAT gateway. And we can set it up in multiple ways.
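In boto3 terms, "borrowing" and attaching an elastic IP is a two-step call; a sketch with placeholder IDs, shown here against an instance, though you can just as well associate it with a specific network interface.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Borrow a public IPv4 address from AWS's pool.
allocation = ec2.allocate_address(Domain="vpc")
print("Elastic IP:", allocation["PublicIp"])

# 2. Map it to something private: an instance (or a NetworkInterfaceId).
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
)

# When you're done with it, release it so someone else can use it.
# ec2.release_address(AllocationId=allocation["AllocationId"])
```

The release call is commented out on purpose: you give the address back only when nothing references it any more, otherwise your DNS records would be pointing at an address someone else might receive.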
Here's what it actually looks like, architecturally speaking: you can see you've got your systems, and the systems basically have a public IP address mapped to them. Chris, how long have I been talking? Not sure, actually, I just assumed you were going to keep going, but it's been about 20, maybe 25 minutes. What I'm going to do is I'm going to cover
endpoints. And then after endpoints I'm going to stop, I'll take some questions, and I think we should probably finish from there, unless people want a little more. So let's discuss endpoints. Endpoints are a way to connect things to each other; that's why they're called endpoints. And we're going to be dealing with two kinds of VPC endpoints. Endpoints are used to allow your VPC to connect to another AWS service, or to another network. We use endpoints because the performance is better, the latency is lower, and the security and cost are better than going out to the internet. Here's an example of an endpoint in action. Let's say you've got your VPC, and your servers there want to communicate with object storage, otherwise known as AWS S3. There are two ways you could do it. You could send your traffic out to the internet and back into S3. Now, the internet's not secure, so you'd have to encrypt your traffic. And here's the scary part: you have to pay to send your traffic out to the internet, the internet's performance is slow and not guaranteed, and then it would come back to AWS anyway. Or you could do the other option, which is to just send your traffic across the AWS network, and that's the point of endpoint communication. Now, you have no control over the internet, and you have no control over the AWS network either, but AWS can control the performance of their network, and they can't control the internet. So endpoints are going to give you lower latency and better, more secure communications. Endpoints are really virtual devices, and because they're virtual, they don't go down; their availability is as good as the cloud itself. We're going to talk about two kinds of endpoints: gateway
endpoints and interface endpoints. Gateway endpoints provide high-speed access to AWS services like S3. The way it works is it creates a route to the service, puts that route in the routing table, and allows private access from your VPC to, say, object storage and vice versa. When you create an endpoint for S3, what happens? A prefix list and a VPC endpoint are created. The prefix list adheres to a naming convention that starts with "pl-" followed by a string of characters, and it gets placed in the routing table. That way your routers, the virtual routers in the VPC, will know: if you want to reach S3, go that way.
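Creating one with boto3 looks roughly like this; placeholder IDs again, and the service name shown assumes the us-east-1 region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A GATEWAY endpoint for S3: AWS adds a route for the S3 prefix list (pl-...)
# to the route tables you name, so traffic to S3 stays on the AWS network.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",     # region-specific service name
    RouteTableIds=["rtb-0123456789abcdef0"],      # only these subnets get the route
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```

Notice that you choose which route tables receive the prefix-list route; that choice is exactly the cheap-and-easy security control described next, because a subnet with no route to the endpoint simply can't reach it that way.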
So let's talk a little bit about securing an endpoint. You're going to set up an endpoint policy that limits the resources that are available through the endpoint. And remember, if you don't have a route to something, you can't reach it, so only distribute your routing information to the subnets that need it. Because if you don't have a route, you can't reach it. That's a great way to get some security right then and there, quickly, cheaply and easily. Next, we'll discuss interface endpoints. Now,
interface endpoints are a way to connect to different AWS services or to other organizations. So let's say EC2, Systems Manager, Kinesis, load balancers. Or maybe you're a car manufacturer, and you make a car but you've got a tire manufacturer, a battery manufacturer and a steel manufacturer all on AWS, and you want to connect to them directly across the AWS cloud. That's when you're using an interface endpoint. Interface endpoints work a little bit differently than gateway endpoints. Here's what happens when you create an interface endpoint: it effectively creates a network interface in your VPC, local to your VPC, and you use that interface to connect to the other party. AWS will automatically generate DNS names, so you don't have to remember the IP address; you can connect to it via the name. Interface endpoints actually use the AWS PrivateLink service, which is like a one-way pseudowire, a virtual wire, to connect to things across the AWS network. It's like a virtual private line, and PrivateLink creates these network interfaces that you use, and it restricts all traffic going across the endpoint to traffic between your VPC and that service from the customer or partner.
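The boto3 call is the same API as before, just with the Interface type and the extra pieces an ENI needs: subnets to live in and a security group. All IDs below are placeholders, and the service name is just one example of the many interface-endpoint services.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An INTERFACE endpoint: AWS drops an elastic network interface into the
# subnets you list and (optionally) wires up private DNS so the usual service
# hostname resolves to that private interface.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.ssm",     # e.g. Systems Manager
    SubnetIds=["subnet-0aaaa1111bbbb2222"],        # where the ENI is created
    SecurityGroupIds=["sg-0123456789abcdef0"],     # controls who can use it
    PrivateDnsEnabled=True,
)
```

Because the endpoint shows up as an ENI with a security group, you secure it the same way you secure any other interface: with the security group rules and with the endpoint policy.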
What does it look like, architecturally speaking? Let's say you've got your VPC, and you want to reach a service provider's VPC, which we'll call VPC 2. You create a VPC endpoint, and you can reach the system inside of VPC 2. I'm going to stop there, because VPC peering is another concept, and I think I've been speaking a long time and I don't want to confuse people. Chris, are there any questions for me? I'm sure there are. All right, let's see if there are questions. I'm worried we covered so much and people
are getting tired. Oh yeah, comments on the cat. We love my cat Cindy. She came into our house like a storm; I bought her for my wife, because my wife loves cats, and the next thing I realized the cat was sleeping with me and following me from room to room. Special. How do you recognize an IPv4 versus an IPv6 address? Leo from my team did a great thing here. An IPv4 address is going to look something like 10.0.1.128, whereas an IPv6 address is going to be in hexadecimal, and it's going to look something like 2002, colon, then a group like 0de7, then another colon, then characters like a, b, c, d, then another colon, and then groups like 0001, and it goes on for all 128 bits. That's a great way
to see it; great example. Do endpoints use patch cables? No, these are virtual things. Underneath, every server is plugged into switches, typically two, which are plugged into routers, and the routers have cables between each other. But the endpoints are virtual; they use the IP network that's already been established. Good question. For the endpoints, everything must have IP connectivity; everything is plugged into the network, which means you may need a network engineer. Next question: can you access your S3 bucket from an EC2 instance in a private subnet via a gateway endpoint? That is correct. You could also go out and reach it through the internet if you wanted to, but that wouldn't
be efficient. Are there any implications of using an elastic IP once it's recycled back? No, not at all; that's really your only option. The implications are that if you later want to use a different one, you'd have to update your DNS mappings and things like that, kind of like when you go from one service provider to another, but everything else would be okay. You just have to change your IP addresses and your routing; well, you don't really have to do much with routing in the cloud, it's much easier, and your DNS. Good question. Are there cases when you would prefer a gateway endpoint or an interface endpoint? Yeah, you have to use the right one based upon what you're connecting to. If you're connecting to external customers or partners, it's an interface endpoint. If you're connecting to S3 or DynamoDB, and it's S3 for certain and I believe DynamoDB as well, that uses a gateway endpoint; everything else uses an interface endpoint. Okay, so I do want you to, at least, I want
to make sure you read the book as well. You know, in the book we have the space and the time to provide even more content, and there's a relationship between the content that we're discussing here and the book. In the book we're giving you much more focused AWS content for the exams, and I'm giving you as much non-AWS content as time permitted while still keeping this bootcamp focused. So I want you to read the book, I want you to practice the labs, and I also want you to watch all of these sessions, because I want the best for you. Two more endpoint questions. For endpoints and IPv6: yes, it's kind of a special route that they put in the routing table; remember it shows up as a prefix list, pl- followed by characters, so it is something slightly different. Does AWS generate the DNS names? Yes, absolutely, for interface endpoints. How do you secure BGP routing information and prevent unauthorized changes
to the routing table? Well, you can do certain things. You can set up MD5 peer authentication, which is not the greatest protection in the world; you manually define your BGP peers, which is something; but what you typically want to do is set up your BGP policy with either a distribute list, for example, or a route map, so that you're only accepting routes that come from the subnets they're supposed to come from. That way you're generally covered. There's some very good guidance out there on acceptable ways to secure BGP, and I actually wrote an article for Hacker Noon on how to secure your BGP, and on BGP hacking and hijacking, so kind of keep that in the back of your mind. Yes, those are the main things you can do; it's not a lot. The MD5 peer authentication, the fact that you manually define the peers, and route filtering: if I know I'm supposed to receive 172.16.0.0/19 from a peer, I'm only going to accept that route from them and not allow any other routes to be injected into my systems, and the cloud provider would do the same. Next question: what is the AWS backbone? It's their high performance, high speed network. Strong question there. Please download those ebooks and labs. And most importantly,
tomorrow. There's a lot of questions on careers. And there is a massive difference between
certifications and getting hired in today's world. I want you to get hired, I want you
to earn a lot more than you dreamed possible. And that's very easy to do if you know exactly
what to learn. And most of those things are not in certifications. And all of you can
do it, regardless of your background. So please join us tomorrow on the become the ultimate
Cloud Architect webinar. Not only will we present for about 30 minutes, but we will spend an additional 90 minutes answering your questions live, so kind of keep that in the back of your mind. And we can do it face to face. And if you had a good time and
you're learning please hit the like button, please subscribe to our Youtube channel and
hit the notification bell so you'll be notified when we do these things. And, you know, I
it costs as much as buying a new car to put on one of these productions. And we do it to help those that can't afford training, because I really want to help the entire world build
their best career. Please share this, tell your friends to take this course we're keeping
it live on YouTube completely free. So please send an email a tweet, make a LinkedIn post.
We put a lot of time, effort and money on this. There's about seven members of my team
working on this right now. Let alone the hundreds and hundreds of hours behind the scenes to
put something like this together. Please share this so we can help as many people as possible.
BGP will be have to be configured on both sides. BGP is only configured on routers,
not switches unless it is a layer three switch, which is really a switch router combined.
So thrilled that you're there. Sara, thank you so much I don't know what you mean by that. But anybody
can follow this course I don't care whether you take the certification or not, I care
that you get the knowledge. I've got people getting hardest caught architects every single
day. And some of them never even took a certification exam, but they're trained for the job. And
they don't need any fancy education. They don't need any experience. They needed to
read the book and watch this course again and again, each time I promise you will pick
up other things. CMS, thank you so much. I really owe me thank you so much. Lonzo another
awesome way to live and go code bootcamp, I think so Alonzo, I hope so. I want to thank
you so much. Chris, thank you. The Go Cloud Architect with
him is thrilled to help David. We're thrilled to be here for you. Karen, thank you and team
Mike and team. Thank you so much. Thank you all having a great time. Allows you to remember
that song y'all having a good time. And so put in the chat box because it's driving me
nuts. Not the pitbull version. But the original version. I like the purple version. Thanks,
Jason. Thanks. Which webinars that for tomorrow, that is the how to get your first cloud architect
job. And not only will tell you how to get your first cloud architect job, but how to
be great at it so you can have a great career. Thanks, Lady Godiva. Thanks, Jim. Thank you.
Great. Thank you. And we appreciate it so much. We love the cloud community. And I'm
thrilled you're here. AJ, thanks so much. Please hit the Like button. Come back. We
have to thank AJ for a service in the Marine Corps. He is a great guy. We're thrilled that
you're here. blocked out learned a ton. I'm so happy that makes me happy you. And Tom.
Well, thanks so much. We're thrilled you're here. Thanks, Kristen. We're happy to help.
Thank you, Collins. Peter, thank you so much. And Emanuel. We're thrilled you're here. And
some from my team. Thanks so much. Thank you, Samira, and thank you, Lady Godiva. Ajay,
we're thrilled to see you here. You feel certified already? Wonderful. That's our point
And our goal. You're more than welcome, Victor. Sure. You're so welcome. Omar, we're so happy to keep providing content.
We love doing this. Great job. And back to you and your guests. Thank you. I really do
love my cats. Actually, Chris has a beautiful cat to another educating still learning things
with ease. I'm so thrilled to hear that. And Igor, we're so happy to help. As a current student, it has helped me land a career; I'm so thrilled about that, and that you're continuing to learn more. I'm thrilled to know that, Dino. That's our whole point: change lives and get people hired.
Would you suggest taking a Solution Architect first and the CCNA? I don't know what your
goals are. Please join us on the how to get your first cloud job webinar tomorrow night.
Because your goals determine what you should learn and how to do it. Sandeep, thank you.
So Sanjay, thank you so much. Please join us in that webinar tomorrow. It will be a
life changing event. So thank you all so much. I'll see you all
tomorrow in class. And please make sure you join that webinar. It is really valuable information
that I want you all to hear. Have a wonderful, wonderful night.