>> Hey everyone. In today's episode, we're going to take a Zero to
Hero approach with Azure DevOps. We have a special guest joining us today, Nana, from TechWorld with Nana. Now Nana loves everything about
DevOps and loves Azure DevOps. She and I put together a
very special video for you. So please follow along. Now, we also have a lot of resources
that you can access as well. Instead of going through 100 links, we have one concise link
for you to go through. Now let's quickly walk through
how to access the documents, the repository, and more. You're going to go ahead and click this link below
aka.ms/techworldwithnana. You're going to go ahead and open this page. You're going to enter in all your details. That way you're going to have
all your links in one place, easy for you to see. [MUSIC]. >> Hello everyone and
welcome to the DevOps Lab. I'm April Edwards and I'm joined by a very special guest.
Welcome, Nana. >> Hi April. I'm really excited
to be on the show and really excited about the topic that
we're going to discuss today, which is going to be Azure DevOps. We're actually going to start with a brief introduction to DevOps in general and cross-functional teams. We're going to go through the Azure DevOps platform and basically see how the Azure DevOps features map to those practices. We're going to go through all the features and all the amazing stuff you can do on the platform, and basically see how you can use that in DevOps teams in practice. >> Awesome. Let's get started. For everyone out there today that is just starting off in DevOps, I want to level set with
you, and I want to first start off with how we define DevOps. Now, this is how we
define DevOps at Microsoft. I think if I were to
ask each and every one of you how you
would define DevOps, I would get a very different answer. Some of us have a lot of
experience working in DevOps practices and some are
just starting from scratch. I want to level set here. We define DevOps as the
union of people, process, and products to enable continuous
delivery of value to end users. I want to focus on this word value, because if you've ever seen some of Nana's videos before, you know she talks a lot about the tooling, the processes, and the
people side of it. Really 80 percent of
it is the people side. We tend to work in silos in
our different organizations, but we want to break those
down and we're going to use the tooling today to
help us leverage that. But more importantly, we
want to learn how to use the tooling to deliver
value to our organization. Let's talk about how
our organizations are currently carved up
that many of us work with. We tend to have an organization where we're
trying to deliver something. Maybe as an IT pro
you're trying to keep those lights on 24/7,
no matter what. As an engineer or developer, you're trying to push new features. Those two things always conflict. It's really hard working in these siloed teams because not only are we incentivized very differently, but we're also trying to
deliver different things. We need to come together into
product teams, even into feature teams, to deliver that value across organizations
and to our end users. We're going to talk about
a feature team today. This is going to be a
cross-functional team. What does this mean for you? A lot of us work in one space. We might work with data or APIs explicitly or maybe
the user interface. If you're working in
the operational space, maybe you work in storage or just networking and you don't care about the rest of
the stack and I get it. It's sometimes hard to care about those other parts when you're so focused on what you have to deliver. But if we could develop
into more vertical teams, we can have empathy
across that stack. That's really important to
having that DevOps mindset. Yes, you might be a UI developer, but learning to understand how
your UI impacts that data stack, or if you're working in
storage or networking, you need to understand
how that component of your infrastructure impacts
applications you're working on, and even maybe the data components. Let's dig into Azure DevOps. >> Exactly as you mentioned, April, most of DevOps is about communication, and that flows into the cross-functional teams, where you basically have a team with different roles inside which have aligned interests. Basically, you have a project, we have lots of different roles, each one having their own background and skill set that they bring to the table, and now they have to work together and, most important, they have to collaborate. Even though I teach a lot about
DevOps tools and technologies, I'm also 100 percent behind the idea that DevOps is first the culture and communication: basically defining what the workflows are and how the team works offline, and then implementing that and using those technologies to actually bring that online. Azure DevOps and all its
features are basically there to map to those
offline workflows. Basically, once you define in your organization how your team
is going to work together, how those different roles are going to communicate
with each other, who needs to talk to who, how the roles are defined, where the boundaries are, etc., now you have to provide
them with some tools and platforms to also implement
those in practice while working on the project. All these features that you see here that Azure DevOps provides are basically a one-stop shop, a centralized platform, where each role basically has their own way to do their job, but also very good points of integration between the roles. From Azure Boards through
Azure Pipelines and Repos, all of these features
basically enable the communication and we're going
to go through those one by one. The first one on the Azure
DevOps platform is Azure Boards. This is where everything starts, because before you start
developing the product, before you start coding and then
it gets tested and released, you need to define what it is that you're developing, and more importantly, why are you developing it
in the first place? What is the business value? What is the purpose behind that? This is where it all starts. You need a place to first of all, document and formulate
those business values and make it transparent for the
whole team what we're developing. Azure Boards is the place where you basically get the
transparent overview of what we're developing and what status each feature or each bug fix or each part of the development of
the application is in. Where everyone can just look at it, and in addition to
that transparent view, can collaborate on those. Let's actually see that in action. >> Now we're going to
dig into the meat and potatoes of Azure DevOps. Nana told us about Azure Boards. What we want to do is we want to
get started with a new project. When we build out Azure DevOps, we have something
called an organization. Your organization is the top
level of what you're doing. I have an organization here, it's called CloudAEdwards, and I have built a
project called ADODemo. This is the demo that we're
going to be working from today. As soon as I open up my project, you can see I have a
summary page here. I've got a little bit about this project. We can see what languages my code's written in. We can see a little bit
of an introduction, and we can link here to our
readme for the project. We also have over here
some project stats. We can look at work items
created on the right-hand side, we can also look at our
repositories with pull requests, commits, and also our pipelines.
What are our pipelines doing? Are they successful? We can also see the members of the team
that are in this project. When we start a new project, we need to configure some settings. Let's go over to the
project settings. >> Every project needs a name. A description is also probably a pretty good thing to add. In this project, I've
created an Agile process. You can also do Scrum. You can do a basic process. It changes the features that
are enabled on Azure DevOps. When you do this, we can see the project administrators
here are both myself and Nana, and we can see the services
that we've enabled. We're going to go through
each and every one of the services that Nana introduced: Boards, Repos, Pipelines,
Test Plans, and Artifacts. We can also look at
our team configuration in the project settings. I've only created one team for this project just to keep it simple. But when you create these teams, you want to create groups. You might have
contributors in your team. When you think about the teams
that you want to create, think about the characters that are involved. When we look at a DevOps product, any project where we have a DevOps engineering mentality behind it, we're going to need
people like PMs and product owners and engineers. We need all these people involved
in the project so they can all have different levels of
visibility in the project. Now every customer wants to
know how is their data secured, how is this access secured? Azure DevOps is hosted on the Microsoft platform and uses Active Directory as the backbone. So if your organization is using Active Directory for authentication, you can bring in your groups, your authentication groups
into Azure DevOps as well, and create that security
across the platform. You could also add people individually outside
of your organization. Nana and I are collaborating
on this together, so we're going to be
in the same team. The other thing that
we do is configure notifications for the project. How do you want to be notified
that something's happening? We can also talk
about pull requests: who they go to, and who it notifies across different teams. If you have PMs or product owners that need to be aware of what's going on, they can get access
to certain things. We also can configure service
hooks so we can access different services outside of Azure DevOps itself and
create subscriptions. We also can configure
our dashboards. We're going to go over those
in more detail in a bit. So first off, we're going to
talk about Azure Boards today. We need to configure our project. I've created a project, and in each project we have
something called iterations. I've called this Iteration ADO Demo. But an iteration could be a specific epic or big
part of a project, and you call it an iteration. Then you want to set up
below it the sprints. I've called mine Sprints 1, 2, and 3, and you can also
see I've selected dates. Each of those sprints
are two weeks long, which is how our team wants to work, and every team works differently. Some teams work in one-week sprints, some teams work in
three-week sprints, and some even work in four-week sprints. This is a great way to create the
sprint dates that work for you. The other thing you can create in your project configuration are areas. We're going to go through areas a little bit more in depth when creating tasks. When I create my tasks, I want them under Azure DevOps Demo, and I have apps, and I also have a testing capability under my apps. I've also put in an infra area to separate out those tasks, because some infra tasks might be related to Infrastructure as Code and not the app directly. Now this won't affect how
the pipelines or code run. It just gives me some
tagging preferences when I'm configuring my boards, again just for some visibility. Then there's also the Team
Configuration that we can go through. Some teams don't use Epics. Some teams only use
features and stories. Some teams have
different working days. A lot of people work
Tuesday to Saturday, and they don't work Monday to Friday. You can configure the working days
that work best for your team. The other really cool
thing with Azure DevOps, we're not going to talk
about today is you can connect in your GitHub
to Azure Boards. Azure Boards is a really
mature product for literally tracking everything in your project
and getting that visibility. If your code is already sitting
in GitHub in a repository, you can connect that through. You can also configure all the
details for your pipelines, things such as agent pools. What are agent pools? Well, Nana is going to go into agents in more detail, but they're going to run
our pipelines effectively. We can do self-hosted agents, but we're going to use
Cloud-hosted runners today. We can also go down
all the different settings in our pipelines and configuration. It's really good to have a good look at these settings of your project and the defaults before you get too stuck into everything going forward. Now let's have a look
at Azure Boards. We're going to start
with the first top item. It's called work items. This is a little bit
like a messy playground. It literally shows you all the work items that are in this project. This can be a little bit overwhelming. There's a couple of things we can do to make it a little bit easier to read. I currently have it selected
to see my activity. Now these are items I've created, I've inputted, or I'm tagged in. I can also look at the ones that are just assigned to me potentially. I could also look at the ones
that are just assigned to Nana or recently updated, recently completed, or
even recently created. I can also create a new work item. What are work items? There are several different types. First off, we can create bugs. Bugs in our project
are any issues or any technical problems that we encounter in our project
or with the technology; we'll file something as a bug. We also have epics enabled. Epics are going to be our big milestones in our project against our deliverables. Then within an epic,
we have a feature or features of the things
that we're delivering within those sprint cycles. Our features are then
going to be broken down into these user stories here, and then each user
story will have a task, and I will show you more shortly. We also have the capability to
file an issue and a test case. We can create any one of
these work items from this screen that shows us all the
ones that are available to us, and then they will literally
populate right here. But like I said, this
is a little bit messy, so we're not going to
get into that just yet, we'll create Work Items shortly. The other thing we're
going to look at is how we want to configure this. This is what I really, really
love about Azure DevOps, column options, because that column option
may not work for everyone. I do want to see the ID
of a task, because as we look at this further and we start deploying stuff in a few minutes, I want to know what the ID is. The ID often allows me to have traceability end to end
in my DevOps lifecycle. I need to know the title and who these tasks are assigned
to, and all these other things. But I can add a column here. I can also delete a column. I have a ton of options here, as you're going to see
when we go through all the different options
and configuration, there are endless opportunities
to really customize this so it works best for you and your organization and the
project you're working on. Now, we are using an
existing Azure Board. We've set it up, we've set
it up for this project. But let's say you're going net new, a brand-new, groundbreaking project. You can import work items. This will take us to
the query board and we can actually choose
a CSV and upload it. Now, I don't have a CSV
with existing work items, but if you're migrating from maybe another platform
or another tool, or you've used a
spreadsheet in the past. This is a great opportunity
to import them and start your boards in
a much easier way. That's Azure Work Items in a nutshell. That's pretty much everything you need to know there; we can have visibility of it, but it doesn't tend to be the area we really, really want to work in. The next part in Azure Boards we
are going to look at is boards. Go figure, boards within Boards. This is a Kanban-style board that I've spent a lot of my time working in. This is probably the second most
important place for me to work. It's because I can see
everything of what's a new item, what's in progress,
what's in review, and then what's being done. I have actually created
this board that suits me. I can go into the
settings and again, we can configure how we
want this board to look. Let's look at the cards. Each of those items
are called cards. We can decide if we want to
show the ID of the task, which I do because again, we use that as an
identifier in this project. I can show Story Points, and I can show tags. We're going to go over tags
in more detail shortly. I can also configure the style
of what our cards look like. We can create a styling rule. Lots of people like
colors and helps us differentiate what is going on. As you saw with our work items, some of them are different colors because they all use
a different icon. That's really great. But sometimes, as humans, we miss stuff. Maybe I'm looking to set a rule for bugs. I'm going to enable this rule, and I'm going to give the card
a color, maybe bright orange. I can say in the field that if it's a bug or actually
we use what's called a tag. We can give it that value of bug, save it and close it
and see what happens. Yikes, that is bright orange. Now that's offensive to my eyes, but some people love it. But for me, if you start using a
lot of these different styles, it's really hard for me to
actually see what's going on. I'm going to go ahead
and remove that style just because I don't really love it. Because again, we
could use another one where we get blocked items, and we make it bright red. You could also potentially
choose a less bright color. Because again, if we
enable this red color, it's going to be pretty harsh. Let's see what it looks
like. There we go. We can see I've used the tag blocked, which we'll go over in a second, and that color just makes it hard to miss, but not really great to
look at continuously. What I do want to
use is a tag color. Every single work item that we
create should have a tag on it. I have created one already for bugs, but I want to go ahead
and add another one. I have a tag called blocked. If we're blocked on a
task or a work item, I want to go ahead and, actually, I do want to make that red. Actually, let's make it green. No, let's make it blue. We can see here, I'm blocked on this item, and it circles the tag blocked with the blue, and our bug over here that's already tagged is enabled; this makes it really
easy to differentiate when we're seeing a
blocked item or bug. I can see this in this
Kanban-style board. >> The other thing we can
look at is annotations. We use these, again, these little visualizations to help
people identify these tasks. We can disable them if really
they don't work for us. We can also put tests into these tasks from here
and create test plans, but we're going to go
into test plans in a little bit further
detail later on. Now maybe I want to
configure my board. I told you this Kanban board
is completely customizable. For what we're doing today I really like having a new, an in-progress, an in-review, a blocked, and a done column. But we could actually put a column in again for bugs; you'll see a theme today. I could go ahead and put that bug column in, and, sorry, I've overwritten myself, so a new column. Here we go: Bugs. We're going to have a bug column. The other thing we can do is we can have a
work-in-progress limit. I can have a work-in-progress limit
of five items in this column. Why is that helpful? Because if we build
up too many bugs, we probably have other
issues to tackle. We might have too much
technical debt and problems in the application or in our project
that we need to stop and assess. This stops me from adding way
too many bugs in the project. The other thing we can do is split this column into doing and done, so we could actually
give sub-columns to the bug column if that's what works
for you and your organization. I'm going to go and
take the bugs out. Now, you'll notice we're using swimlanes. I love swimlanes. Now I have a default
lane, just one swimlane, but I can potentially
create a new swimlane here. For fun, maybe we'll call it infrastructure, so we could have maybe an infra swimlane and an app swimlane; let's see what that looks like. It creates a little lane on top here, and we could make this lane sit below the application. Maybe anything that is
infrastructure goes into here and we can move it into the in-progress
under the infra swimlane. It's a good way to
delineate and give you more detail into what's
going on in your project. The other thing to note is reordering cards. Every time I move one of our work items over, it changes the column, and it actually reflects the priority in our backlog; we can turn that off, so it's entirely up to you and your organization what works best. Status badges, these are great. You can embed these into your
documentation, into your wiki, into the front page of your overview of your summary of your project to
know what's going on. It gives someone a quick view, even on a dashboard,
what's in progress, what's in review, what's blocked, literally a quick working overview. Then the general tab here covers off the working days and stuff
that we saw previously. I've showed you how to
configure this Kanban board. Now, we're working in one project, but the reality is a lot of
people work in multiple projects, so if I go up here, I can see all the teams
that I have access to. Now I only have access
to one team right now, but we can create multiple teams. Maybe you want to divide your teams up into your infrastructure team and development team, or, really, it's good to do application stacks, or projects within a project if that makes sense, into the different teams that might be working or specializing on something in that project. That's something we can do. I can also create new work
items from here. The work items tab
above was really messy, but actually we can create
a work item from here. We can create a user story,
we can create a bug. I want to go ahead and
create a new user story. I want to actually
create a change in our website and when I create that, it puts it in a state of new and it doesn't really have
much other information. Let's look at what a
user story looks like. I've put a change in website here. I'm actually going to
assign this to Nana. Nana is going to use this as her working task today and it's going to go in a state of new. Now we saw this area that we created
earlier in our project setup. This is going to go into our app area because we're not really deploying any, well, we are going to deploy some infrastructure, but this is mainly pertaining to the app. Then in the iteration space here we have the option of
choosing the sprints. I'm going to choose
Sprint 1 because that's the one we're currently in
with our working dates. I can also prioritize this. I want to give this
a number 1 priority and I can go ahead and
add a description. So I've told Nana I want her to change the website, and I can also add in the acceptance criteria. I can add that all in, she has it there as a note, and it's been assigned to her. We can also do things like add a pull request or branch
but Nana is going to go ahead and show us how to do
that later and she can link to this work item or any other work
item that pertains to her task. The other thing I can
do is add related work. I can add an existing item
that actually might already be in our repository somewhere. I can search for other items, so I can see that we've had other user stories and
other things going on. I don't actually see anything that
really pertains to what we're doing but I might actually just
need to create a new work item. I want to create a new work item
for Nana and I'm going to call it inspect current configuration. I've given her a child task. We can save and close that. We also have an analytics
tab under Boards. We have a cumulative flow diagram
that gives us a full report of average work per person
that's in progress. Right now this is showing
the last 14 days. I can go back to 30 days or 60 days, or even 180 days or do a
custom rolling period. If I have more than
one swimlane enabled, like I looked at the infra one, we could enable that. I could also look at how
many items are done. I can see the work
items in progress over the last couple days really but I can set this and get
some analytics from that. I can also look at
the team velocity. Now, we already know we don't
have a velocity in our team yet; we have work items in this sprint, well, two that are incomplete and one that is completed, so we absolutely have a lot more work to do. Now that we've looked at
the Kanban style board, let's go into backlogs. Why are backlogs important? Well, we want to talk about the process of DevOps
and coming together. We need to think about
the ceremonies we run as a high-performing
DevOps organization, or one that's just getting started. We want to look at how
we do our ceremonies, so for organizations that
don't do ceremonies, this is something you
need to think about. Whether you're running sprint
planning or backlog grooming this backlog piece is
where you'll spend your time for a lot of those
meetings and those planning goals. We need to plan what
we're going to do in the next sprint based off of how successful we were, what we achieved, and what we didn't achieve in the current sprint. Again, we have it set up here where we can see all
of the user stories, tasks, and bugs aligned
to this sprint. How do I know it's in this sprint? Because it tells me we're in
the current sprint right here. Also, all the iteration paths for these are Sprint 1, and we can see where things are aligned to Sprint 2 and Sprint 3. This literally gives us our
entire backlog for the project. Again, you have the ability
here to access other teams, you can create a new
work item from here, the same as we did before. We can create a new website work item, add it to the top, and we
can do all things with that. We can also create that
work item from here. When I create that work item
I can click on this and I can edit it, because really it's not assigned to anyone and it doesn't have any details in it. We need to think about doing that. The other thing we can do is
we can do work item templates, so that maybe if we automate
a lot of our processes, maybe when something fails, it creates a work-item, etc. We can create that
template to do that. The other thing we can do
in that work item is add an existing task or
add a new task here. The other thing we
can do is we can move this work item to an iteration. If I move it to the backlog, we see up here it's in the ADO Demo iteration path, and it's actually not
assigned to a sprint, so it's not uncommon to have a lot
of items here in your backlog. I can go back and move it to an iteration, maybe Sprint 3, because that's where it fits in what we're trying to do. You can also move the position. Maybe I want to move this down to Position 7, because it's not really a high-value work item at the moment and we can deal with it later. These are all things we can do. We can change the type
of the work item. We can move it to a team project. We can also create a
new branch from here. We have lots of options
when we're doing our backlog and sprint
planning ceremonies. We can see on the right-hand side over here that we have the planning tool, which is pretty easy to see. We've got the team backlog and you can see we have a current sprint. We have a good visual for seeing how many user stories there are, how many bugs we have in this current sprint, and how many tests. We can also see the working days, the dates of the sprint. We can go ahead and
create a new sprint. I can give start and end dates. Now, Azure DevOps is
really intelligent. It sees that we've been
working 10 working days, i.e., two weeks, and it gives us the ability to add that in. I want to add it to the existing ADO Demo iteration; it's going to give us Sprint 4. Now we have a Sprint 4 that will be listed into our backlogs
that we can add tasks to. Same as the previous
boards that we had access to we can configure those columns. We have the work item type, the title, the state,
the iteration path. Maybe we want to add another column that works best for
our organization. Maybe we want to look at risk or something else when
we're looking at our planning, that can be a really
good thing to look at. I'm going to actually add severity down and we're going to add that in, so I can see the severity. We have a couple of these
that are a three priority. We might want to move them
up in our sprint cycle. The next thing we can do is look
at analytics of our sprint. Again, we still get
that same cumulative flow diagram and we can
see the sprint burndown. The next thing we're going to go to is the Sprints feature in Boards. This is absolutely where
I spend most of my time. It is the easiest for me to read. I get full visibility of everything
going on in Azure Boards. This is a great place for PMs, your project managers
to see what's going on and it's great for you as an
engineer to create these tasks. Any product owner that
wants visibility into the product can also see
how much work is going on. This is where we drive
value to our organization. When we work for organizations
and they tell us, you're not driving value, or what
are you doing with your time? This traceability
helps give them that. That gives us an exact insight into what work we're delivering
and how we're delivering it. Here we can see that we have a
couple of tasks on the board, and we have a few tasks under each
user story and we have a bug. This breaks it down
for how we can see it. Again, we can configure
this to make this fit our needs as an organization
so we can add in tasks, bugs, and user stories. We can configure how
we want those to show. Again, we have styles, so we can add that styling
rule so that if we want to make something bright green or bright blue or light green, chartreuse maybe, we can add those styles in there as well. Each of these items in Azure Boards gives you that full
ability to configure it. We're actually going to make some
changes to our website today. That's what we're here to do. We can see that I've actually put in some document changes
for our website. I want to make some other changes to our website and I've
assigned some tasks. I'm going to go ahead and
assign this task here to Nana. >> We can estimate how much time it's going to take. I'm just going to put an estimate in for her. There are still all four hours remaining and nothing's
been completed. We can also customize
these effort hours by using plugins and other things if that doesn't work for you
and your organization. But one thing we need to
do here is add a tag. We're not doing documentation, we are doing an application deployment, so I'm going to tag it with that. There's nothing else we really need to tag it with, but we have some suggestions here. We can add it to that. We can also add website as a tag, and again, we can use these
tags for traceability. I'm going to go ahead
and save those changes. This is Nana's task, we can see
that there's a four next to it, that is our amount of effort. We can also see the tags down here. Again, we can use the style to
make them look a little bit nicer. Now here's where the sprints
board gets really exciting. We can see Sprint 1 up here. We can also see Sprints 2, 3, and 4. We can see they're in the future, and we can also look
at previous sprints. We can look at how our
work burndown rate was. We can see what was
successful and what wasn't. I can also hook into the
backlog here and see what's the backlog for
this specific sprint. It's a lot neater than
just the backlog board, and I can also see
the other backlogs. I can click into those and
drill down into those. Also, what's really good
to see is capacity. Now why is this really important? Let's say I'm going to
take some time off, or there's a bank
holiday coming up. We can put in team days off. That means that two-week sprint cycle actually gets chopped
down a little bit more. If I have some personal
holiday coming up, I can go ahead and log that. I'm going to put in here that I
have six hours of capacity each day and I might be working
on development in this project, and I'm going to go
ahead and save that. We can also look at analytics
based off the sprint. Now unfortunately, our burndown
trend is not so great, but we're going to fix that today. We can see how many
tasks we've completed, what our average burndown rate is, and what our user stories look like. It's a little bit embarrassing, but we're going to fix that. The other cool thing we can
do is we can run queries. Now if you were paying attention to all the other tabs in Azure Boards, they all have the
query option to run a query based off of
what you're doing. We're going to go into queries
and look at what queries can do. We can save queries and
these are the results. I've run a query already, and it pulls up these results. I'm looking at any type of work
item that has been assigned to me. I can see them here, and we have all sorts of ways we can use this. I can also look at the editor here, so I can see that I can add things to my query
and change my query. But right now I just want to
look at what's assigned to me. The other cool thing
is I can add a chart. I'm going to add, we
can do a pie chart, I like a pie chart, and we can
do the state of everything. We can see here that it generates
a pie chart, so I can see it. Now I can pin this to one of our dashboards and share
this with a wider team. Sometimes you might want
to look at the team burndown or the tasks in progress. This is a really
good way to see this and it updates it pretty regularly. I can see that I have six active
tasks: two are in progress, seven are new, and five are in the design phase. I need to really think about how I allocate my time, but I can use this to query and know
what's going on in Azure Boards. The next thing I want to talk
about are delivery plans. Delivery plans are
really, really cool. Delivery plans are a feature of Azure Boards that is fairly new to Azure DevOps as a full-time feature. They let you review the schedule of the stories and features that your teams are working on, and view that across your teams against a calendar view. This is really helpful when you're managing several
projects and a team. When people come to
us and say, "Well, I'm managing so many
projects and so many teams. How do I get that visibility?" Delivery plans are the way forward. You can add up to 15 team backlogs, including a mix of backlogs and
teams from different projects, as long as they're all under
the same organization. You can add a custom portfolio
of backlogs and epics. You can view work items that
span several iterations, and you can reset start dates and target dates through drag-and-drop borders, and you can add backlog items. I've created one for Agile features. If we go to the calendar here, we can see the features in our project and how they align to each sprint.
This is really great. I can then add another delivery plan for another project or a
different view of the project. Maybe I want to look at work items versus the features
and compare the two. But this gives me a
great overview of what's upcoming in our
sprints and how we're doing. This helps me create a delivery plan and see how we're delivering against our features and our epics in the customer projects. The next thing I want to
go into is retrospectives. Again, this is going to be part
of our ceremonies as engineers. Retrospectives are really
important to run after every single sprint and it's
built into Azure DevOps. It gives us the ability
to say what we did well, what didn't go well, and
what we could improve upon. Let's go ahead and create
a new retrospective, so I'm just going
to call it Retro 3. We can do all sorts of things
like make feedback anonymous. There are many times when we
want to make a retro anonymous, if there's a lot of maybe
emotional decisions that were made in a sprint, or maybe that's just how the
team operates and that's okay. The other thing we can do is select a template of how to run our retro. We could do things like
mad, sad, and glad, or what can we keep doing, do less of, or maybe do more of? We can add this into our retros, and I can go ahead and
look at the old retros, I can look at the
upcoming ones, etc. We can collect this
information here. We group the information, and then we can vote,
and then we can act. When we put this all in together, we can do a lot with this data. We can send this data off. We can export as a CSV. Again, this is where
you might want to have a retrospective
that is anonymous. You can also send an e-mail
to the team to remind him, "Hey, we have this retro today. Going to the next sprint, this might be really helpful." The other thing you can do is do
a show retrospective summary of all the retrospectives
you had and that existing wants to know
what actually happened, and have that traceability. The next thing we're
going to go into. I talked about this ability to
pin things to dashboards and create all this visibility
in Azure Boards. I've already created a dashboard here with some really
helpful features for me. So the first thing in this dashboard is I can see our build pipelines. I can see what's failed, what's been successful, and
how many builds run and when. This gives me a really
good visibility into the overall health of my build pipelines and
my release pipelines. I can also see the team here. We have six days
remaining in our sprint. I can see how many work
items aren't started, and maybe I need to as a PM, go to my engineers and say, "We need to reassess what
we're trying to deliver, and maybe move things
to another sprint, or to the backlog, or bring
in extra help sometimes." Again, we have this visibility, and I can see this
burndown rate right here. I can also configure a query
to run on this dashboard. You can literally configure this dashboard with any
of the built-in widgets that are here that suit
you as an organization. That's pretty much Azure Boards, and a couple of things to note, you can do all sorts of things
that hook into Azure DevOps. You can use Power Apps to help automate task creation
in Azure DevOps itself, maybe based off an event, or a risk, or something else. You can also hook in to third
party communication tools. You can also hook
into Microsoft Teams. So anytime a work item is assigned, you can update it in Boards, and it will then update into Teams to communicate with the team. So we want to break
down those silos, enable everyone to work
better, and stay organized. Azure Boards is literally this huge encompassing
tool that gives the PMs visibility to know what's going on. The engineers can create their tasks, can create their work items, and we can have visibility of how the sprint's going and where it's going, and we can track everything from beginning to end. That is Boards in a very big nutshell. When we were talking
about Azure Boards, we talked about extensions that
you could hook into Azure DevOps. There are so many different
extensions that you can use. Again, we talk about communication, breaking down those
silos in an organization. Now think of that scenario when maybe there's been an outage
in your organization, or you're trying to get
a feature out the door, and someone comes and taps
on your shoulder and goes, "Hey, I want an update. What's going on with the project?" By opening up communication
to wider teams, we can create that communication, open that up and not have to have someone tapping on your
shoulder, hopefully; I can't make promises. But we do have an Azure Boards capability here that hooks into Slack, so it sends the
notifications into Slack. We also have it here for Azure
Boards into Microsoft Teams. Now as we start talking
about the other features, there are ways to hook Azure Pipelines into Teams so you get updates on when your pipelines are building and when they're successful, or when they hit, maybe, a manual intervention. But there are tons and tons of
extensions for Azure DevOps. Take a look at the
marketplace and look and see if any of the extensions are
right for you and your team. Now we're going to talk about Azure Repos as it
sits in Azure DevOps. This is really just our
source control system, and it uses Git on the back end. It also offers Team Foundation Version Control if you want to use centralized version control. But it allows you to search your code, create a pull request, and put some branches in there. Now, Nana is going to walk us through all the
technical parts of it. But I just want to cover
off a couple items. With Azure Repos,
it does handle Git, but it holds your code
in a secure location, basically on the Azure
backplane in the location in which your Azure tenant is located. So for instance,
Nana and I both sit, well, she sits in Europe. I technically don't sit in Europe
anymore, but I do, but I don't. But I sit in the UK. So we're in
Europe, but we're really not. So for data residency, our data needs to sit in the UK, for Nana's purpose her data
needs to sit in Europe, and she doesn't want it in
the US or somewhere else. This is a really big
sell for Azure Repos. Why? Because of data residency; so many organizations are required to have this for compliance, obviously, and it's in their contracts when they're delivering software. Nana, take it away with Azure Repos. >> Once you create
those boards and once you have set up those tasks and you basically start this sprint, you have a bunch of bug-fix tasks and feature tasks on that board. You have to implement those, and for that you have
those Azure Repos, and this is where your code lives. As April mentioned, it's basically Git based so you have all the features that you
will have available in Git, which means you have all
the branching features, the pull requests, and so on. Basically, your project team may be working with
different Git workflows, and this can vary between the
projects, between the teams. Whatever Git workflow you are
working with in your team, you can implement with the Azure Repos feature because
this basically just gives you the tools and
allows you to implement it here on the platform level. Basically, in this case, let's take an example and let's say we have a team that works with feature branches, and for every task, or bug fix, or feature, or whatever it is, we basically create a branch. If I go to the branches, right now we just have one
main one and we can create a new branch based
off that main branch, and let's call this actually
feature/websitechange. >> You can actually link that feature to the work
items that you created, so this is the user story that
I'm going to link it with, change in the website, and I'm going to create
that feature branch. Now you see the connection
and the advantage of having one platform for all
these different workflows and all these different tasks, because I have the
boards here as well, and I have the Repos
on the same platform, I can actually have
connections between those, and I can have backlinks, and from boards I can connect to and jump into the feature and vice versa. So this is actually
something that makes the workflow very efficient. So as you see here, we have a feature folder and then that's the name
of our feature branch. If I go inside, I see that feature branch's view of the files, and in the file section, you have the list of the files in your code. So let's say I go to the source and jump into the index.html, and by the way, you
can edit the files here as well on the browser view. So if I go to Edit and let's
change something in index.html, let's say this is our
Azure DevOps demo website, and let's say I want
this to be the change. So I'm going to commit
that and this is going to go to the feature branch. So we're not doing anything or making any changes
in the main branch, and I'm going to commit that, and let's say I had worked on
this for a couple of hours and now this is in the state where I want to merge
it back to the main, I want to make it part of the deployable application
version, so to say, and the end result or end goal of any feature development or change is to release it to the end users. So I want to release it, but before, or instead of, just throwing it at the main branch, we want to make sure that the changes we've made in the feature branch are actually releasable, the code quality is fine, and basically we haven't broken anything else. Especially if we have maybe junior developers, we don't want people to just be pushing stuff to the main branch, and for that, the main feature, and you probably know this from maybe other repository platforms
like GitHub or GitLab. We have pull requests
or merge requests, so if I go to the pull requests, right now, we don't have any, but you can create a pull request from the feature branch to merge
it into the main branch. So let's actually do that right now. I'm going to click on that. You can adjust all this data, like the title and description, whatever. You can edit the reviewers, so again, a use case would be, let's say you made some changes and there's someone in the
team who is either more knowledgeable in that
specific code section or in that specific feature, like they know a little bit
more technical detail. So you can add them as reviewers,
maybe senior engineers, and they can review your code
and basically decide, okay, this can be approved, like
the code quality is fine, and this can be merged
back to the main branch. So let's actually
create our feature, our pull request. I haven't added any reviewers, so let's actually add the reviewer. Now once April sees my pull request, she can look through the code. She can see what changes
I have made, the updates, the commits, so she can have all this information in view here, and if she, let's say, decides this is actually not quite the standard, or something is missing, or the code quality isn't right, whatever it is, she can actually add comments and feedback, and this is a way of sharing know-how and
knowledge within the team. So you can give feedback and teach the people who developed this: hey, by the way, this is not really correct; the next time, this is our standard when coding, so let's do it this way. So we can add something here. In this case it's me, so I'm going to add a comment here. So basically you can communicate
within those pull requests, and that's also, again, going back to the collaboration and communication between the teams. You have all that communication tooling merged inside those features as well. Now if April, awesome, so she approves the pull request, then it can be merged into the main branch. I have a message that says, can we change the name, please. Awesome. So once April has approved the pull request, I can actually complete the request, meaning I can merge it back into the main branch, so let's do that. Here you go, and you see that the pull request is merging, and we no longer have our pull request open, and if I go back to the Files
and I'm in the main branch, we're going to see
the change that I did in the index.html file right here. This is basically how you
can use the Azure Repos. As April mentioned, the code is actually securely stored as
well on the Azure Cloud, and you can use all these features. Obviously, most of
you are going to be working from your local
machine with the code editor, and you have all these features for nice visibility as well as probably
the most important one being the pull request for
the communication and really enforcing the code
quality in the projects. >> Thank you Nana for
covering off Azure Repos. We're starting to see how the different facets of Azure
DevOps are coming together. Now we're going to kick off Azure Pipelines. This is the CI/CD tool for Azure DevOps, and this is what a lot of you
have used potentially before, and hopefully you're
going to continue to use. So Azure Pipelines are really cool. You can literally
deploy to any location. You can deploy on-prem or to any cloud, whether that's Azure, AWS, or GCP, and especially if you're using a hybrid environment, you can deploy to both your
environments at the exact same time. The other thing you can
do is test on Linux, Mac, and Windows runners. So you can deploy against literally any operating system for any app. You can stage your
environment releases. You can do deployment
methodologies where you have automated deployments or maybe
even manual deployment approvals. But we're going to go through this step-by-step, and Nana's going to run through it with us and show us everything about Azure Pipelines. >> Now we've come to the
Azure DevOps feature, which is my favorite, and these are the pipelines. So once we have
created those boards, once we have implemented
those features, obviously we want to deliver that
code change or that feature, bug fix, whatever we implemented, all the way to the end user. So we want to basically
create processes or create a streamlined process
that takes our code changes, tests them, makes sure that we have no security vulnerabilities and issues in our code or in our artifacts, and even deploys them to the environment. But we don't want to deploy directly to production; we want to deploy in stages, so we deploy it to the
development stage. We do some more extensive testing, and we deploy it to the next stage. Do some more extensive
testing and then go all the way to the production
environment where the end-users can now
see those code changes, use that feature or
benefit from that bug fix. So this is what we're going to build in this section. So we're going to go from having no pipeline. If we go to the repository code, we have no pipeline yet, so we're going to start from scratch. This is a previous run. So we're going to build
a pipeline step-by-step. We're going to start
with the CI section. So first we're going to build the
part of the pipeline that takes our code changes and packages
it into a Docker image, and then also scans that image to make sure we don't have
security vulnerabilities, and once we have
that image artifact, we're going to take that and
deploy it to production in stages, development, testing
and production stages. So let's start with that.
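Before we build it piece by piece, here's a minimal sketch of the overall shape we're aiming for; the stage names and echo placeholders are purely illustrative, not the exact demo code:

```yaml
# Illustrative outline of the pipeline we're about to build
trigger:
  - main
pool:
  vmImage: 'ubuntu-latest'
stages:
  - stage: Build          # CI: package the code change into a Docker image and scan it
    jobs:
      - job: BuildPushImage
        steps:
          - script: echo "build, scan, and push the image"
  - stage: DeployDev      # CD: deploy the image artifact to the development stage
    dependsOn: Build
    jobs:
      - job: DeployToDev
        steps:
          - script: echo "deploy to dev, then run more extensive tests"
  - stage: DeployTest
    dependsOn: DeployDev
    jobs:
      - job: DeployToTest
        steps:
          - script: echo "deploy to test"
  - stage: DeployProd     # finally, release to the end users
    dependsOn: DeployTest
    jobs:
      - job: DeployToProd
        steps:
          - script: echo "deploy to production"
```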
The first thing we do to create a pipeline is the simplest step, which is to create the file called azure-pipelines.yml, and the name itself is a trigger for Azure DevOps to recognize that
we're creating a pipeline file. So it knows to trigger
the pipeline immediately. So I'm going to create an empty pipeline file, actually, and I'm going to commit this directly to the main branch. >> I showed you previously that
we have this edit view here, basically just a browser editor, but what we're going to do is actually edit our pipeline's YAML file and write the whole pipeline from the Pipelines section, so I'm going to click here to edit, and we're going to start here from scratch. We see the same edit view here, but we have something very interesting
on the right side, which is the tasks. This is basically the list of already prepared
packages of code that we can add into our pipeline
that do those tasks, for example building a Docker image, pushing the Docker image, deploying to a server, deploying to Azure web
application, and so on. So these tasks are
actually going to help us and take a lot of work off from us. So instead of us
just going there and scripting different Linux commands, we're going to use those tasks that we can parameterize
and pass parameters, to build our pipeline. So first of all, let's do
the first things first, and I'm going to define the trigger. The trigger is very simply just what triggers this
pipeline. In our case, we're going to say that any changes in the main branch will
trigger the pipeline. And again, we don't need any webhooks here. Whenever something changes
in the repository, this pipeline, whatever
we're going to define here later will
trigger immediately. The second thing
we're going to define is something called a pool, and pool is basically where the
pipeline tasks will be executed. So we're going to define a VM image, and this is something
that April mentioned as well at the beginning of this video: the tasks that we define,
building the Docker image, pushing it to the repository, or deploying somewhere
have to run somewhere. We need some environment, a machine,
that will execute those tasks, and very conveniently
when you're actually starting with your DevOps
and with the pipelines, you don't want to go through
the trouble of setting up your own servers and agents
and connecting that to the pipeline and then having that server machine available
for executing the tasks. So conveniently,
with Azure Pipelines, you have some servers
already available, a pool of servers,
that will execute your tasks, and as April mentioned, they're
running on Azure Cloud, and this is what we're
going to leverage here. So we don't have to go through the effort of setting up
anything at the beginning. What we're going to define here is what machine and what kind of operating system we want to
execute those pipeline tasks on. In our case, let's choose the simple ubuntu-latest
operating system distribution, and now we're going to know
that our pipeline tasks will be executed on a machine which has
ubuntu-latest installed on it, which means we can execute
any Linux scripts and commands in our pipeline.
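At this point, the top of the azure-pipelines.yml file would look something like this minimal sketch:

```yaml
# azure-pipelines.yml
trigger:
  - main                      # any change pushed to the main branch triggers the pipeline

pool:
  vmImage: 'ubuntu-latest'    # Microsoft-hosted agent running the latest Ubuntu image
```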
Awesome. Now let's actually start defining the different parts, or stages, of our pipeline. Stages are hierarchically the uppermost level that lets us define what stages we're
going to have in the pipeline, what steps we're going to
take in the pipeline to take our code changes and walk
them through the testing, the building and packaging, and so on. The first stage is going to
be building the application, and this is going to produce
the application artifact, which is going to be a Docker image, which has already become a
standard in the industry, so that's what we are
going to be building, and within the stage we have multiple jobs, one or more jobs. Let's define the first
job that is going to build and also push the image, and we're going to see the
code for that in a second. So, Build Push Image, and in order to execute the job, we have multiple steps. So this is basically the
hierarchy of defining your jobs, different tasks, or
steps of the pipeline. Once we have that
structure, basically, you're good to go we can then
fill out the rest of the stuff. Now we've come to the point where
we have to define the task and task is like the basic building
block of the pipeline. This is the smallest unit and
this is just to give you an idea, task is like Docker build, so building a Docker
image is a task. Pushing a Docker image is a task, so small unit of work, like one small step in
the pipeline, is a task. So let's actually define
that but as I said, we don't have to script it out ourselves with
some Linux commands, we can use existing tasks which
actually make this easier for us. So in the tasks, first of all, you have a long list of those
but we can search for Docker, and let's see what
we have available. We have a couple of those. I'm going to choose the
Docker one and you can actually choose different
commands like you can do Docker log in from here, Docker starts whatever and you
have buildAndPush together, which makes sense
because you usually want to do those steps together in the pipeline, and it makes sense
to put them together. So we have the buildAndPush
command selected here, and the other thing we need in order to build a Docker image is obviously a Dockerfile, and we
actually have that already. It's a super simple Dockerfile that just copies the
application contents into the image and then starts
it with an entrypoint script. As for the rest of the stuff, let's actually fill it out. The first thing we have is
where we are pushing the image. So once we build the image
using the Dockerfile, we have to push that
to the repository, this is going to be
a private repository, because we don't want to share
our images with everyone, and that private repository
needs to be authenticated with, so our pipeline needs to knock on the registry or repository
door and say, please, let me authenticate and let me
push this image and leave it here. So we need to authenticate with
our Docker private registry. So for that, we have the field here which
says Container Registry. So this one actually will be
the connection to the registry, and the second one is going
to be Container Repository. So this is just to give you the differentiation
what the difference between registry and repository is. Registry is the service itself. So the Docker container
or images storage service like GitHub or from Azure
Container Registry. Different platforms have
these available from nexus, for example so the service itself, the platform that has this service
available is the registry, and for each application, you can actually
create a repository. So when you're building the
same application image, you're going to produce
different texts. So basically you have
a repository for that specific application
where you store the images of the same application
with different texts so in the company went registry, you may have multiple repositories
for different applications, each one with their image texts, so that's the differentiation
and we need to provide both. This is the repository that we're
going to use and as I said, we have to first connect to it
because it is a private repository. So usually you will have
some Docker login command and you would have to
provide a username and password credentials to
log in to the system, and we could absolutely do that. But in Azure DevOps, we actually have a
better way of doing it, and this is actually a
very interesting concept and really interesting
feature because this is super useful generally in the whole pipeline because when
we're creating a pipeline, we're orchestrating a
lot of different things, we are connecting and
integrating also with a lot of external platforms and services, one of them being a Docker registry, we always going to need that. So we push the image
to that registry, and other obvious one is whenever
we're deploying, for example, our built artifact to
environment like a server, we need to connect
and integrate with Cloud Platform or on-premise
wherever that server is located maybe we're
integrating with some test platform to send the
reports of the tests or whatever. So we need a lot of different
integrations with external tools, and for all of those, we actually have this feature in Azure DevOps called
service connections, and this is actually
something that allows us to integrate and connect to
those external platforms. So if I go to the project
settings, so right here. >> In the service connections, this is where you can actually
add a new service connections, and this is a list of all the platforms that you can
connect to, like Bitbucket. If you are using code from a different source then Azure
code repository like GitHub, BitBucket, and whatever, you
can actually connect to those. We have AWS for
Terraform and lots of difference other platforms that you may need integration
in your pipeline. In our case, we actually have gone ahead and prepared those
service connections already. We have this one which
is for Docker registry. What service connection
basically does is connects to that external platform, and without you having to store
your username and password, just taking that account
connects through it, and now you can use that
connection in any of your pipelines to push the images so we don't have
to do Docker login again. Moving back to our pipeline, right here, this is the
service connection. That's the name of our service
connection here, ACR demo. This is the one we can very
conveniently select here, and that's going to be our
connection to the Docker registry. The second one here is the
name of the repository itself, and in our case, our repository is called
adodemo.azurecr.io. With this parameters
actually, and as I said, you can parameterize each task and pass any parameters
and different values. For example if you had
Docker file somewhere else and you needed some other tag, you could configure
all of that here. Once we're done with
the configuration, and you have to pay attention and make sure you click
on the right place. Right here we want
to end the task and this is going to be automatically
converted to YAML code, and that's what we end up with. This is the task with inputs or
parameters that we configured, and Docker task that basically, under the hood, does
all these Docker build, Docker push commands, and whatever, takes all these parameters and will hopefully give us
exactly what we want. In my case, I actually want
to add the tag specifically. Let's add that as well
so it's more transparent and clear. Very good. This is exactly what
we have configured, so what does this value
actually stand for? Whenever you see something
starting with dollar sign and vary within the brackets here, it is actually already environment variable that Azure
demos pipeline keeps you to use. This specific one is the build ID, so every time the build
ID will be different, so we're making sure that every run of the
pipeline will produce an image with a unique image tag. You can actually find a list of all the environment variables
that you have available for your pipelines that you can
actually use wherever you need. This is the one we're
using for the tag, and that is actually all we need to build and push our image
using the Docker task. Now, another thing we want
to do with our image, and this is what I mentioned
quickly previously, is usually a good security practice, especially when you're building
production grade pipelines, is to always make sure your images
do not have security issues. There are tools actually that
help you do image scanning. What image scanning
does is basically goes through the layers of the image. If you know the Docker image, for example, is composed of layers, so it goes through every
layer and it tries to find out if there are any known issues or vulnerabilities in the packages and libraries and tools that
are inside those layers, and basically warn you about those. If they are, then you get a recommendation
from this tool that says, use this version instead
to fix that issue because that known vulnerability was
actually fixed in that version. As a next stage actually
in our pipeline, we're going to add image scanning. We're going to actually add a
task that pulls the results of image scanning from the
Azure Container Registry. Now, and this is also
very important thing and also important reason why we're using Azure Container
Registry in this case, is that Azure Container
Registry actually has a tool called
Microsoft Defender, that it can integrate
inside the registry, that can do the automatic
image scanning whenever we push the image to the registry, which is awesome and
super convenient because as soon as the
image lens in the registry, the scanning will be
done and we will get the results in that registry. Now in our pipeline, however, we don't know what
those results are, so they are sitting in the registry. We have to explicitly
pull those results of the scanning and see in the pipeline whether
any vulnerabilities, any security issues were
discovered, and if there were, obviously we want to break the
pipeline because we don't want to release an image that has
any known security issues. First we want to fix that
and then maybe we can rerun the pipeline and release the image. We're going to add that
as a next stage here. That will go ahead and pull those image scanning results and give us the results
in the pipeline. Let's actually call this ScanImage, and we have the jobs. Let's call this one ScanImage. As you probably already thought, we are also going
to use a task here. This is how we're going to do
the image scanning in our case. I'm going to go here and
open our code repository. Right here we have basically a PowerShell script that is from the official
Microsoft source, basically what it does inside
is goes and carries the results from the Azure Container Registry and gives it back to the pipeline. We're going to execute
that PowerShell script, and for that we're going to
use a task called Azure CLI. This is going to be
just a simple task that is there to execute any
scripts that you have. Just like any other task, we have to provide some parameters. Obviously, we want to
tell our our script, where is that repository? Which registry repository it should connect to and get the results
from and which repository? We're going to provide
all these values here. First of all, we're going to
choose the service connection to the Azure platform where we
have our Container Registry. The second one is going
to be the script type, so we're executing PowerShell, then we have the script location, which is going to
be the script path. You can also provide inline scripts, so directly scripting it here, and let's actually provide the path. We're going to take another
environment variable that we have available
from our pipelines, from Azure pipelines itself, and get injected here. We have to provide an
absolute path here. We're going to do source directory. This is the directory
of our project. Then inside the script folder
we have these scripts. There you go. This is
the absolute path. We have to provide arguments
to the script that says basically which image registry
we are connecting to, which repository we
want to scan the image, and which image
actually want to scan. What's the tag of the image? I'm going to actually copy
those, that's right here. The registry name is ADODemo, so that's how the registry is called inside our Azure
Container Registry, and then we have the
repository parameter, and this is actually
this value right here, and we have the tag. We're going to obviously
use the same value. That's going to be the
image that we want to scan, the last build and pushed image. Now this is our action preference, so we're going to say in case these errors basically
want to stop the pipeline. With this configuration,
I'm going to put my marker under the steps and I'm going to add the
code, and there you go. We have the YAML
representation of that task. Again, we are using Azure CLI task and this is all the parameters
that we have passed, and now with this configuration, we're going to be building and
pushing our image, and afterwards, the image will be scanned automatically in the
Azure Container Registry, and with this stage
or with this task, we're actually pulling
those results and fetching those results here if the
results were unsatisfactory. If we found some vulnerabilities, then we're going to
fill the pipeline. Let's actually save this, and I'm going to commit this
directly to the main branch, and this will actually
automatically trigger the pipeline as we're going to see. If I go to "Pipelines",
there you go. You see that queued. >> Icon, and we have two
stages, BuildApp and ScanImage. Now our pipeline is in progress. Let's wait for this to finish
and let's see the results. As we see both stages were run and we see that BuildApp
was actually successful. If I click inside, we're going to see the logs of all the tasks that were
executed within that job, within the stage, within the job. These are actually a bunch of
automatically injected tasks. Do not be surprised if you
see a bunch of tasks here. This is the one that we defined, the docker one. This is green. It means our build and
push command or task was successful and we
were able to actually push our image to the repository. Somewhere in the logs we
will also see the tag that was used to take our image. This is the build
number which is 297. Later we can check
that we actually have that image taking the repository. The second one that you see
actually failed, the ScanImage. Let's actually see what the
issue is and as you see the logs are actually
very handy when we are doing
troubleshooting like this. In the Azure CLI tasks logs, we can actually see the logs
and we can see the reason why it failed in our case is because there was actually
image vulnerability. Let's go actually back to
our image repository and check all of that in more details from the Microsoft Defender
what their issue is. >> I'm logged into
Azure portal and I'm in my Container Registry
for this project. Now the reason why we're using
Azure Container Registry is because we want to
enhance the security and have full traceability. The first thing I
want to do is look at why our image actually failed. I'm going to go down to services, click on our Repository and click on the repository that we've
connected to in Azure. Now I can see a list
of all the images that Nana has pushed or I've
pushed from our pipelines. I can see the tag. Now if you've been paying
really close attention, we had tagged number 297. I can look at the tag here
and go image tag 297. I have full visibility
into what's going on. I can see everything that's
going on with the image, what's happening with it. I can track that
through and through. Now the other thing we've done
because we want to create full traceability and visibility
into our organization, we also need that security element. This shows our Container Registry, but I actually want to go and
look at why this failed and the more detailed logs that we can get are
actually going to help us. I'm going to go into
Microsoft Defender. The reason why we're
using Microsoft Defender explicitly in this case is
because it's multi-Cloud. If you've been paying attention, we're using Azure DevOps, which allows you to
deploy to any Cloud using any code or even On-Prem,
literally any platform. We want to have that
visibility end to end. Microsoft Defender, yes,
it is a Microsoft product, gives us visibility into
Azure, AWS, and GCP. We can look at our security posture across all those
different organizations. We can also look at our regulatory compliance
workload protections. But we really want to focus on what's happening in our
container scanning. If I pull in our registry again, I can click on Microsoft
Defender for Cloud. We can see exactly what's
going on with our registry. Now we have a couple issues
that we need to address. Two of them are networking issues, which we can add tasks
for in our board, actually create a bug and look to
address it later in our sprint, depending on the severity. Now we have a couple high severity
items that I want to look at. The Azure Container
Registry images have some vulnerabilities
that are found and they're quite high on severity list, so it does list them out. We've had two issues with, it looks like security
updates in our Linux images, and it looks like applies
to 37 of our 37 images. It's probably a pretty good thing. We need to create a task for and get a bug and a given a
priority 14 pretty quickly. But it tells us what version
of Linux are affected by this. We can also see the severity. It is a vulnerability
gives it a score. Then we have remediation. It gives us the patch
that we need to apply to our images to do this. Also the fixed version
that it applies to, so we can read up on potentially any other issues
that image patching could calls. But it also gives us a
list of every single one of our images that
have this vulnerability. The same will apply
for the other one. The other thing we can do to fix
this is we can create the task, which is what we can
do to move forward. But I'm going to hand
this back over to Nana. She's going to go ahead and take
us forward with our pipelines. >> Awesome. Let's go
back to the pipeline. As April mentioned, we actually did find an image vulnerability
that in this case can be handed back to the
team and we can make a bug-fix task or give it
back to the team to fix it. In our case, let's actually
take this part out and we're going to continue with the next step or next
stage of the pipeline, which is after building and pushing
the image to the repository, we want to take that
artifact and deploy it to the first stage or
first environment. In our case this is going to be a Dev stage to basically
make sure that our application is
successfully deployed and it's working and we can access
it from the browser and so on. Let's do this in this part. I'm going to create a stage. As we already know. Let's actually call
this, deploy Dev. We have jobs. Let's call the job
as well, deployDev. Again, we're going to use
or take advantage of a task here that will help us to
deploy to Azure web deployment. We can do Azure web or
Azure App Service deploy. This is a task that will
basically let us connect to Azure platform and deploy
our application or our image actually as a
container to that environment. Azure Resource Manager
is the connection type. The Azure subscription is
the one we've used already. This is the Azure account, basically, that we're connecting to. This is an important
one because we have to decide what service
type we're using. In our case, we have a container
that we want to deploy to, in our case we can choose
the Linux operating system. We are deploying web
application as a container. That's our service type. We already actually prepared multiple services for
different environments. We have Dev test and prod. The first one will be Dev. This is where the service
that we're deploying our container as application to. Now, obviously, so this is the target environment where
we are deploying our artifact. But we also have to tell the pipeline or the task
where to get that artifact, so where to pull that image from. This is again the registry or our Container Registry and repository
name tag for the image. We're basically
giving the address of the image and let's fill
out those details as well. By the way, for more information, next to each parameter, you have this full description of what values are expected
in each input field. You can check that here as well. This is going to be our
Container Registry name. You see our demo. We have here and we have
the image repository. Again, you can trick here
what value is expected. We have the repository name and the tag is going to be the
same that we've used here. The same image that we just built, we want to pull there and deploy. Again, going back or taking my cursor back here,
I'm going to add that. That's the task for deploying our application to
the Dev environment. Now there's one more thing that
we can actually optimize here, which is we're using these values for the container
registry repository Docker tag in multiple places. What we can do is actually
in order to avoid repeating the same values and
we're going to actually need those for deploying to
test and prod as well, we can extract them as variables. Right here I'm going to
define a variable section. The same way as pipeline or Azure, make some variables
available for us. We can actually, in
addition to those, we can define our own
environment variable. Those will be accessible
for all the stages, all the tasks in the pipeline. These are global variables. First let's define
the image registry and this is going to be ECR demo. Let's also create a variable
for image repository. That's going to be the value. Let's also put this value in
an tag environment variable. Now we can just go
ahead and reference those global variables that we just defined instead of
those fixed values. Just like you referenced the already existing
environment variables using dollar sign and brackets, you can use the same syntax
for our own variables. That's the syntax.
Again, dollar sign, brackets and variable name in here, we're going to add the tag. We have those values
used here as well. I'm just going to replace
them here as well. We have the registry and the different
attribute names for the same values and maybe
sometimes confusing, but that's why those
information icons are there so you can
check what value is expected for each perimeter. Awesome. We optimized our tasks or our stages and the pipeline code
not to repeat some of the values. Again, you can do this actually for each and every value
here if you want to. We can also substitute the values
for the subscriptions as well, or you can just leave them as well. You can optimize and use variables
for the values you want. With this actually we can also already test our pipeline and see if it deploys
to our Dev environment. To do that, we're just
going to click on Save, again commit to the main branch. This will trigger the
pipeline execution. >> Let's wait for those
edges to complete. Going back to the pipeline, there is one thing
that I want to note, and this is actually
literally confusing, threw me off a little bit
and I made a mistake, so make sure to correct that as
well before running the pipeline. Basically here we have
the place where we substituted the fixed-coded
values for the variables. We have containerRegistry here when we are executing Docker
buildAndPush command. This is actually the value for
IMAGE_REGISTRY_CONNECTION, so the service connection
name for Docker registry. This is the value for it. I just renamed the
variable name actually to IMAGE_REGISTRY_CONNECTION
with this value. Down here we have a DockerNamespace, which is actually
the IMAGE_REGISTRY. So I created its on
environment variable for that. The value of that is
ADODemo in my case, so make sure to set
those values correctly. This is the registry
connection name, and this is the
registry name itself. The repository names are the same. Make sure to make
those adjustments as well just like I've shown here, and with that change, we can actually run the pipeline. >> Now that our pipeline
has run successfully, let's take a look and see if our Dev environment
has actually deployed. Now I can see here in
the Azure portal is, and I can click on
the Dev environment and it pulls up our website. I can see that successfully
we've been able to deploy the changes that Nana
put in in her task. I also want to go into
the Dev environment here and we can look at the deployment
history into this environment, and this connects directly
into Azure DevOps itself. I can see that we've
deployed this now. [inaudible] production
because in the Azure sense, this is a production environment, but this is really tagged
as our Dev environment. We could potentially
change that wording, but that just sees it as a production elements, so
don't worry about that. This is our Dev environment, as you can see here, a ADODemo-Dev. But if I click on this build, this takes me back
to her exact build, that brand and I can
see those successful. We can see end-to-end
with that traceability of the build to our image
and to our website. Back over to Nana now
as she builds out the rest of the pipeline
for the other environments. >> Awesome. Back to the pipeline, now we actually have a
pretty good structure. We have the two main stages
of building and deploying, and the rest of the deployments are actually going to
be pretty similar. What we're going to
do is we're going to actually copy this whole stage, which deploys to Dev, and as the next stage, we're going to deploy to
the test environment. Because it's mostly the same logic, we're just going to
substitute those values. We're going to call this stage
and test the job DeployTest. We're using the same task that is deploying this time to Azure
demo test environment. That's the web application name, but we already have existing on our Azure Cloud and all
the other stuff will actually stay the
same because we are using the same service connection, we're using the same registry, so we want to take that exactly same image
that we've deployed on Dev. Once it's been tested and
made sure it works there, and as Afril showed, we were able to see the
application from a browser. We can actually take
that image now and promote it to the test stage. This is what we're doing here. Now, what is the actually
purpose of having a test stage? Most cases between the Dev and prod, you have some intermediary stages, could be tests or
staging or whatever, and it has its own use case. In this case it could
be, for example, if you have manual testers, because you don't have
so many automated tests for your application, you actually need someone
to manually go and actually test your application. This could be an environment
where the testers would go and actually test those features
and bug fixes that were deployed. If they find something
is not working, they can assign it back
to the Developers saying, your feature has this issue, please fix it and then you go
through the same life cycle of the Developing and releasing
that Developed code again. Now in this case, we could automatically, after deploying to
Dev, deploy to test. However, a realistic use case could be if manual testers
are working during the day, obviously, we would want
to automatically update the test environment with
a new version certainly. Instead, we may want
to do it maybe once in a day or once in two days
depending on your workflows. For this test stage, we can add basically a trigger
to that stage that says only deploy to test maybe at night when
nobody is working and we're not interfering with anybody's work by just randomly deploying a new
version on that environment. We can actually end that
trigger to our jobs, so gatekeep our stages
and add a condition that says only execute this task or this whole job if
this was scheduled. We can do that very easily using
a condition as I get just here. I'm actually going to copy the whole condition and I'm going to explain what
this means right here. Basically what this
condition does is it says if the previous
one was successful, so only if Dev deployment
was successful, they want to deploy your test, but not immediately only if
it was actually scheduled. We can then define when the
schedule will basically run. You can actually add that
schedule logic here using a cron job expression, or I actually find it better
to define it in the UI. What I'm going to do
is I'm going to go to the pipeline and open
and another edit mode. Right here in the edit mode you
have something called Triggers. This is where you can actually use
the UI to define the schedule. I find it easier first of all, and second of all, you can
actually use local time. If you add the schedule right here, you can decide when is
this build going to run, and you can choose
the days of the week, and you can use the time. For example in our test
case for a test deployment, we can say, you know what, execute this at 3:00 AM on
UTC time or in my local time. In my case, I am actually
in Central Europe, so let me choose my
local time, this one. We can say execute this Monday
through Friday at 3:00 AM. Again, when deploying it, we're not interfering with everybody's work so the next
day went the testers come, they have new version to test. In our case, we can
actually test this, so I'm going to use the
time that we have now. It's going to be 18, and let's give it two minutes. As you see, the configuration
is super-simple, that's why I prefer the white part better in
defining the cron job, so Let's save this. In two minutes, basically, the pipeline should get executed. While we are waiting for that
one to trigger the pipeline, we can actually keep on editing
and define the next stage, which is going to be deploying
to production environment. This is going to be
pretty much the same code because we're using the
same web app deployment, but we're deploying to
a prod environment, so we're going to change the
namings here everywhere. In this case, we don't need a scheduled deployment
because we're going to have a difference gatekeeping
here for production. As I said, we're deploying
to a prod environment, everything else is the same. We're taking the same image
that testers basically tested, basically decided it's fine, and I want to promote it to prod. However, again, prod is very
sensitive in many projects. Probably in most projects
people don't feel confident or comfortable just
deployed to prod automatically. There needs to be some
decisions made and lots of testing and security checks and management decision basically
involved in deciding, you know what, now we can release that version of the
application to the production. Also very realistic use
case would be to actually manually approve any deployment
to the production environment. This is going to be similar to the condition that we have here is a condition that
says, you know what, we are not automatically deploying, we are going to add a
manual approval job. In this case, it is
actually a separate job. What I'm going to do is
right before DeployProd, I'm going to add a
job and let's call this ApproveRelease and let's do steps. >> We're going to use a task
for which one actually? I'm going to do manual validation, so that's the task. We can basically notify users. Whenever the deployment is ready
to be deployed to production, we're going to notify these people. They get an e-mail, a job is waiting for your manual
approval, please validate it. We can also write
instructions here saying, please validate and approve
deployment to prod. Now, when we set a
manual validation step, obviously we don't
want to wait forever, but we also want to
give those people time enough to validate
and prove that. We can have some timeout, and when the timeout
is basically over, either we decide what the
pipeline is rejected, so automatically we
don't do anything, we don't deploy it to
prod or we resume. Depending on what we want to
do in our case, let's see, if nobody approves it, then we reject the pipeline
after the timeout is over. Moving my cursor here and
let's actually add this, and let's define the timeout. Let's define how
much time do we give the reviewer to
manually approve this, and after that, the
pipeline will abort. We can define time-out
on a job level, so we can say this job will
have a timeout of three base, so that's how long we wait, and we can also define a
timeout for the task itself. We can do that on the task level. Let's say that the task
has a time up one day, and there is one more thing
that we have to adhere, which is where this specific
drug gets executed, because otherwise it's not going to work if we execute it
on a Linux machine, and this is going to be a server. Basically, this is just going
to be executing in a sandbox where it doesn't need actually
lync server to execute, so we have to provide this parameter
here so it actually works. With this configuration, we should have deployment to Dev
happening automatically, deployment tests happening
once a day on the schedule, and then deployment to
production only happening if someone manually actually
proves that deployment. With that configuration, we can actually save this pipeline, and since the time of
schedule already past, we can also go back through
the pipeline and see that it was the
previous one actually. This one, so three minutes ago, so this was the schedule that we
defined was actually executing. I can actually cancel this one and
just run the current pipeline. Let's actually see what happens. First of all, we have
four stages now because we added deploy test
and deploy prod. Now, once the build
app is successful, it should automatically
deploy to Dev, so this stage will
automatically get executed, and after that, it will check
whether the time that we defined in the schedule or in the scheduler is actually
equal to the current time. If not, it will actually skip the deploy test and deploy
prod stages altogether. Let's actually wait and
see that happening. As I said, this is going
to skip the deploy test because we haven't configured
scheduler for that yet, so what I'm going to
do is, I'm going to actually cancel this one again. I'm going to create a
new schedule so that our pipeline actually
goes all the way through to the production so
we can manually approve it. Going back to the triggers, let's actually define again time that will trigger the pipeline in, let's give it one minute. Let's go. As soon as 24 hits, then we should see a
pipeline triggered. Let's wait for that a little bit. We have 624. Let's see. There we go. Our
pipeline is running, and this will actually execute
the deploy stage as well, because it was triggered
from the scheduler. Let's wait for that to go
all the way to deploy prod, where we'll have them to
manually approve the deployment. As you see here, the deployed app and deploy test stages were
executed successfully, and once it got through
the deploy prod stage, now is actually waiting
for the manual validation. You see here it says zero from
one menu validations passed, and if I click directly inside, it will actually show me the pop-up where I can either
reject with a comment. Let's say in a realistic scenario, the test environments was deployed, the test was tested, then a couple of discussions
whether to release it or not, and once the final
decision was made, it could approve or wherever it is rejects or resumes the pipeline. Optionally, you can
provide a comment that says why you reject
or why you resume, and we can actually
click on "Resume", and basically the approved release tasks that we just approved
will go jump to the next part, which is the point to product. Let's wait for that one as well, and you see it's successful, so now let's actually go and see the application deployed
in the prod environment. >> You can see we've
successfully been able to launch our website to our
production environment, so we deploy the exact same website
into different environments. Why? Because we want to be
able to test it whether we're running unit tests or UI tests, etc. and we can run that without
going to production directly and then very smartly setup or
pipeline to protect production. We can test our code or
image all along the way. We can also go into our
production environment in Azure, we can see where it's
been deployed to. The other thing we can see is again that build that she just
did, if I click on it, it takes me to her
build that shows us successful and I can see how
everything went through. We successfully were able to
deploy that through the pipeline. Back with Ananta to
finish off pipelines. >> Awesome. We have actually created a full pipeline that basically
deploys on three different stages. Based on or depending
on the environment, we have the respective
gatekeeping to decide whether we are promoting
to the next stage or not. Now, let's actually do the last stage or last
task in the pipeline, which is very interesting
from the concept of deploying to multiple
environments from your pipelines. Right now we are
deploying to Azure Cloud. What we're going to do
now is add a stage that actually deploys to
another prod environment, which happens to be on AWS platform. We're going to deploy to a very simple ec_2 server,
which already exists. I have it configured already, already has Docker installed on it. We can actually deploy our Docker
image to that environment. This is to showcase basically that you can take the same pipeline, you can go through the stages, and then finally, when we have two different
environments you can deploy to both or even to
more environments. Let's actually do that as well. Going to the edit
view of the pipeline, I'm actually going to grab the
whole deployed to prod stage. Because we're going to reuse
some of it and copy it here. Let's actually rename this to Deploy Prod Azure and its
call this Deployed Prod AWS, and we also wanted to have a manual approval for deploying
to AWS because it's also a pot environmental want to
protect it and make sure that no automatic or casual accidental
deployments happen there, so someone actually explicitly
decides to do that. That part will stay the same, we have the same manual
validation task, but this is the one that
we're going to change now. Instead of deploying
to Azure platform, we're going to deploy
to ec_2 server. The way we're going to do that is
actually using the most basic, simple low-level SSH command. This means actually that you can use the same code that I'm going to
show you now for any other server, whether it's on AWS or
on-premise or whatever it is, it is actually going
to be the same code. Let's actually do that.
That makes first of all. We have steps and we're going
to do this in two tasks. First task is the low level
script that I mentioned of SSH into the server and basically
and executing the command. We're going to use the script, which is actually something
that we haven't used yet. This is the first time and this
is a substitute for a task. As I said, task are ready scripts associated with
ready packages of scripts that, under the hood, have
this low-level commands, and this script is basically
just basic Linux commands. This is simple YAML syntax. If we want a multiline script, we can use a pipe. >> Pipe is basically
for YAML syntax for multi-line script or
multi-line string. On this level we can write
multiple lines of our commands. The command that we're
going to use is ssh. We want to ssh into the EC2
instance that as I said, I already have available. For that we need a
private key that we pass using this minus I option. Then we're going to substitute
that with the private key. Now I'm just going to put a
placeholder here and we're going to need a username of
the Linux machine. In my case, it's an Ubuntu machine, so that's the username, and we also have the IP
address of my EC2 instance, which I'm going to copy. That's the public IP
address of my machine. That's the Linux username, and we actually need
the private key. How do we get the private key? Well, I went ahead
and actually edit. If we go to the pipelines, actually edit the file already. If I go to the pipelines
library and secure files, I have my private key
already uploaded. That's the SSH private key
for connecting to my server, and that's what we're going to use. The way it works in the pipelines
if you want to actually use a secret file or
a sensitive file that you have in your Azure configuration we actually have a test for that, that downloads that secure
file on wherever environment this current job is
executing so it makes it available for the next
task in that job. Right here before that script, basically in the job steps as the first step we're
going to add a task. Let's actually add it from here. It's called Download secure file. The name of the secure file, which is this one right here. Click "Add" and there you go. That's the name of the
file that I just showed. Well, that's the name.
This is the task name. The resulting file
that we have we can reference it in a variable name, so we can call it sshKey. Now this is a nice part
because we can actually reference the downloaded secure file in the next task or next step of the job by simply using
the reference name. In this case,
secureFilePath. That's it. This gives us access to the downloaded sshKey file and this command will connect
us to the EC2 server. Now there's one more thing
we need to do before we actually use that
file, because by default, when it gets downloaded into
the environment it's going to have to lose permissions and we will get security
warning for using a secure file that has no
restricted permissions. We're going to fix that first. It simply is setting the
permission to 400 so nobody's able to read
it or execute it or have access to it
except for the file owner. Again, we're going to reference a
file like this and there we go. Now so this basically just
connects us to the server, but we want to actually
execute the command to run the docker
container on its server. We're going to pass in the docker
run command to the SSH command. We're going to do
that simply by using those characters and
on the next line, again, we can actually write multiple lines of any
docker commands we want. The first command will be to stop any running containers
before we start a new one. I'm just going to
copy the code there, which is an expression for finding
all the containers that may be running or not and removing them. With docker ps minus aq you basically get all the
containers are whether stopped or running and it basically stops them if they are
running and removes all of them. This makes sure that we have no container running
before we start up. Otherwise, we'll get an error
and it will not start up. The second one will
actually be to start our newly built image
container. Let's do that. Simple docker run command. We're going to run it
in the background so it doesn't actually hold our pipeline execution and it
just silently executes the task. We're going to find the
port of the container, which is 80 to host port 8080. Finally, we need the fully
qualified image name and the tech. Again, I'm going to copy that. This is basically the image
repository variable that we have defined right here as variable's
image repository and tag. This is the fully
qualified image name. In order to be able to pull and
run the image on the environment. Do not forget to close the
quotes that we opened here, right here at the end, and that should be it. Let me just check that. We have manual approval for deployment to the AWS platform
as well to EC2 instance. Here, in actual deployment, we are first grabbing that secure file that
we have available here, downloading it into
whichever environment executes this job and this task, and just with simple very
low-level SSH command. We're connecting to the server and we're running or we're starting
the container on the server. Let's actually execute this one and see the application
running on EC2 instance. However, let's actually
go back to the edit mode. This will actually
skip the test and prod deployments so just for
the sake of the demo, let's actually remove this one, the test stage because it only
runs if the job was scheduled. I'm going to save that
as well. There we go. We can cancel this one. I remove the deployed
to deploy test stage because it only run and gets to the production
if it was scheduled. Just for the sake of the demo
and the time we have removed it. After deploying to dev, it will wait for our
manual approval for both DeployProdAzure and
DeployProdAWS stages. Let's wait for the pipeline to
complete and see the results. Let's do the manual approval here. >> This will trigger another one
for deploy production on AWS. Let's resume this one as well. This should hopefully deploy to
EC2 instance, so let's see that. Our job or our task for
deployment actually failed. If we go inside and check the logs, we see the reason for that is that
host key verification failed. Basically, when you are
SSHing to a new host, for the first time you have to
prove that in interactive mode, and we're going to actually
have to fix that by adding an option here for
disabling that strict check mode. We're going to add that
option to the SSH command. Let me do that real quick. This will basically just
disable or shut off the host key checking request, and that should fix the issue. I'm going to save and
run the pipeline again. As you see, our deployed product
to AWS went through as well. Now we can actually check
that the application is available from the browser
or for that server. If I go back here and let's actually
grab that public IP address, and the port where we
exposed it on, there we go. We should see our application
successfully deployed on EC2 server. Awesome. Basically, we
were able to create a full pipeline that basically takes the code changes
that gets automatically triggered whenever something in repository changes including
the pipeline file, and builds the image
of the application, pushes it to our Azure
Container Registry. Then it deploys to development
environment before that we even had image
scanning that you can add to make it
more production grade. After that we have added
the comment to testing, which is triggered by schedule, and then deploy it
all the way to prod on two different environments
on Azure and AWS. That's our complete
end-to-end pipeline. We just saw a demo of building
a CI/CD pipeline where we basically produce an artifact which in our case
was a Docker image, and then we took that artifact and basically promoted
it all the way to prod, but going through the stages from, first of all deploying to the dev
environment and to test and then promoting it to production if
one of those stages went fine. Artifact is something that you actually generate from
your application. It's the packaged version
of your application. It's movable and it's
deployable to end environment. This could be a server
with operating system, this could be a platform
like Kubernetes cluster, this could be a serviced or
serverless platform, whatever. Artifacts are basically what we produce when we want to
deploy an application. Depending on the application and which technology or tech stack
we use to run that application, we're going to produce
different artifacts. As I said, Docker became the
standard artifact format, but there are also applications
that do not produce Docker or CI/CD pipelines
that do not produce Docker images and Teams that
still work with other artifacts. For other types of artifacts, for example that produce Java files or WAR files
from Java applications, that produce maybe zip
files or tar files from JavaScript applications
or [inaudible] packages from.NET applications. Those need actually
a story just like the repository where they
can store the artifacts. Azure DevOps actually also provides that feature called Azure Artifacts, which you can use as a storage
for these type of artifacts, types for different applications. Let's actually see
that inaction as well. >> Now we're going to go
over Azure Artifacts. When we connect artifacts, we can to any artifact, any package type, to any pipeline, and it helps us to share
our code efficiently. Now we can connect to a feed. If I go to connect to a feed, I can do a NuGet feed, I can do a Visual Studio feed. I can also connect things
like npm and Maven. If you're working in Python, maybe pip or twine. We can also just hook
in universal packages. The artifact repository in Azure
DevOps is as simple as that. You can also do your own custom
artifacts that you want to do, and share amongst your
different projects all across your organization, and it's just a secure
place to put it. That is Azure Artifacts
in a nutshell. >> Another thing that is also super, super important in CI/CD pipelines
and I mentioned that but we didn't actually have that
in the demo, is the test. Basically, going back to what
DevOps is and why we need it, DevOps is there to basically move any impediments
and anything that slows down the release process of taking our application changes and releasing it all the
way to the production. One of the major bottlenecks in the process is actually the testing. Because we can't release the
code changes if we're not sure that it's 100 percent working, it didn't break any
existing features or any existing things
in the application. For that we need extensive testing
and testing on many levels. You have the unit test, you have the application
performance test, functional test, and these test different aspects
of the application. In some cases depending on
application and how complex it is, it could be that you
have lots and lots of extensive automated tests, that you have test engineers dedicated for writing
those automated tests. But you still have some use
cases that you can't actually catch using those
automated test scenarios, so you still need a little
bit of manual testing. All those test cases are
basically something that your team has to go through
and check off the list, depending who is executing
them and who's checking them. Azure DevOps actually has
its own feature where you have a place to define
those test cases as well as have a unified view of all the tests that
you have to go through to make sure the application really complies to everything
and it is deployable. April will show that
to us as well in demo. >> Now we're going to cover
off Azure Test Plans. Now, again, these are
for manual testing. How do Azure Test Plans work? These are great for manual
and exploratory tests. Like Nana said, you
could have UI tests, unit tests, end-to-end tests,
we can have all sorts of tests. But this can be for
planned, manual testing. Where user acceptance
testing can be taken. We can also use this for
stakeholder feedback. If we're looking to
get feedback from marketing and sales teams
or maybe exec levels, this is a great way to do it. It gives us something to see
in a nice little pretty graph. But usually we tend to put our
testing in our CI/CD pipelines. We put our unit tests in
before we deploy our code. We look at different ways to
do this and this adds into our traceability element as a
high-performing DevOps organization. We won't be able to link these test cases and the test suites to features and our user stories. This is a really good way to do it. We're going to look at some ways
we can show off this reporting. To add a new test plan, here we click on "New test plan" and we enter a
plan name and I can just call it our test super, creative name, and I
give the area path. I can do ADODemo. Now, again, if you're working on multiple
projects with multiple teams, your area path would
potential change. I can also add this to an iteration. I can add this to
the existing sprint maybe Nana and I need to look at testing in this existing sprint with some manual tests that we did. I'll go ahead and
create the test plan, and from here I'll add a test case. Now, this will look like when
we create this test case, almost like a task and this will
then show up in our boards. I'm just going to call
this intra monitoring. We're going keep the design phase, again we've kept it in our ADODemo area and we
can add it to our sprint. But if I click on the ADODemo area, I can add it to infra and monitoring to get more
definition on our reporting. Then I can add steps here. I could say, review monitoring, and I can say pull the
monitoring reporting. >> Could maybe do some
load testing in there, and then produce
reports, and compare. But these are very manual
task that we've put in. I can add an existing work item
as a parent but maybe I want to add a link to an existing
item so let's see what we have. Well, we have some other
test plans in there. On a lattice add this to
our change in website. That way we have full
traceability and then, I can add some comments if I want to and I can add some link
on our development slide. Maybe if I want to open up a
branch to this, if I needed to, or I can open up a pull request, and basically I can
track this through a build if I also needed to. This helps us with the
traceability piece. I can also add tags. I'm going to go ahead
and add a testing tag. I'm going to go ahead
and save and close that test case and then from here
I can add other options here. I can do a new testing suite. I can add configurations, and I can assign testers
to run all the tests. I am myself a tester. I can also look for Nana. Look at that, I'm going to bring
her into this and we're going to send an e-mail to
run to our testers. We can send an e-mail
communication across. We can also send this to a
group of people as well. We can see the existing
tests we've put out. We can look at the
execution, we can chart it, and we can start looking
at how we do this and then we can start adding
in new test suites. We can do a static test suite, we can do a requirements-based
suite or query-based suite. Query is really helpful
to pull into the queries that we saw early in Azure
Boards and do it that way. We can query a work item type
with area path under ADODemo, and we can run this query. We get a few things
and actually they're all the other different test
cases that I've already created. I'm going to go ahead
and create this suite, and there's a lot more
we can do with it. That's pretty much the way we do it. I'm going to go back to
my other test plans. I did one for Sprint
one testing already. I did it for the website
configuration layout. I can do some basic testing, and I also want to
test our monitoring. I've already executed
those so I could get a bunch of runs on our environment. We can also see that
they both passed. We can see that as an outcome
on the latest two runs. I can also see the chart and see
the history of how it looked. Again, we want to get
that visibility in that reporting to maybe
the marketing people, so we changed our website. Maybe there's an impact on
marketing or sales because we made a really positive
change to our website, so we can pull these
manual tests out. The other thing we can look
at is our progress report. We can see that we
have one test plan with three different test points. We can see that two of the
three test points have run with a 66 percent acceptance rate. Then we have 100 percent pass
rate down here of those, and looks like we have
one that hasn't ran. We could look at maybe
manually running that or even automating that to run. We can also then configure perimeters around our
testings and get a lot more involved in the perimeters and
value set against our tests. I don't have any configured in this, but we would need to go through our configurations and
add a bunch that in. If you're looking to test against maybe a specific browser
or an operating system, you can add those in here, and then we can see all the test runs enlist that we could export, share, and see the history
of all of our test runs. Now that we've covered
off pretty much every aspect of Azure DevOps. We've seen the Overview,
we've seen the Boards, the Repos, the Pipelines, the Test Plans, the Artifacts. We've covered off every
single facet of Azure DevOps. But I want to go ahead and touch
on compliance real quickly. Because it doesn't matter how good we think a tool is
or how useful it is. The compliance piece is huge because
when we talk about security, security should literally be
in every single thing we do. I've actually govern this repository so we can get reporting
off the back of it. We've built-in
security from day one. We built in security into how we
authenticate into Azure DevOps, we've secured our pipelines, we've secured our images, we're securing all our resources
from beginning to end. But I also want to look at
the governance side of this. If any people work in the financial industry or
the healthcare industry, this is absolutely critical. This is scanning our repository where our code is sitting
for any vulnerabilities. Now this found a
vulnerability in here. It's an alert. I can look at it. We can see that it's come
from our checked-in files, so there's an issue
in one of our files. It's put it in as medium severity. It's given us a recommendation and actually a location
of where our code is. We can look at to upgrade
this and to fix this issue. We can dismiss it if we want to or open a task or a bug in our code. The other thing we can look
at are the components here. We can see all the components
in our repository, all the different releases
that are in there, and the packages, and how
they've been registered. It has a list of all
these components and these dependencies that
we're pulling our code from. Now compliance is really
critical if you're using things like open source
projects because that's huge. We can see again where
all these commits, we can see when it was committed, any changes that happened. We can see that 42
changes were made here. We can look at the bill number. We can trace the bill through, and we can look at the code
commit and who did it. We have this full
traceability end to end. We can also look at assessments. We can start compliance
assessments if we need to, and there's different products
you can hook in as well. There's always third-party
products I can hook in to help you write to be more compliant and to fulfill
that for an organization. We can add a new
product service here and start hooking that in
to what we want to do. This currently hooks
into service tree, and then you can browse all different third party tools that you can hook in for compliance. We're not going to go and set that all up because that's
quite in depth. But it's good to know that
that exists for Azure DevOps. I'm going to bring Nana back
in and we're going to wrap up, and hopefully you've
learned something today in our Zero to Hero episode. >> That was basically
our full demo of Zero to Hero of Azure DevOps and we covered a lot of different stuff, a lot of different parts
of different features, starting from the boards
all the way to compliance, and I really hope you guys learned a lot and you're actually
ready to use this platform now in your projects or you actually found some of the features that really solve some problems or
really fit into your projects, and you basically are excited to test it out
and start using the tool. >> Awesome. Thank you Nana for
joining me on this video today. Such an honor to have
you alongside with me to show up Azure DevOps,
and all the features. I want to thank
everyone for tuning in. We're going to put all the links in the show notes so you can access it. I just want to say, thank you
to everyone out there and go forth and right
some awesome code. [MUSIC]