[MUSIC PLAYING] KIM LEWANDOWSKI: So
welcome, everyone, to our session on
next generation CI/CD with GKE and Tekton. So I'm Kim Lewandowski, and I'm
a product manager at Google. CHRISTIE WILSON: Hey, everybody. I'm Christie Wilson. I'm also from Google Cloud. And I'm an engineering
lead on Tekton. KIM LEWANDOWSKI: So
before we get started, a quick show of hands-- how many of you are running
Kubernetes workloads today? Wow. Quite a few. Awesome. And how many of you
are practicing CI/CD? OK. Awesome. And then using Jenkins? Nice. Something else? OK, cool. Good [INAUDIBLE]. So today, we're going
to cover some basics. We're going to talk about
a new project called Tekton that we've been working on. We're excited. We have two guest
speakers joining us today to talk about their
integrations with Tekton. We're going to briefly
cover Tekton governance. And then finally, what's
in the pipeline for us. CHRISTIE WILSON: So
we're here to talk to you about the next
generation of CI/CD. And we think that the key to
taking a huge leap forward with CI/CD is Cloud
Native technology. So, I, for one, found myself
using this term Cloud Native all the time, but I realized
that I didn't actually know what it meant. So this is specifically
what Cloud Native means. Applications that are Cloud
Native are open source. Their architecture is
microservices in containers, as opposed to
monolithic apps on VMs. Those containers are
dynamically orchestrated, and we optimize resource
utilization and scale as needed. The key to all of
this is containers. Containers have
really changed the way that we distribute software. So instead of building
system-specific binaries and then installing
those and installing a web of dependencies,
we can package up the binaries with all of their
dependencies and configuration that they need and
then distribute that. But what do you do if you
have a lot of containers? That's where
Kubernetes comes in. So Kubernetes is a platform
for dynamically orchestrating containers. You can tell it how to
run your container, what other services it needs,
what storage it needs, and Kubernetes takes
care of the rest. And then, in addition to that,
Kubernetes abstracts away the underlying hardware,
so you get functionality, like if a machine that's running
your container goes down, it'll be automatically
rescheduled to a machine that's up.
And in Google Cloud, we have a hosted offering of Kubernetes called Google Kubernetes Engine, or GKE. So this is what
Cloud Native ends up looking like for most of us. We use containers as our
most basic building block, then we dynamically
orchestrate those containers with Kubernetes and control
our resource utilization. And these are the
technologies that we're using to build Cloud Native CI/CD. KIM LEWANDOWSKI: Cool. So for those not
familiar with CI/CD-- and it sounds like
most of you are-- it's really a set of practices to
get your code built, tested, and deployed. So CI pipelines are usually kicked off as part of a pre-submit workflow and determine what code can be merged into a branch. And then there's CD, which determines which code changes you then deploy from that branch, either automatically or manually. And so what we've learned
is that there's not really a one-size-fits-all solution. There are projects
that just want something simple and just
something that works out of the box, and then
there are companies with really complex
requirements and processes that they must follow
as their code travels from source to production. So it's an exciting time for us
in this new Cloud Native world for CI/CD. So CI/CD systems are really
going through a transformation. CI systems can now be
centered around containers, and we can dynamically
orchestrate those containers, and using serverless
methodologies, control our resource cost. And with well-defined
conformant APIs, we can take advantage of that
power and not be locked in. But in this new world, there's
a lot of room for improvement. Problems that existed
before are still true today. And some are just
downright harder. If we break our services
into microservices, they inherently consist of more
pieces, have more dependencies, and can be difficult to manage. And the terminology
is all over the place, so the same words can
mean different things depending on the tool. And there are a lot of tools. It seems like every week,
a new tool is announced. So I can't even keep
up with all of them, and I know that our customers
are having challenges making their own tooling decisions. So it's great to have
this many choices, but it can often lead to
fragmentation, confusion, and complexity. But when you squint at all
these continuous delivery solutions, at their core, they
all start to look the same. They have a concept of source
code access, artifacts, build results, et cetera. But the end goal is
always the same-- get my code from
source to production as quickly and
securely as possible. So at Google, we
took a step back, and after a few
brainstorming sessions, we asked ourselves if we could
do the same thing to CI/CD that Kubernetes did
with containers. That is, could we collaborate
with industry leaders in the open to define a
common set of components and guidelines for CI/CD systems
to build, test, and deploy code anywhere? And that is what the Tekton
project is all about. Tekton is a shared set of open
source Cloud Native building blocks for CI/CD systems. Even though Tekton
runs on Kubernetes, the goal is to target any
platform, any language, and any framework, whether
that's GKE, on-prem, multicloud, hybrid
cloud, tribrid cloud-- you name it. So Tekton started as a
project within Knative. People got very excited
to be able to build images on Kubernetes. But very quickly, they
wanted to do more. They wanted to run
tests on those images, and they wanted to define
more complex pipelines. And as enthusiasm
grew, we decided to move it out and put
it into its own GitHub org, where it became Tekton. So again, the vision
of this project is CI/CD building blocks that
are composable, declarative, and reproducible. We want to make it
super easy and fast to build custom extensible
layers on top of these building blocks. So engineers can take
an entire CI/CD pipeline and run it against their
own infrastructure, or they can take
pieces of that pipeline and run it in isolation. And the more vendors
that support Tekton, the more choices
users will have. And they'll be able
to plug and play different pieces
from multiple vendors with the same pipeline
definition underneath. So Tekton is a
collaborative effort, and we're already
working on this project with companies including
CloudBees, Red Hat, and IBM. And we've made a
super big effort to make it easy for new
contributors to join us. And again, pipelines is
our first building block for Tekton. And now Christie we'll
be diving deeper. CHRISTIE WILSON:
So Tekton Pipelines is all about Cloud Native
components for defining CI/CD pipelines. So I'm going to go into a bit
of detail about how it works and how it's implemented. So the first thing
to understand is that it's implemented
using Kubernetes CRDs. So CRD stands for Custom
Resource Definition, and it's a way of extending
Kubernetes itself. So out of the box,
Kubernetes comes with resources like pods,
deployments, and services. But through CRDs, you can
define your own resources, and then you create
binaries called controllers that act on those resources.
So what CRDs have we added for Tekton Pipelines? Our most basic building block
is something we call a step. So this is actually a
Kubernetes container spec, which is an existing type. A container spec lets you
specify an image and everything you need to run it, like what
environment variables to use, what arguments, what
volumes, et cetera. And the first new type we
added is called a task. So a task lets
you combine steps. You define a sequence
of steps, which run in sequential order on
the same Kubernetes node.
Our next new type is called a pipeline. A pipeline lets you
combine tasks together. And you can define the order
that these tasks run in, so that can be sequentially,
it can be concurrently, or you can create
complicated graphs. The tasks aren't guaranteed
to execute on the same node, but through the pipeline,
you can take the outputs from one task and
you can pass them as inputs to the next task. So being able to define
these more complex graphs will really speed
up your pipelines. So for example,
in this pipeline, we can get some of the
faster activities out of the way in parallel first,
like linting and running unit tests. Next, as we run into
some of the slower steps, like running
integration tests, we can do some of our
other slower activities, like building images and
setting up a test environment for our end-to-end tests.
So tasks and pipelines are types you define once and you use again and again. To actually invoke those, you
use pipeline runs and task runs, which are
our next new types. So these actually invoke
the pipelines and tasks. But to do that, you need
runtime information, like what image registry
should I use, what git repo should I be running against. And to do that, you use our
fifth and final type, pipeline resources. So altogether, we
added five CRDs. We have tasks, which
are made up of steps, we have pipelines, which
are made up of tasks, then we invoke those using
task runs and pipeline runs. And finally, we provide
runtime information with pipeline resources. And decoupling this
And decoupling this runtime information gives us a lot of power
because, suddenly, you can take the same pipeline that
you used to push to production and you can safely run it
against your pull requests. Or you can shift
even further left and, suddenly, your
contributors can run that same pipeline against
their own infrastructure. So this is what the
architecture looks like at a very high level. So users interact
with Kubernetes to create pipelines
and tasks, which are stored in Kubernetes itself. And then when the user
wants to run them, the user creates the
runs, which are picked up by a controller, which
is managed by Kubernetes. And the controller
realizes them by creating the appropriate pods
and container instances. KIM LEWANDOWSKI: So today, I'm
excited to welcome engineers from CloudBees and
TriggerMesh to talk about how they've been
integrating with the Tekton project. And I want to
highlight that they were able to do this
very quickly because we put a ton of time and
effort into onboarding new collaborators. So first, I'd like to
introduce Andrew Bayer on stage to talk to us about Jenkins
X and how Jenkins X is integrating with Tekton. ANDREW BAYER: So, hi. I'm Andrew Bayer. I am an engineer at CloudBees
working on pipelines both in Jenkins
and Jenkins X. So who here has heard of Jenkins X? That's a lot of people. Who here has played with it
or is using it, et cetera? All right. Good. That's good. So in case you're not familiar
with Jenkins X, let me try and probably fail to
explain it very well and then get corrected. Jenkins X is a new CI/CD
experience for Kubernetes that is designed to run
on Kubernetes and target Kubernetes. You can use it for
building traditional and cloud native workloads. You can create new applications
or import existing applications into Kubernetes
and take advantage of various quick
starts and build packs that allow you to
get the initial setup of the project, et
cetera, without having to do it all by hand. It gives you fully
automated CI/CD, integrating with
GitHub via Prow, so a lot of automation and
GitOps promotions, et cetera, without you actually having
to go click stuff by hand. It's got promotions, it's
got staging, dev, and prod environment integration, and
a whole lot of other magic. I'm here specifically to
talk about how Jenkins X is using Tekton Pipelines. So a user is not actually
going to necessarily know that they're using Tekton
Pipelines behind the scenes. We have our own ways of defining
your pipelines in Jenkins X, either via a standard
build pack or when you define your own pipeline
using our YAML syntax. Then at runtime, when a
build gets kicked off, Jenkins X translates
that pipeline into the CRDs that are necessary
to run a Tekton Pipeline. And then Jenkins X monitors the
pipeline execution, et cetera. So that means that, like I
said, the user isn't directly interacting with Tekton. The user's interacting
with Jenkins X. That means that we can do
a lot of things on our side without having to worry about
exactly how the user is going to react. So why are we using
Tekton Pipelines? Like I said, I've been
working on Jenkins pipelines as well for a while now,
and what we've come to learn is that pipeline
execution really should be standardized
and commoditized. CI/CD tools all over the place
have reinvented the wheel many times, and there's
no reason for us to keep doing that. So I'm really
excited about that. And we really like that we
can translate our syntax into what's necessary
for the pipelines to actually execute
so that we're still able to provide an opinionated
and curated experience for Jenkins X users
and pipeline authors without having to worry
about getting exactly the right syntax and
verbosity, et cetera. And it gets us away from the
traditional Jenkins world of a long running
JVM, controlling all execution, which is good. But the best part
is, as Kim mentioned, how great it is to
contribute and get involved with Tekton Pipelines. I only got involved in this
at all starting in November. And we've been able to
contribute significantly to the project, help with
figuring out direction, fixing bugs, integrating
it with Jenkins X, and get this all to the
point of being pretty much production-ready in
just a few months. And that's phenomenal. It's just been a great
experience and an incredibly welcoming community. And it's been a lot of fun. I don't have an
actual demo exactly. Let me go back again. Sorry. Of course, my screen
went to sleep. Hold on. Typo. All right. So what I wanted
to show you here was just quickly how
much of a difference there is between the
syntax a user is authoring, and what Tekton
actually needs to run, and why we think
that's valuable. So this is roughly what an obviously brain-dead simple pipeline in Jenkins X would look like, just 26 lines.
And then when we transform it, well, it's a lot more than that. But because we're able
to generate that and not require the user to
author it all by hand, we're able to inject Jenkins
X's opinions about environments, variables, about what images
should be used, and a lot more. And it's been
really great for us to be able to have the
full power of Tekton Pipelines behind the
scenes during execution without needing to make the
user have to worry about all those details all the time. So that's been really
productive for us. Thank you. [APPLAUSE] KIM LEWANDOWSKI:
So next, I'd like to introduce Chris
from TriggerMesh to talk about the
work he's been doing. CHRIS BAUMBAUER: Oh. Cool. Actually, yeah. So, thank you, Kim. Thanks, Andrew. Hi, everyone. My name's Chris Baumbauer,
developer with TriggerMesh, and also one of the
co-authors for the Tekton-- sorry, for TriggerMesh's Aktion. So starting off with
TriggerMesh's Aktion, this came out as a way of tying in the GitHub Action workflow, once it was announced last October, and binding it with the Tekton Pipeline approach. The idea being that,
with that workflow, we can translate that
into the various resources that the Tekton Pipeline
now makes available, and then be able to feed that
into your Kubernetes cluster and be able to either
experiment with it or even create additional hooks
so that it can actually receive external stimuli through
something like Knative's eventing service-- things such as
whether it's a pull request from something like GitHub, or some other web form being filled out, that triggers that build or
that workflow in the background to provide the result, and
ultimately allowing you to run things from anywhere. So, a little bit of a mapping exercise for the terminology. With
GitHub Actions, you have the concept
of the workflow. The workflow would be like
the equivalent of the Tekton Pipelines global pipeline. This is where anything and
everything runs task-wise. And then as far as
the GitHub Action, that is the equivalent
of that single step or that task where it's going
to be that one container that runs that command that
produces the output.
And I do call out one of the other components within the GitHub Action,
which is called Uses. This is more of
being able to define the image that you want to run
within your Tekton Pipeline task. Whereas Tekton Pipeline tasks expect a particular image, what we end up translating from the GitHub Actions-- or, at least, we will once I finish my pull request-- is the full support that GitHub has for their Actions: being able to point it at another GitHub repo, or at some other local directory within that repo that you have defined, to build the Docker image that you can then feed into the task to work the magic, as it were. So as far as our
little pretty picture, with the TriggerMesh
Aktion, we actually have two commands that
handle everything. We have, down at the
bottom, the create. This one creates the tasks,
creates the associated pipeline resources, and you can also
use it to create the task run or the pipeline run object to
create a one-shot invocation, usually if you want
to test something out to see if you've got your
workflow working just right or if you wanted something
else to call into this. And up above is, we also
create a Knative eventing sink, as well as the
associated transceiver, which creates a serverless
service within Knative to handle the creation and the
invocation of that task run object. So to give you a little bit of
a quick demo of what we have, let's see if I can-- oh, perfect. So what we're
looking at right now is the customary
Hello World example. You'll note, up
at the top, we do have our workflow defined, this
one being more of our pipeline object. The resolves indicates all of the actions that would be associated with our workflow, and the on indicates the type of event that would trigger it within your repo. And then, right below
that one, for the action, you'll have some kind of
identifier naming it. We're using centos as our base image, and, of course, we're just going to run hello, with a specification for
the environment variable. We do also see that
args does allow you to pass in either a
string or an array of strings to go in as well, which
is one of the nice things about their language. And as far as some
of our translation, we take care of that for you. So with the Aktion
command itself, as mentioned beforehand,
you have your create, you have your launch. One of the things
that we originally had started working on
is our own implementation of the parser for the GitHub
action workflow syntax, but since GitHub was kind
enough to open source it back in February so
that experimental projects such as these can
make use of it, we've started
transitioning to that one. So now it acts as more of
a sanity check on whatever workflows that you feed
in to ensure that it does what it's supposed to do. As far as the common
global arguments, we do allow for passing
in your Git repository. When it comes to creating things
such as the collection of tasks with the create
command, it is used for being able to not only
create a pipeline resource that could be referenced by
additional steps in case you wanted to add things to it,
but also in case of specifying that local directory
so that we know which repo to pull the Dockerfile
from to build the image. So now, OK. We'll feed that into
our Hello World. We just feed everything as is,
and we have our simple task object. You'll see our steps have
actually been broken down. We have our simple command. We have our
environment variables. We have our new
and improved image. And we also have
our name, which has been Kubernetified to resemble their traditional naming scheme. And then we can pass in -t
to provide our task run object. And then we have our
favorite, kubectl apply -f. Hopefully, it'll still talk
to my Kubernetes cluster. And we have our objects
that are created. And it looks like
it just finished. So we have our true,
we have our success, and we also have a pod name
so we can go here and take a look at any output
in the case of failures or if there was
something that you wanted to grab as part of any of
the successful messages and other things
along those lines. And then, if we were
to look at launch, the one thing it does require
is that we do pass in a task. And let's see if I can-- come on. This one does also require
that you specify a GitHub repo. All right. And this one, so here we have
our eventing source definition specifying the GitHub source, the requests for pulling in our credentials, and also the task that it will create and fire. So that, I believe,
is pretty much it. KIM LEWANDOWSKI: Awesome. So thank you, Andrew
and Chris, again, for sharing your work with us. Like I said before, we're
not doing this alone. And Tekton is actually
part of a new foundation called the Continuous
Delivery Foundation. This is an open
foundation where we're collaborating with industry
leaders on best practices and guidelines for the next
generation of CI/CD systems. And so initial
projects of the CDF include Tekton, Jenkins,
Spinnaker, and Jenkins X. Now, you've seen Tekton integration
with Jenkins X. We're also excited that we're starting
to integrate with Spinnaker as well. And so these are the
current members of the CDF, and we're really excited to
work with them on our mission to help companies practice
continuous delivery and bring value to
their customers as fast and securely as possible. So if you want to learn
more about the CDF, please check out
cd.foundation to get involved or just kind of
watch what's going on. CHRISTIE WILSON: All right. So what's coming next? So for the CDF, we have a
co-located summit coming up at KubeCon Barcelona on May 20. And if you're
interested in what's coming down the pipeline
for Tekton Pipelines, we're currently working
on some exciting features, like conditional execution,
and event triggering, and more. And for Tekton itself,
we're looking forward to expanding the scope
and adding more projects. And we recently had a
dashboard project added. And so for takeaways, if you're
interested in contributing to Tekton or you're interested
in integrating with Tekton, please check out our
contributing guide in our GitHub repo. It has information about how
to get started developing and also how you can
join our Slack channel and our working group
meetings, et cetera. If you're an end user,
check out Jenkins X, check out TriggerMesh
Aktion, and watch this space for more exciting Tekton
integrations in the future. And that's it. Thanks so much for
listening to our talk. [APPLAUSE] [MUSIC PLAYING]