>> Welcome to today's .NET Community Standup: Managed Languages and Runtime. I am your host, Bill Wagner, who forgot to hit the Go Live button. I am sorry about that, everyone. With me are two guests, Rich Lander and Chet Husk. Rich is a PM on the .NET team responsible for containers and related goodness, and Chet is also on the PM team, responsible for the SDK work on containers. To start us out, since I'm new to this whole container thing: Rich, can you give us a good overview for everybody? Why is this so important? Why are we investing so much here? >> Sure. Well, we're almost 10 years into our journey on containers, so we're definitely not starting from scratch. Let's start with the problem that containers solve. Before containers, people would get a machine or a virtual machine, then install a bunch of software on it, either manually or with scripts. That was time consuming, but also you could never reliably say
this machine is in this state. What would happen is someone would
forget to install something. You'd deploy the app and it would fail. Or you'd try to deploy two apps together that had slightly different dependencies, and those two apps would actually turn out to be incompatible. Or, the last scenario would be: imagine the Domino's Super Bowl situation, where all of a sudden they need to get new machines deployed in order to handle the capacity of the Super Bowl. Not a football fan at all, but it's a good description. They want to very quickly deploy new machines. Well, let's get a new virtual machine, let's then run those scripts. It's all super slow and not reliable. So anyway, what containers do is basically say: let's take the operating system, the dependencies, and the app, and put them all into a zip file. If you think about containers
like a NuGet package, it's actually not that far from the truth. It's like: let's put everything into one file, and then we'll just deploy that as a unit. That actually solves all those problems that I just described, because you basically have a tarball, like a zip file, that has everything in it. You deploy it to a machine that knows how to handle that particular file, launch it, and then you're off to the races. That's the basic problem that containers solve. There's a lot more to
it, but that's it. >> Are containers like a pseudo-VM that gets run inside a VM or on a physical machine? >> Yes, although the key
thing to think about is, people go down this container
as a VM thought pattern. I still think it's both a good analogy and a bad
analogy at the same time. The reason why it's a
good analogy is if you think of a VM as a VHD that contains
a particular set of software, then that part is true. The part about being virtualized, in the VM sense, I think is less helpful. It takes you down some bad paths. >> What would be different? What's the good path to be
thinking about it in that way? >> Just think about it as a file that contains a single configuration of software, from the operating system all the way to the app. Once you build that file, it will always be that; it will never be something different. Whereas VHDs and VMs are so easy to mutate that from Monday to Tuesday to Wednesday they could change multiple times, such that you no longer have confidence in the software they contain, and then you have
reliability problems. >> Based on that, how does .NET play into this? Why are we doing things in the .NET runtime that make it better for containers, as opposed
to just any old app? >> Sure. Well, I'll give a quick answer, and then I want to give Chet a chance to participate. We started our container journey with 1.0; that's when we first started to support Linux. This would have been the 2014, 2015, 2016 time frame, which was still relatively early in the container journey overall, relative to where we are now. If you want to
have a Linux-native dev stack, then it has to support containers. There's not another option. We started producing container images and trying to figure out how to enable the workflows that users wanted. Everything just evolved from there, based both on customer feedback and on the changing landscape; the landscape has changed a lot since those days. Then we had to make a bunch of changes in the runtime as well, to make it handle things like Linux cgroups. So containers are just a necessary thing to do. It's the not-optional
scenario for a Cloud stack. >> You want to leap in
and add anything, Chet? >> No, just plus-one on all that. The really exciting part for me, from my end of the .NET story of containers, is that we started with an enablement mindset: we want to play in the space, we want to behave the way that containerization tech expects, and get our first steps in there. Then, after a couple of cycles: what is good about the way we've done this? What do we need to tweak? That leads to deeper integration with cgroups and better patterns for deploying .NET applications in containers. Now we're at this third level, which is: okay, now we have some idea of the patterns, how can we make the tooling do this on your behalf? That's the exciting part we're at right now, where we're finally getting to where the SDK and our editor tooling can help you, the developer, benefit from this multi-year journey we've all been on. >> Can I just add one more thing, which is, again: why containers? Well, it also turns out
that in the public Cloud, the container services
are super popular. If we just look at Azure, we've got AKS, ACA, App Service, also ACI. Those are all container
based services. It would suck if we built this .NET thing and customers couldn't use it in a lot of the services in the Cloud that Microsoft maintains. I think we made some good choices on that front. >> Okay. >> How does this tooling work? Do you want to dive into that? Into: I hit dotnet build, now what? >> Yeah, go ahead, Chet. >> Sure. The tooling side of this got really exciting starting in .NET 7. We unleashed the capabilities of the .NET CLI with
respect to containers. Here's what this means practically for you as a developer (and we will see a lot more of this later on Rich's side of things). For a normal web application, I have some test Dockerfiles in here; forget those, because you don't need a Dockerfile to create a container image for this application anymore. You can very simply run one single command, dotnet publish with the container target, and get a Docker container. This is the workhorse that we've been building editor integrations on. This isn't just constructing the container; it's using all of the information that MSBuild has about your project to build the best and most compatible container that we think it can. Some interesting things
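As a sketch, the one-command flow Chet is describing looks roughly like this; the exact flags depend on your SDK version, and the property values shown are placeholders, not required names:

```shell
# .NET 7 SDK: the web SDK wires up a DefaultContainer publish profile.
dotnet publish --os linux --arch x64 -p:PublishProfile=DefaultContainer

# .NET 8 SDK and later: the PublishContainer target can be invoked directly,
# and MSBuild properties control the image name and tags.
dotnet publish /t:PublishContainer -p ContainerRepository=my-app -p ContainerImageTags=1.0.0
```

Both forms use only the MSBuild data from the project; no Dockerfile is involved.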
to call out here: this is a web application, so we're using the ASP.NET base image; we're using the version of that image that matches the SDK that you're using; and by default we pushed to your local Docker registry. If we look at that image in the Docker tooling here, you can see this is a fairly standard image. Here's the layer that we created, and all the rest of this we inherit from the work that Rich's team does making the base images for us. That took 10 seconds, and we had a perfectly production-ready container with no other files. That's the elevator pitch for the tech, and why I think it's a great stepping stone to building more connected experiences. >> With what you built there, that default container publish profile, and I probably get the name
a little bit wrong. Is that built in or is that something
I have to create for my app? >> For web apps, that's completely built in. You could dotnet new web, then cd into that generated directory, run that same command, and get a container. Now, your name would be different, because we infer some properties about the container, but all of that is something that you, as the user driving this thing, can control through MSBuild properties. We have comprehensive docs about how to do that. And, my favorite question: can the same be done through VS? The answer is yes. Because we built the tech on top
of publish profiles, VS natively understands this. You can right-click Publish on a web application in VS, and it will walk you through a series of steps to choose where to publish the application, which can be Docker Hub, Azure Container Apps, or any compatible container registry. Then, critically, it will ask you how you want to do that containerization. You get two methods there: by default, you will use the same command that I used, the tooling built into the SDK. But you can also choose to use Dockerfiles, at which point VS will scaffold out a Dockerfile for you. It's fully supported in VS from a publish perspective. >> Jeremy has a good question, our good friend Jeremy. Can you address that one? That's a really good question
we get asked all the time. >> Absolutely, it's a very astute question. There's a fine line to draw here. For the default use case, with the command that you saw me run, I didn't specify a destination for my container. We assumed you wanted to go to some local daemon; we prefer Docker, but thanks to a partnership with our friends at Red Hat, we can fully support Podman here as well. If you're on a Red Hat image and you run our tooling, we will write that image to the Podman binary that you've got available on your path. For the local push: yes, we have some features coming down the pipe, community contributed (there's a big, thriving community around this), that will let you create that image directly as a tarball that can be imported via docker load or podman load or whatever you want to do with it. Broadly, we're only reliant upon Docker for the default dev inner-loop scenario; you can also push directly to container registries without having Docker on your local machine. >> We had another quick question, based just on the demo
that you did, Chet: does it expose specific ports on the default container? >> Yes and no. It does expose specific ports, but as of about a month and a half ago, it no longer exposes them in a hard-coded manner. Actually, I can show this. One of the things
about containers is... let me go get the name of my container here. Let's look at what ports it exposes; this is a command that lets me pull out specific parts of the container metadata. I've done that, I've pulled out the exposed ports, and we see that we have exposed port 8080 on TCP. Why have we done that? That's because the
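The metadata command being described is, roughly (the image name here is a placeholder):

```shell
# Pull just the exposed-ports entry out of the image's config metadata.
docker inspect --format '{{json .Config.ExposedPorts}}' my-app
```

The same `--format` Go-template approach works for any other part of the image config, such as environment variables or the configured user.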
image that we based ourselves off of
the .NET ASP.NET image,
actually set that exposed port. Actually we can see that in
the docker tooling here, right here is the
layer that came from the .NET runtime-deps base image, which sets a user ID, the HTTP port for ASP.NET Core, and a few other pieces of environment data. We are aware of that in the tooling, and we're data driven. So, starting in .NET 8 and going forward, if you override the HTTP port on your base images or in our tooling, or if you set a different app user ID (and these are standard mechanisms that we expect all .NET base image users to use), then this tooling will work with whatever custom
decisions you make. >> Cool. Now let's add this one, which I think is an interesting one. Can we run tests using
this same mechanism? >> It really depends. This is an area that we don't have strong practices around
and strong guidance on. Depending on how you've architected your tests, this tooling can help or hinder you. If you look at a typical multi-stage Dockerfile build that somebody
might have for a project today, you'll first have the build aspect, and then you'll copy the
build aspect to like a test stage that runs your test, and then you'll have a publish
stage that publishes the app, and then finally, an
application stage that copies the published assets from the published build into the
final runtime container. It's a lot of different stages, a lot to keep track of, and that's part of the reason we
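The multi-stage layout being described might look like this as a sketch; the stage names and the `MyApp.dll` entry point are illustrative, not prescribed by any tooling:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet build -c Release

# Test stage: copies of the build output run the test suite.
FROM build AS test
RUN dotnet test -c Release --no-build

# Publish stage: produces the deployable assets.
FROM build AS publish
RUN dotnet publish -c Release -o /app/publish

# Final stage: only the published assets land in the runtime image.
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```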
made the tech in the first place. If you are running the test
in the Dockerfile build, this is not a good fit for you. I would encourage you to
consider other pathways like building the application into a container and running
your test against that container if you have an HTTP
service or something like that. That also gives you
something that you could potentially run against
your production deployed or staging
deployed instances to get some more confidence on. But it's an area
that we are evolving our thought process on
and would love to hear what you all come up
with to solve this, because it's not a solved problem. >> I think there was some
interesting layers underneath that, if we don't mind drilling
down just a bit. When you said, basically,
I'll be running tests against my container, if I think of the
difference between, say, unit tests or something
of that nature and an integration test
actually hitting port 80 or whatever ports configured and
running that whole pipeline, like I think of the difference
when I build Roslyn, I submit my changes. I'm only running some of the
unit tests on my machine on the stuff I actually changed, and then I'll just open a
draft PR and let it run the umpteen configurations and the thousands of tests
and so on in the Cloud. But it's all unit tests. What you were talking about, Chet, there is more of a: I'm going to actually build the container, and then
I'm going to run, basically, automated integration
test by hitting that port, and that's where this tooling
is more valuable, then. >> For unit tests, I would say you've got a couple of options there. Because this tooling packages a container for potentially a different architecture, if you have confidence in your unit tests, you could just run dotnet test in your CI pipeline
outside of containers. Like, what value are
you getting from running your unit test in
the container at that point? That's the trade-off
in my mind there. >> I can provide some
insight on this too. I wasn't able to post
this in the chat here, but I posted it through
the YouTube thingy. Hopefully, people can see it. We did an experiment. This is ages ago now. It looks like it got posted, that's awesome, several times. Basically, when we started
doing all this stuff, we did look at unit testing, and I think that's what's
being asked about by the question since the
Dockerfile was mentioned. The thing that we
realized is testing in the Dockerfile in the natural
way is not super good, and the reason is that
if the tests fail, it means the Docker build will fail, and then that means you
don't have access to the logs to figure out what happened in a programmatic sense. docker build doesn't
support volume mounting. There's no good way to get it. Instead what we did
is we came up with a pattern which is at this link, where what you do is you
use multi-stage build, and so one of these
stages is a test stage, which in the default run of docker build doesn't get exercised. If you just type docker build on your Dockerfile, you get an image and the tests don't get run. If you want to do a test run, then you build to that stage, and then you do docker run on the image that gets produced. You then volume-mount the directory where the test log file will get written. Then the running of the container will not exit just because the tests fail; that's a docker build characteristic. Then you'll absolutely get
access to the test log. You can make the container
run as long as you want. It just offers a ton
more flexibility. It's still all in the Dockerfile; you still have only one file that describes all of your pipelines, but it gives you a lot more flexibility. That's what I would recommend if you're going the Dockerfile
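Rich's pattern can be sketched as the following commands, assuming your own Dockerfile defines a stage named `test` that writes its logs under a known directory such as `/src/TestResults` (both names are illustrative):

```shell
# Default build: the 'test' stage is not in the final image's chain,
# so the tests are skipped.
docker build -t app .

# Build only up to the test stage, then run it with a volume mount so
# the test logs survive on the host even when the tests fail.
docker build --target test -t app-tests .
docker run --rm -v "$(pwd)/TestResults:/src/TestResults" app-tests
```

Because the tests run in `docker run` rather than `docker build`, a failing test doesn't destroy the build output, and the logs are on the host.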
route for unit testing. Jonathan says that is exactly what
we're doing right now. Awesome. >> Great minds think alike. >> Then there was one other question directly related to Chet's demo, which is right there. Great. You just showed
me on a new project. What about my favorite
project that already exists? Maybe I built it with .NET 7, maybe it's .NET Core 3.1, and I've been upgrading ever
since, what happens now? >> Well, first off,
big banner warning, 3.1 is out of support.
That's [inaudible]. >> I've been upgrading since
then. I've been keeping up. >> This works just as well
with the existing projects. From the 7.0.400 SDKs onward, every web project just
has this capability. The only thing that
materially changes between a .NET 7-targeted project and a .NET 8-targeted project is the
tag used on your base image, as well as the inference of what your user ID should be
or what your port should be, because those are
different pieces of data on the seven-base images compared
to the eight-base images. Works perfectly well with
the existing projects. We've been focusing on web
projects so far in today's talk, but it also works with
console projects as well. You have to add a NuGet package
in that case for today. But yeah, if you can
publish a project, well, I'm going to take a step back here, because UI projects don't work
especially well with Docker. But in general, console
and web applications should just work with this tool. >> That would include my Azure
Functions projects as well. >> Yeah, 100 percent true.
There's some caveats there because a lot of the
functions providers tend to have a lot of their own
bespoke tooling around getting your project into whatever shape that they want for their
deployment models. There's probably some massaging
that you will have to do to get it into that shape if
you use our tooling today. But part of my overarching
mission for this tooling over the next several quarters is to work with those teams across companies, across orgs to smooth that path
and make it truly drop-in. >> We had one other
unanswered question so far. So go ahead and add some more. I think Richard mentioned
Podman as part of that. >> No, Chet mentioned that. >> Pros and cons: Podman
is a daemonless system. That is, I think, the
most stark difference that comes to mind
between it and Docker. They have similar CLIs, and for basic use cases should
behave fairly similarly. But Podman, like I said,
doesn't have a daemon, so there's no persistent
process that goes on there, and it has, I would say, more strict opinions
about the way that you should name and interact
with containers. For most folks, like
for simple usage, there won't be any difference, but it's preference for devs. Like in production, you're
not going to be using either the Docker engine
or Podman's engine. You're going to be using
something like containerd or CRI-O, or something more bulletproof. >> I wouldn't word it
quite exactly that way because Docker uses
containerd as well. >> You're right. >> What I would have said is, at production, you're going to be using something like Kubernetes. That is the more analogous thing. Kubernetes, obviously, brings
a ton more stuff into play, because it's like a crazy
orchestrator-type system. But yes, most people do not
use Docker in production. >> This is where it starts
to get really complicated. But we have Docker desktop, that's what most people
are using for Dev. Docker desktop is not used
in production at all. Then we have Docker
EE from Mirantis. Some people use that in production if you only want to run primarily one app, or you want to use Docker Compose or Docker Swarm or something like that. But if you want the microservices lifestyle and use multiple containers at once, that's how you end up with Kubernetes. >> I will say one last
pro con of podman. It does not have the breadth of
support in compose as Docker does. Docker compose is really handy
and useful and full featured, and there's not parity between
the two, not 100 percent. Keep that in mind if you want
to experiment with podman. >> We made some major changes in [inaudible] that I'm sure the folks on this call
would want to know about. >> That would be awesome. >> How about you tell them, Chet? What did we do? >> What did you do? The
biggest thing is you enabled rootless
execution of containers. I think that's the
biggest umbrella to categorize some of
the changes under, and that involved,
I think two things, primarily one changing the user that the application can
run under and the second is ensuring that the port that
is exposed by default for ASP.NET is one that is capable of
being used by a rootless user. The old default of 80
required privileged Docker run times and privileged user permissions to use.
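As a sketch, opting into that non-root user in your own Dockerfile looks like this; `APP_UID` is an environment variable the .NET 8 base images define, and the app name is a placeholder:

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY bin/Release/net8.0/publish/ .
# Switch from the default root user to the predefined non-root user.
USER $APP_UID
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Using the numeric `$APP_UID` rather than the user name also satisfies Kubernetes settings like `runAsNonRoot`, which need a numeric ID to verify.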
How did I do, Rich? >> Good. People sometimes get
used even on our own team. The user in all of our images
by default is still root. It's always been root, it will continue to be root, but we added a user that
you can optionally use, and we can demo that shortly. That was one big change. Now the friction to adopting non-root container hosting
is now dropped to almost zero. There's no reason not to adopt it. Where was I going with
that? That was it. >> I'm actually curious
about that one. My biggest use of containers I mentioned earlier
before we were on camera, it's just that our team has a
number of GitHub Actions that are executing a .NET console app. So the action was configured: it's got a Dockerfile, it builds a container, builds the app from our latest tagged branch, and then
runs a thing, then goes away. So would we gain anything from not
running that container as root? Are there any known security concerns around GitHub Actions, especially if they run in
forks or something like that? Is there anything there
that would help us do this more safely that we
should start adopting? >> Well, that's really defense in depth is the way to think about it. It just means I would liken it to driving a car
without a seat belt. Ninety-nine percent
of the time I drive the car in the city, on the highway, I don't actually need a seat
belt because nothing happens that ever suggest that
the seat belt was needed. I'm not getting into a car crash. How that relates to
containers is you could pull in a bad dependency that basically puts your container
image at risk in some way that you currently don't understand because you
can't predict the future. Running as non-root is like that seat belt that
gives you a whole layer of protection when it's needed and you can never
predict when that is. >> Now that makes good sense. >> That's literally what the whole concept of
defense in depth means. >> It is arguably better, shouldn't hurt us in any way, and protects us against some things we may not be
thinking about right now. >> But think about it on
your own desktop computer. Windows moved to standard users. Back in the ancient times
when I was much younger, everyone ran as admin. Every Windows machine
was run as admin. Most of the time that wasn't
a problem until it was, and it definitely did turn
out to be a problem at times. It's like Slammer, Melissa, whatever, all those viruses and Trojans. Then everyone moved to standard user as a
defense in depth measure. We're never moving back, because it's necessary for desktop computing to be safe to run as a standard user, and to have very specific workflows for elevating. All the operating systems run that way now, which is very good. This is exactly the same thing. >> You got me convinced. >> I do just want to
point out that with the SDK container tooling, we default you to operating in rootless user mode for .NET applications starting up. We've got that documented. It's not something
we're trying to hide. It is our opinion that the tooling should guide
you to the safest default, and we think that is just a safer
default in this day and age. >> Maybe now is a good
time for me to do a demo. >> Absolutely. We've got
about 20 minutes left. I've got a couple questions
tagged for answering later. We're good. Let's go for it. >> You can switch or do
I switch or you switch? >> I think I do. You want your screen up there now? There you go. >> Let's show you this. >> Can we make that
font a little bigger? >> Yeah, I'm going to. >> Thanks. >> Let's see where
I was even at here. My favorite app. Let's look at this Dockerfile; actually, we can bring it up in VS Code. This is a super simple app, and it doesn't matter what the app does. Let's look at the Dockerfile. There's one line that is important: we're setting the user. I'm actually going to run it the first time with that line commented out, and then I'm going to make this a little bit bigger. I'm going to run docker build: docker build -t app . That should run pretty quickly
because I did this earlier. >> Now we're going to run it. Did I call it app? I hope so. Basically, this is a
diagnostic sample. You can see, I'm running on Arm64, the default image is Debian 12. This is RC1, but the really interesting
thing is we're running as the root user, which I just told everyone was terrible. Let's put this back in place. I'm going to explain. >> Shout out to Jeremy for noting
that it was Arm64 .NET. >> Yes, Jeremy loves that. This is being run on the Mac. Although, again, another thing some people get confused about is
when you're running on the Mac, the thing that's inside the
container images is still Linux. There is no Mac operating
system that runs in containers. Let's run this again. This is the normal way
because, like I said, I commented that line out before; it ran super quickly because of Docker caching. The only thing it needed
to do was change the user. Let's run clear again. Let's run the app. Now you can see I'm running as this
specific user app. Now I just want to
explain one thing, which is, what the heck
is this app UID thing? It doesn't say app there, it says app UID. It turns out that here, you can either specify a
user name by its name, like app, or you can specify
by its numeric value. There are some certain
scenarios in Kubernetes where the numeric value is super
useful to have specified. As a result, we just decided to
always specify the numeric value. If I do something
slightly different, I can show you what
that numeric value is. If I do docker run with the entry point set to bash, then app, and then we search for it... we just do this; this should work. Here's that APP_UID: it's specified to this numeric value, 1654. Then, to show you what that is, let's do cat /etc/passwd; I think that's where this is located, and grep for app. Let's see if this works. Actually, let me just get rid of this so you can
see the whole thing. This is the password file in
this particular container image. You can see root here; it's the first user listed. There's a bunch of other crazy users that come with Debian. Here's the user we added, which is app. Its numeric ID is 1654. Like I said, I don't want to go into a great deal of detail, but there are Kubernetes scenarios where specifying the numeric user ID instead of the user name is beneficial. That's why, back in VS Code, that's why we have this environment
variable specified instead. Like I showed you, this environment variable is defined in all of our
container images, so that you can do
all of this easily. That's all I wanted to
show, at least at this point. Actually, let me just show the ASP.NET version as well. Basically the same thing, but let's not do Debian this time, let's do chiseled. What the heck? Chiseled is a scenario where the user is set by default; that's why you don't see it here. If we went to the Debian one, you'd see that it's the same line as before. Anyway, let's run this docker
build -t app -f Dockerfile. Hopefully this won't take too long, building the app. Here we go. Now I'll just do clear again and get rid of that. (Should have closed Teams.) We do docker run. Now we have to set the port; this was a good demo to do. I was running on port 8000 on my local machine, but it's port 8080 by default in the container image. Then we do app, and then we go back to... actually, I'll open it up in this one so that we don't get the Teams stuff. localhost:8000, cool. Actually, Edge does a
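The run being described maps the ports like so (a sketch; `app` is the image tag built in the demo):

```shell
# Host port 8000 -> container port 8080, the ASP.NET Core default
# in the .NET 8 images.
docker run --rm -p 8000:8080 app
```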
slightly nicer view of this, but this is just some JSON. You can see here. >> Can you zoom in a bit? There we go. Now I can see that. >> Now you can see
in the JSON that's output that we're running as app. This also has a web view. No, we don't need it quite as big. You can see we're running. Go ahead. >> Rich, you mentioned that
this was the chiseled version. That's not a Linux distro
I'm familiar with. Would you mind talking
about what that is? >> Yes. Chiseled images: if you're familiar with distroless images, chiseled is basically the same thing; it's the Ubuntu version of that. The whole thing of chiseled has a couple of value props. One is that chiseled images are dramatically smaller, and the other is that, by virtue of being smaller, they also have a lot less attack surface area and a lot fewer components to have CVEs in, so the management of them is a lot better. Let's show what the value is here. We'll call this Debian app. That should do it. We'll
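For reference, the chiseled variants are published as separate tags alongside the regular images; these tag names reflect the .NET 8 timeframe and may change:

```shell
docker pull mcr.microsoft.com/dotnet/aspnet:8.0                 # Debian-based default
docker pull mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled  # Ubuntu chiseled
```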
just build this twice. Like I said, I built these before. We call this Ubuntu chiseled app. Now, let's look at docker images. Some of these will
be the wrong thing. You can see that. Crap, that's not right.
What did I do wrong? >> That doesn't make any
sense. Just a second. Oh, never mind. Ignore app. No, this is all correct. You can see for this app, the chiseled version
is 124 megabytes. When I used Debian, it was 257 megabytes, so chiseled saved us, it looks like, 130 megabytes. My math is correct. For free. Not only was there no downside, we actually got value by using chiseled, because there's just less there to worry about; it's another defense-in-depth thing. >> Now that savings
is that just an on disc savings or is
that also in memory, once it's loaded savings? >> You can go to the
main screen now if you want so we can just talk about this. There's at least three sizes
that matter in container land. There's on disk footprint, that's what we call uncompressed, that's what we were just looking at. There's the compressed footprint, which is when your container
image is sitting in the registry that's in the compressed state when it's
being downloaded over the wire, either to your local machine or to
your production container host. That's the compressed size, which would've been obviously
smaller than those numbers. Then there's the amount of
in-memory goo that gets loaded. This difference that we
just saw absolutely affects the first two scenarios on disk
and on the wire or in registry. The in memory state
will be affected a whole lot less by this
particular difference. It's not as if every single byte in those images is loaded into memory before the app is run, or anything like that. It will have a relatively meager effect on memory on your container host. >> I had a few questions
before we wrap up. A two parter here. Chet mentioned visual studio
generating a Dockerfile. Is it possible to do
the same from the CLI, can we get the Dockerfile
instead of the image? The reason is that there's
cases where we're asked to provide the Dockerfile rather than the actual pre-built image, going back to the question's scenario. >> Today? No, you cannot do this with
the tooling right now. It is the thing that should
be very easy to do though. Both VS and VS Code. The editor tooling for
both of those just have a rich templated experience. Where they are inspecting your project's properties and then filling those properties
into a template. We should be able to do
the exact same thing. It's just not a thing that the team has gotten to on the backlog yet. It's definitely one of
the issues in our backlog that is marked up for grabs. If this is a blocker for you and we haven't been
able to get to it yet, we would love to work with
you to get it implemented. We know that Dockerfile based deployment systems
are very prevalent, and we need to have an answer there to be able to fit the
tech into those worlds. >> Which of the many .NET repos is this SDK tooling in, so that
Chet can get to that? I'm honestly not sure myself.

>> A lot of the stuff that I've been showing is documented, and we have issue backlogs and all of that, at the one URL that I have not gathered yet; I had my list of URLs.

>> That's okay.

>> Making a banner right now. It's here: dotnet/sdk-container-builds. The actual code is in the proper SDK repo, but we should talk here first before anything happens. A lot of the stuff that Rich has been showing is in the .NET Docker repo, which is a different place, and you can find all of that there. That's the samples specifically, but start there.

>> Awesome. Then last couple
questions. We had one just coming in: Should I move from .NET 7 to .NET 8 RC1 even if I don't plan to release until 2024? As a reminder, November is the scheduled GA for .NET 8.

>> There's no reason to wait. Get onto the new train. You get to use new features. I've already started using C# 12 features. That's the version, C# 12?

>> Yes. Collection literals.

>> Yeah. That's actually probably the feature I use the most, collection literals.

>> I think that's true for me too. I'm slowly growing an affection for primary constructors in a lot of places.

>> I've started using those two, actually.

>> I'll even just answer this one. No, not now. There's a lot of discussion in the C# repo, because there are two obvious answers and
they conflict with each other. The first obvious answer is: make it an array, because it's a collection. The reason that's not a good one is that it forces an allocation. The other obvious answer is: make it a span, which is great, unless you're using it in a place where you can't use a span, or you actually want to add something to it later. It is something that we're going to continue to revisit, to try to get a natural type for any collection literal expression. But it's just not solved yet. We didn't like any of the answers we had.

We had one other one, from Jeremy: Does this work with Windows Subsystem
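For readers who haven't tried them, the two C# 12 features mentioned here look roughly like this (a sketch, not code from the stream):

```csharp
// C# 12 collection expressions: one literal syntax, explicitly typed targets
int[] numbers = [1, 2, 3];                 // array
List<string> names = ["Rich", "Chet"];     // List<T>
ReadOnlySpan<int> digits = [4, 5, 6];      // span, no heap allocation
// var all = [1, 2, 3];  // error: a collection expression has no natural type yet

// C# 12 primary constructor on a class
public class Greeter(string name)
{
    public string Hello() => $"Hello, {name}!";
}
```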
for Linux on Windows Arm64?

>> What is "this" in this case? If you're talking about the SDK container building, 100 percent yes. One of the big upsides is that the SDK is very platform-agnostic. It enables you to build across platforms and across RIDs, left and right and upside down, and twice on Sundays. That is also true for these containers. You can be on Windows Arm64 and target not Windows x64 but Linux x64 or Linux Arm64, with no loss of fidelity. It's all just configuration and the built outputs for your project.

>> Well, let me ask a clarifying
question about that, Chet, which is: what happens if you don't have a Docker Desktop-type solution installed? I can't remember if Windows Arm64 has a Docker Desktop solution yet, or if you're using WSL and you just didn't install anything through WSL or on the Windows side. What happens?

>> Sure. You would not be able to push to a local Docker daemon in that case, but you could still use the SDK to send that container image to an Azure Container Registry, Docker Hub, any of the container registries that follow the OCI open standard.

>> Then we had a nice one to close on, which came in quite a bit earlier: Is there a single place where
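The registry push Chet describes is driven by MSBuild properties. A sketch, where the registry URL, repository name, and tag are placeholders:

```xml
<!-- Sketch: SDK container publish settings in a .csproj -->
<PropertyGroup>
  <ContainerRepository>myapp</ContainerRepository>
  <ContainerImageTag>1.0.0</ContainerImageTag>
  <!-- When set, publish pushes to this registry
       instead of to a local Docker daemon -->
  <ContainerRegistry>myregistry.azurecr.io</ContainerRegistry>
</PropertyGroup>
```

With those set, `dotnet publish /t:PublishContainer` sends the image straight to the registry, with no local daemon required.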
we can find all of this? I'm assuming they mean the container registry and so on.

>> Yeah. The answer is: not exactly. One of my tasks for the fall, before we ship, is to update the container docs and then make sure all of this is there. That is currently not the case, sadly.

>> As Rich gets that done, if you look at just
what's new in .NET 8, that should be at least one click away from where all of this lives, along with all of the C# 12 changes and all of the other enhancements. Everything should be one click away from there; that's our goal. Anything that you two want to finish with? We've pretty much hit time. Awesome. I've actually learned more about containers and stuff than I knew before we started.

>> Yeah, I think my ending statement is: we've been investing a ton in containers for close to a decade, and I would say our investment level is actually increasing. We see that this whole modality is super important.

>> With that, I'm going
to say thank you. Unless you had anything to add, Chet; I just saw you nod in agreement there. Then, with that, I will say thank you very much, everyone, for attending, and we will see you next time.

[MUSIC]