Hi, my name is Will Gries, and I'm
a Program Manager on the Azure Files team. In honor of Ignite,
let's talk about one of the key features that was
released for Azure Files over the last year: the ability for
an Azure file share to be domain joined to your on-premises
Active Directory domain. This enables you to replace an on-premises file server with Azure file shares. This can be done in just five simple steps, and actually only the first step is strictly required. Just like an on-premises server, the very first
thing that needs to be done is to domain join your storage
account to your on-premises domain. So let's do that. So
here I have a storage account with a file share that I'd like
to be able to mount from on-premises. To domain join this
storage account, I'll open up PowerShell, and I'll import the
AzFiles hybrid module which gives me the cmdlets that
I'll need to do the job. The cmdlet that I'll use is
the Join-AzStorageAccount cmdlet. Just like a regular
Az PowerShell cmdlet, I'll need to provide the resource
group name and the storage account name on the object I
want to act on, which in this case is this particular storage
account; and the domain name of the domain that I'd like to be
domain joined; the resource type that I want to create, which in
this case is a computer account; and the organizational unit I
want to put this object in. And just like that I have domain
joined this particular storage account. Just like my on-premises
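For reference, here's roughly what that looks like in PowerShell. This is a minimal sketch: the resource group, storage account, and OU names are placeholders, and the exact parameter set can differ slightly between versions of the AzFilesHybrid module, so check Get-Help Join-AzStorageAccount in your own environment.

    # Import the AzFilesHybrid module (downloaded separately from the regular Az modules)
    Import-Module -Name AzFilesHybrid

    # Sign in to Azure so the cmdlet can reach the storage account
    Connect-AzAccount

    # Domain join the storage account by creating a computer account for it
    # in the chosen organizational unit (placeholder names shown)
    Join-AzStorageAccount `
        -ResourceGroupName "myResourceGroup" `
        -StorageAccountName "mystorageaccount" `
        -DomainAccountType "ComputerAccount" `
        -OrganizationalUnitDistinguishedName "OU=FileServers,DC=contoso,DC=com"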
Just like with my on-premises file server, before I can actually access this share from on-premises, I'll need to put a share-level ACL on either the storage account or on the individual file share. When I do so on the storage account, that share ACL applies to all of the shares underneath it. In this case, I'll put it at the storage account level; I only have one share in there anyway. I do this by adding a role assignment: I'll add a role, in this case searching for the Storage File Data SMB Share Elevated Contributor role, and then type my name. I could add a group here as well, but I'll add my individual user account as the role assignment. Now that the role assignment has been applied, I can actually mount this file share on-premises. Before we do that, though, I want to refer you to these links to our documentation, which will show you how to do everything I just did in your own environment.
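By the way, if you'd rather script that role assignment than click through the portal, a rough equivalent with placeholder names looks like this; the role is the built-in Storage File Data SMB Share Elevated Contributor role I searched for above.

    # Scope the assignment to the storage account so it applies to every share in it
    $account = Get-AzStorageAccount `
        -ResourceGroupName "myResourceGroup" `
        -Name "mystorageaccount"

    # Grant my user account the built-in elevated contributor role for SMB shares
    New-AzRoleAssignment `
        -SignInName "will@contoso.com" `
        -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" `
        -Scope $account.Id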
Before mounting the file share, a best practice is to test the network connection to Azure Files. I can do that by opening up a PowerShell prompt and using the Test-NetConnection cmdlet. The computer name parameter is the fully qualified domain name of the storage account, so storageaccount.file.core.windows.net, and the common TCP port will be SMB. As you can see, this failed for me. Like many of you, I am on a network that has port 445, the SMB port, blocked. If this succeeded for you, you can stop here; you don't have to continue any further.
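Here's the test, with a placeholder storage account name; TcpTestSucceeded comes back False when port 445 is blocked, as it was for me.

    # Test SMB (TCP port 445) connectivity to the storage account's public endpoint
    Test-NetConnection `
        -ComputerName "mystorageaccount.file.core.windows.net" `
        -CommonTCPPort SMB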
But if this failed, we have an answer, and the answer is to create a private endpoint for your storage account. A private endpoint gives your storage account a private IP address within the IP address space of a virtual network. This enables you to tunnel from your on-premises network into your Azure network, working around the port 445 issue. So to
create a private endpoint, I navigate to the private endpoint
connections tab on the left hand side of the screen; I click new
private endpoint; and then I provide the information for the
private endpoint resource, like the name and the region. There's actually no requirement that this be the same region as the storage account, but in this case I want it to be, so I'll create it in France Central, where my storage account was created. I hit the next button, and I need to input the
resource type, which in this case is the storage account,
and the actual name of the storage account, and then the
service that I want to connect to. The configuration tab shows me
the virtual network that I need to attach to, and it actually shows me another
important item, the private DNS integration settings that you
see underneath that, but for now let's put a pin in that and go
create the private endpoint. Click create, and just like that, you end up with a completed private endpoint. I did speed that up a little bit, but it's actually a pretty quick deployment. These doc links will show you more about how to create private endpoints.
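I created mine in the portal, but if you'd rather script it, a rough PowerShell sketch of the same thing looks like this. The names are placeholders, and it assumes the virtual network and subnet already exist.

    # Point the private endpoint at the storage account's "file" sub-resource
    $account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
    $connection = New-AzPrivateLinkServiceConnection `
        -Name "mystorageaccount-file" `
        -PrivateLinkServiceId $account.Id `
        -GroupId "file"

    # Create the private endpoint in an existing subnet of my virtual network
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"
    $subnet = $vnet.Subnets | Where-Object { $_.Name -eq "default" }
    New-AzPrivateEndpoint `
        -ResourceGroupName "myResourceGroup" `
        -Name "mystorageaccount-pe" `
        -Location "francecentral" `
        -Subnet $subnet `
        -PrivateLinkServiceConnection $connection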
Now that I've created my private endpoint, I'll need to set up a VPN tunnel between my on-premises network or workstation and my Azure virtual network.
This is a rather involved process, but I don't need to do it for every file share; I only need to do it once. I'm not going to show how to do this in this video; instead, I'll direct you to these links to learn more about how to set up either a site-to-site or a point-to-site virtual private network. I just want to show that I've already done this in my environment. To demonstrate it, I'll go into my private endpoint resource, look at the private IP address for my storage account, and note it down. Then I'll run Test-NetConnection again, this time supplying that IP address instead of the computer name, and again the SMB port, and you'll see that this time it actually succeeded. That's exactly what I expected to happen, so I know my connection is up and running.
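Here's that second test; the address shown is just an example of what the private endpoint's private IP might look like in your virtual network.

    # Test SMB connectivity again, this time against the private endpoint's private IP
    Test-NetConnection -ComputerName "10.0.0.5" -CommonTCPPort SMB
    # Over the VPN tunnel, TcpTestSucceeded now comes back True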
Now, you might have noticed that I used the private IP address of the private endpoint rather than the fully qualified domain name of the storage account. The reason I did this is that the storage account's name will resolve, via DNS lookup, to the public IP address of the storage account rather than to the private endpoint we just created. You can see this by using the Resolve-DnsName cmdlet in PowerShell to resolve the DNS name to a particular IP address; this is the equivalent of nslookup on Windows or on Linux, if you'd prefer to use that. And as you can see, when I did this resolution, I got back the public IP address of the storage cluster that hosts my storage account in Azure.
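Here's what that check looks like, again with a placeholder storage account name.

    # Resolve the storage account's fully qualified domain name
    Resolve-DnsName -Name "mystorageaccount.file.core.windows.net"
    # With no DNS changes in place, this returns the public IP of the Azure storage
    # cluster rather than the private endpoint's private IP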
I want to change this so that it points at the private endpoint instead. To do that, I've deployed a VM in Azure that will act as an intermediary between my on-premises domain controller and DNS server and the cloud private DNS zone that was created along with the private endpoint. I called it CloudDNS, and
as you can see, here's a remote desktop connection to
my cloud DNS server. You can see that I've actually
installed the DNS server role. I didn't install anything else; this is the only thing I did before recording this video. Then, over on the left-hand side of the screen
here, I have my on-premises domain controller and my cloud
domain controller. So I'll create a conditional forwarder
on my on-premises domain controller for core.windows.net, and I'll point it at the cloud DNS
server that I'm actually remoted into. As you can see, the validation
of this IP address failed: don't worry about that, that's
expected. Now, if I go to my cloud DNS server and create a conditional forwarder there, I can forward the same name, core.windows.net, to the special IP address inside my Azure virtual network that fronts the default Azure DNS service, and therefore the private DNS zone that I created. I'll then quickly refresh the server cache to make sure it responds with the answer I expect, and refresh the client cache over here on my client. Now, if I resolve the name again, I get back the appropriate IP address rather than the public one: you can see that it resolves to the expected private endpoint IP address.
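If you'd rather script the DNS pieces than click through DNS Manager, here's roughly what those steps look like in PowerShell. The forwarder address on the on-premises side is a placeholder for my CloudDNS server's private IP, and 168.63.129.16 is the Azure-provided DNS address available inside a virtual network.

    # On the on-premises domain controller: forward core.windows.net to the CloudDNS VM
    Add-DnsServerConditionalForwarderZone `
        -Name "core.windows.net" `
        -MasterServers "10.0.0.10"

    # On the CloudDNS VM: forward the same zone on to Azure's own DNS service,
    # which knows about the private DNS zone created with the private endpoint
    Add-DnsServerConditionalForwarderZone `
        -Name "core.windows.net" `
        -MasterServers "168.63.129.16"

    # Flush the server and client caches, then resolve the name again
    Clear-DnsServerCache -Force
    Clear-DnsClientCache
    Resolve-DnsName -Name "mystorageaccount.file.core.windows.net"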
Before we move on, I'd like to show you the following
resources where you can learn how to do what I just did. A common question we get is why you have to use the fully qualified domain name of the storage account rather than some other name, like the short name of the storage account or an arbitrary name such as an existing file server name. The reason is that we use the fully qualified domain name to find which storage account you want to talk to. You can see this through the following cmdlet: Get-AzStorageAccountADObject, which gets the object in your Active Directory domain that represents your storage account. This object has a property called the service principal name, which is used as part of Kerberos authentication to identify which server, which resource, you're trying to access. If I look at klist, the Kerberos list command, I can actually see that I've got a Kerberos ticket issued for this particular share. So when I access the share via File Explorer, Windows under the covers will get a Kerberos ticket for that name and pass it along on the SMB session to mount the file share.
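Here's roughly what that looks like; Get-AzStorageAccountADObject comes from the same AzFilesHybrid module I imported earlier, and I'm assuming it takes the same resource group and storage account parameters as the other cmdlets in that module.

    # Look up the Active Directory object that represents the storage account
    # and inspect its ServicePrincipalName property
    Get-AzStorageAccountADObject `
        -ResourceGroupName "myResourceGroup" `
        -StorageAccountName "mystorageaccount"

    # List the Kerberos tickets for the current session; after mounting the share
    # there should be a ticket for cifs/mystorageaccount.file.core.windows.net
    klist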
So now that you've seen why you mount the file share with that particular name, the next question we often get is whether it's possible to use an alternate name, for example an existing file server name, to mount the file share. The answer is yes: you can achieve that with DFS-N. As a reminder, this step is absolutely optional; if you just want to start using your file share, you can, with the storage account's fully qualified domain name. This is really about taking over an existing on-premises file server name. So here I have my file share again.
I've created a file on it since we last looked at this and now
I'm going to show you how to set up DFS-N. So I'll go back to the
main resource group, and you'll see that I've actually pre-deployed a VM to be my DFS-N server. I have that open, so I can
remote into this, and you'll see that I've already installed the
DFS namespace or DFS-N server role to be able to act as
an intermediary and take over my existing file server name. To make use of the DFS-N server role to take over an existing file server name, I'm going to use a feature of DFS-N called root consolidation. This feature is a little hidden: it's actually enabled via the registry. I have a PowerShell script that will set the registry keys I need, so let me run it.
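The script itself is just a couple of registry writes. A minimal sketch of what it sets, based on the documented root consolidation setting, looks like this.

    # Enable DFS-N root consolidation by creating the Replicated key and setting
    # ServerConsolidationRetry to 1 under the DFS service parameters
    New-Item `
        -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
        -Force | Out-Null
    Set-ItemProperty `
        -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
        -Name "ServerConsolidationRetry" `
        -Value 1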
Now that these registry keys are set, I can go and create an A record for this particular server name, which happens to be called MyServer. I'll assign it the IP address of this machine and add the host. Done. And now I can
configure DFS-N to take over this name, so I'll add a new namespace. I'll type in this server's name and click next. Then I'll type in the old server name that I want to take over, prepended with a pound sign, or a hashtag symbol, depending on which generation you're from, and click next. I need to select a standalone namespace; the root consolidation feature only works with the standalone namespace option. Click next, and finally click create. Now that I have created my namespace, I can navigate into it and create the folder target for my share. I'll simply call it share, and the path is my storage account, given as the fully qualified domain name of the storage account followed by the share name.
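For reference, the A record and the DFS-N namespace and folder can also be created from PowerShell. This is a sketch using my example names: MyServer for the old file server name, a placeholder DFS-N server name and IP, contoso.com for the domain, and placeholder storage account and share names. It also assumes the backing folder and SMB share for the namespace root already exist, which the new-namespace wizard otherwise creates for you.

    # Point the old file server name at the DFS-N server's IP address
    Add-DnsServerResourceRecordA `
        -ZoneName "contoso.com" `
        -Name "MyServer" `
        -IPv4Address "10.0.0.20"

    # Create the standalone namespace that takes over the old name (note the # prefix)
    New-DfsnRoot `
        -Path "\\MYDFSNSERVER\#MyServer" `
        -TargetPath "\\MYDFSNSERVER\#MyServer" `
        -Type Standalone

    # Add a folder in the namespace whose target is the Azure file share
    New-DfsnFolder `
        -Path "\\MYDFSNSERVER\#MyServer\share" `
        -TargetPath "\\mystorageaccount.file.core.windows.net\share"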
Now that I've done that, I can pull open File Explorer on my client and type in the name of my old file server, MyServer, and the name of the share I want to access. And now you'll see that, just like before, I have connected to my Azure file share with my old name, and I can see the file that I have in that file share. To learn more about how to use DFS-N, please visit the following link. Using these five steps, I have replaced my on-premises file server with an
Azure file share. If you have any
questions about this, don't hesitate to reach
out to us at azurefiles@microsoft.com
and we'll get right back to you. Thank you.