Month: January 2021

Shared Storage Options in Azure: Part 4 – Azure NetApp Files


Reading Time: 10 minutes

Welcome to Part 4 of this 5-part Series on Shared Storage Options in Azure. In this post I’ll be covering Azure NetApp Files. We have talked about other file-based shared storage in Azure already with SMB and NFS on IaaS VMs in Part 2, and again with Azure Files in Part 3. Today, I want to cover the last technology in this series – let’s get into it!

Azure NetApp Files:

Azure NetApp Files (ANF) is an interesting Azure service, unlike many others. ANF is actually first-party NetApp hardware running in Azure, which allows customers to use the enterprise-class, high-performance capabilities of NetApp directly integrated with their Azure workloads. I will note that you can also use NetApp’s Cloud Volumes ONTAP appliance, a virtual machine that sits in front of blob storage and can likewise be used for shared storage, but I won’t be covering that here as the ONTAP volumes aren’t first-party Azure. Along with ONTAP, there are a number of great partner products that run in Azure for this type of storage solution. Check with your preferred storage vendor; they likely have an offering.

Before we jump into it, I’ll note that there are various configurations and operations you can use to tune the performance of your ANF setup. I won’t be going into those here, but will be writing another post at a later time on performance benchmarking and tuning for ANF.

Initial Configuration:

Azure NetApp Files is a bit different from what you would expect with Azure Files, so I’m going to walk through a basic setup here. First of all, ANF currently requires your subscription to be whitelisted for use; to submit your subscription you’ll need to use this form.

After you’ve been whitelisted, head into the portal and create an Azure NetApp Files Account.

Create Azure Netapp Files Account

After it’s created, the first thing you will need to do is create a capacity pool. This is the storage from which you will create volumes later in the configuration. Note: 4TB is the smallest capacity pool that can be configured.

Azure NetApp Files Storage Hierarchy

Reference: https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy

I’m using an automatic QoS type for this capacity pool, but you can read more about how to set up manual QoS. What is important here is your choice of service level, which cannot be changed after the capacity pool is created. I will talk more about the service levels later in this post.

Azure NetApp Files Capacity Pool

 

Later on I’m going to be using both an NFS and an SMB share. To use an SMB share with Kerberos authentication, you will need a virtual network with which to integrate ANF and your source of authentication. I’m going to create a virtual network with two subnets, one for my compute and one for ANF. The ANF subnet needs to be delegated to the Azure NetApp Files service so it can use that connection, so I’ll configure that here as well.

Create Azure Virtual Network

Delegate Azure Subnet to Azure Netapp Files

 

Now that I’ve set up the network, I’m going to create the compute resources for my testing environment, which will consist of the following:

  • Domain Controller
  • Windows Client
  • Linux Client
  • Azure Bastion (used for connecting to those VMs)

I’ll use the Windows client to test the SMB share, and will test the NFS share with the Linux Client.

Domain Controller

Create domain controller VM

Azure Bastion

Create Azure Bastion

Windows Client

Create Windows Client

Linux Client

Create Linux Client

 

Now that those compute hosts are all being created, I’m going to go create my NFS volume. I initially created a 4TB capacity pool, so I’ll assign 2TB to this NFS volume for now. I’m going to use NFS 4.1 but won’t be using Kerberos in this lab. My export policy is also set to allow anything within the virtual network to access the volume – this can be modified at any time.

 

Create NFS Volume

Create NFS Volume part 2


 

Alright, the NFS volume is all set up now, and we’ll come back to it later to test on the Linux Client. Next I want to set up an SMB share, which first requires that I create a connection to Active Directory. I built mine manually in my lab, but you can also use this quickstart template to auto-deploy an Active Directory domain for you. It’s also good to know that this source can be either traditional Active Directory Domain Services or Azure AD Domain Services.

You will want to follow the instructions in the ANF documentation to make sure you have things set up correctly. I have my domain controller set to a static IP of 10.0.0.4, named the domain “anf.lcl”, and set up a user named “anf”. Now that this is complete, I can create the Active Directory connection.

Azure NetApp Files Active Directory Join

 

Great! Now that we have that configured, we can use the connection to set up the SMB share. I’ll use the rest of the 4TB capacity pool here, along with the Active Directory connection we just created, to build the SMB share.

 

Create SMB Share

Create SMB Share - Part 2

Create SMB Share - Part 3

 

After this completes, you can jump into Active Directory and see that ANF has created a computer account in AD. This account will be the “host” of the SMB share, and ANF will use it to verify credentials attempting to connect to the share.

Computer Account in Active Directory

 

Fantastic, now we have ANF created, with a 4TB capacity pool, a 2TB NFS share, a connection to Active Directory, and a 2TB SMB share. On the Volumes tab we can now see both of those shares are ready to go.

Azure NetApp Files Volumes

 

Each of the shares has a tab called “Mounting instructions”. I’m going to test the SMB share first, so I’ll go grab this information. You can see the UNC path looks like an SMB share hosted by the computer “anf-bdd8.anf.local”; this is how other machines will reference the share to map it. Permissions on this share can be controlled much like those on any other Windows share – take a look at the docs to read more on how to do this.

SMB Mounting Instructions

With this information we can go use the Azure Bastion connection to jump into our Windows Client and map the network drive.

 

Mount Network drive in Windows

 

Mounted network drive in Windows

 

Voila! The Azure NetApp Files SMB share is mounted on our Windows Client. Now let’s go do the same thing with the NFS share: grab the mounting instructions, use the Azure Bastion Connection to connect to the Linux Client, and mount the NFS share.

 

Mount NFS volume

Mount NFS volume - part 2

 

 

Cost, Performance, Availability, and Limitations:

Performance:

As noted earlier, there are three service level tiers in Azure NetApp Files: Ultra, Premium, and Standard.

  • Ultra: 128 MiB/s of throughput per 1 TiB of provisioned storage
  • Premium: 64 MiB/s per 1 TiB
  • Standard: 16 MiB/s per 1 TiB

 

Remember that earlier I selected Standard (the lowest performance tier) for my capacity pool; this tier is designed more for capacity-driven scenarios than for raw performance and is much more cost effective. With that said, let’s do a quick performance test.

  • 2TB SMB share on the “Standard” tier
  • D2s_v3 Windows Client
  • IOMeter tool running 4 workers, with a 50% read, 4 KB test

IOMeter Test

 

The performance capabilities of ANF are a combination of 3 main things:

  • Performance Tier
  • Volume Capacity
  • Client Network Throughput

As I mentioned in Part 2 of this blog series, and similar to managed disks, the performance of an ANF volume increases with its provisioned capacity. Also remember that Azure VM SKUs have an expected network throughput, which matters here because this storage is accessed over the network. If the VM is only capable of 1,000 Mbps then, regardless of the ANF configuration and depending on your I/O size, your tests will only ever perform at up to 1,000 Mbps.
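
To make that concrete, here’s a rough back-of-the-envelope sketch (not an official formula) combining the three factors above. The per-TiB throughput numbers are the service levels listed earlier; the 1,000 Mbps VM bandwidth is just a hypothetical example, and MiB/s vs. MB/s rounding is ignored.

```python
# Rough throughput-ceiling estimate for an ANF volume with automatic QoS.
TIER_MIB_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def throughput_ceiling_mib(tier: str, volume_tib: float, vm_mbps: float) -> float:
    """Approximate MiB/s ceiling: the lower of the volume limit and the VM's network bandwidth."""
    volume_limit = TIER_MIB_PER_TIB[tier] * volume_tib  # auto QoS scales with provisioned quota
    vm_limit = vm_mbps / 8                              # Mbps -> rough MiB/s
    return min(volume_limit, vm_limit)

print(throughput_ceiling_mib("Standard", 2, 1000))  # 32.0 -> the 2 TB volume is the bottleneck
print(throughput_ceiling_mib("Standard", 4, 1000))  # 64.0 -> doubling capacity doubles the ceiling
```

This is exactly the pattern the resize tests below demonstrate.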

Just to verify that the performance is tied to capacity, I’m going to increase the capacity pool and then double the size of the SMB volume from 2TB to 4TB and run the test again.

 

resize azure netapp files pool

 

Resize SMB Share

 

IOMeter test

 

We can see that the performance roughly doubled, with no change inside the VM (since we’re not yet hitting the Network Bandwidth limitations of that VM SKU).

Now let’s run the same test using the FIO tool on our Linux Client against the NFS share.

FIO on Linux

 

Again we’ll go ahead and increase the capacity pool then double the size of the NFS share and run the test again.

 

Resize NFS volume

FIO Test on Linux

 

Similar to the SMB testing, after doubling the size of the NFS share its performance also roughly doubled. Increases in capacity on the pool or volume can happen live, while the systems are running, with no impact.

As I mentioned earlier, I will be writing another blog post at a later time on performance benchmarking and tuning on ANF. In the meantime I recommend reading the ANF documentation on performance, for example this one on Linux Performance Benchmarking.

 

Availability:

Similar to what you would expect with a traditional NetApp appliance, ANF does support the use of snapshots. Keep in mind that your snapshots will consume additional storage on your ANF volume.

Azure NetApp Files Snapshot

 

As earlier noted, Azure NetApp Files is a true NetApp appliance running in an Azure Datacenter and is therefore subject to the same appliance-level availability. In addition, there is a 99.99% financially backed SLA on Azure NetApp Files.

Note: Cross-Region replication is currently in Public Preview so I won’t note it as an option yet, but will edit this post once it becomes generally available.

 

Cost:

Pricing for Azure NetApp Files is incredibly straightforward – you pay for the capacity you provision (GB) multiplied by the number of hours it is provisioned.

Currently, pricing ranges from $0.14746/GB to $0.39274/GB based on performance tier. Please see the pricing page for the most up-to-date information.

You can also see this documentation on Cost Modeling for Azure NetApp Files for a deeper dive into modeling costs on ANF.
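
As a rough sketch of that model (an assumption on my part: I’m treating the per-GB rates quoted above as monthly rates and deriving an hourly rate from a 730-hour month; confirm against the pricing page and calculator before relying on it):

```python
# Back-of-the-envelope ANF capacity-pool cost estimate.
RATE_PER_GIB_MONTH = {"Standard": 0.14746, "Ultra": 0.39274}  # example rates quoted above (USD)
HOURS_PER_MONTH = 730

def pool_cost_usd(tier: str, pool_tib: float, hours: int = HOURS_PER_MONTH) -> float:
    """Approximate cost of a provisioned capacity pool for the given number of hours."""
    gib = pool_tib * 1024
    hourly_rate = RATE_PER_GIB_MONTH[tier] / HOURS_PER_MONTH
    return gib * hourly_rate * hours

print(round(pool_cost_usd("Standard", 4), 2))  # ~604 USD for the 4 TB lab pool over a full month
```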

 

Limitations:

  • While ANF is rolling out to more and more regions, since it is discrete physical hardware it doesn’t exist everywhere (yet) and may impact your deployment considerations.
  • ANF does not (yet) support availability zones.
  • Additional resource limitations can be found here: Resource Limits for Azure NetApp Files.

 

Typical Use Cases:

The most common use case for Azure NetApp Files is simple: you need more than 80k IOPS. Now, keep in mind that IOPS isn’t always straightforward. IOPS (Input/Output Operations Per Second) can vary greatly based on the workload – I/O size and access patterns. For example, at a fixed throughput a machine will drive significantly more IOPS with a 4 KB I/O size than with a 64 KB one – all else being constant, 16x more IOPS. Similarly, for a fixed IOPS rate, throughput (e.g. MBps/GBps) will be higher with larger I/O sizes. With that said, if a workload requires incredibly high performance from an application that isn’t designed to run on cloud-native platforms (e.g. Blob Storage APIs), ANF is likely the place it will land. Remember that (as of the time of writing this, January 2021) the most uncached IOPS a single machine can have in Azure is 80,000 (see Part 2 of this blog series).
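
A quick sketch of that relationship between IOPS, I/O size, and throughput (the 80,000 figure is the per-VM uncached cap mentioned above; real VMs also have a separate MBps cap, so the large-block number is purely illustrative):

```python
# Throughput is roughly IOPS x I/O size, so the same IOPS figure means very
# different MB/s depending on block size.
def throughput_mbps(iops: int, io_size_kb: int) -> float:
    return iops * io_size_kb / 1024

print(throughput_mbps(80_000, 4))   # ~312 MB/s at a 4 KB I/O size
print(throughput_mbps(80_000, 64))  # ~5,000 MB/s at 64 KB (a real VM's MBps cap would kick in first)

# Flipping it around: at a fixed throughput, moving from 64 KB to 4 KB I/Os
# requires 16x the IOPS (64 / 4 = 16), which is the multiplier mentioned above.
```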

This comes into play often with very large database systems such as Oracle.

Another typical use case is SAP HANA workloads.

 

The third most common workload for Azure NetApp Files that I’ve found is large Windows Virtual Desktop deployments, using ANF to store user profile data.

 

Pros and Cons:

Okay, here we go with the Pros and Cons for using Azure NetApp Files for your shared storage configuration on Azure.

Pros:

  • Incredibly high performance.
  • SMB and NFS Shares both supported, with Kerberos and AD integration.
  • More performance and capacity than is available on any single IaaS VM.
  • ANF is a PaaS solution with no appliance maintenance overhead.

Cons:

  • While it is deployed in most major regions, it may not be available where you need it yet (submit feedback if this is the case).
  • Does not yet support Availability Zones, Cross-Region Replication is in Preview.

 

Alright, that’s it for Part 4 of this blog series – Azure NetApp Files. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!

 

Shared Storage Options in Azure: Part 2 – IaaS Storage Server


Reading Time: 9 minutes

Recently, I posted “Shared Storage Options in Azure: Part 1 – Azure Shared Disks”, the first post in this 5-part series. Today I’m posting Part 2 – IaaS Storage Server. While this post will be fairly rudimentary in terms of Azure technical complexity, this is most certainly an option to consider for shared storage in Azure, and one that is still fairly common, with a number of configuration options. In this scenario, we will be looking at using a dedicated Virtual Machine to provide shared storage through various methods. As I write subsequent posts in this series, I will update this post with links to each of them.

 

Virtual Machine Configuration Options:

Compute:

While it may not seem vitally important, the VM SKU you choose can impact the storage capabilities you can provide in areas such as disk type, capacity, IOPS, and network throughput. You can view the list of VM SKUs available in Azure at this link. As an example, I’ve clicked into the General Purpose Dv3/Dsv3 series, and you can see there are two tables that show the upper limits of the SKUs in that family.

VM SKU Sizes in Azure

VM Performance Metrics in Azure

In the limits for each VM you can see there are differences between Max Cached and Temp Storage Throughput, Max Burst Cached and Temp Storage Throughput, Max uncached Disk Throughput, and Max Burst uncached Disk Throughput. All of these represent very different I/O patterns, so make sure to look carefully at the numbers.

Below are a few links to read more on disk caching and bursting:

 

You’ll notice when you look at VM SKUs that there is an L-Series, which is “storage optimized”. This may not always be the best fit for your workload, but it does have some amazing capabilities. The outstanding feature of the L-Series VMs is the locally mapped NVMe drives, which as of the time of writing this post can offer, on the L80s_v2 SKU, 19.2TB of storage at 3.8 million IOPS / 20,000 MBps.

Azure LS Series VM Specs

The benefits of these VMs are extremely low latency and high-throughput local storage, but the caveat to that NVMe storage is that it is ephemeral: data on those disks does not persist across a reboot. This makes it incredibly good for serving a local cache, tempdb files, etc., though it’s not storage you can use for things like a file server backend (without some fancy start-up scripts – please don’t do this…). You will note that the maximum uncached throughput for the VM is 80,000 IOPS / 2,000 MBps, which is the same as all of the other high-spec VMs. As I am writing this, no Azure VM allows more uncached throughput than that – including with Ultra Disks (more on that later).

For more information on the LSv2 series, you can read more here: Lsv2-series – Azure Virtual Machines | Microsoft Docs

Additional Links on Azure VM Storage Design:

Networking:

Networking capabilities of the Virtual Machine are also an important design decision when considering shared storage, both for total throughput and for latency. You’ll notice in the VM SKU charts I posted above that there are two networking columns: Max NICs and Expected network bandwidth (Mbps). It’s important to know that these are VM SKU limitations, which may influence your design.

Expected network bandwidth is pretty straightforward, but I want to clarify that the number of network interfaces you attach to a VM does not change this number. For example, if your expected network bandwidth is 3,200 Mbps and you have an SMB share running over a single NIC, adding a second NIC and using SMB Multichannel WILL NOT increase the total bandwidth for the VM. In that case you could expect each NIC to potentially run at 1,600 Mbps.

The last networking feature to take into consideration is Accelerated Networking. This feature enables SR-IOV (Single Root I/O Virtualization), which, by bypassing the host’s virtual switch and offloading network traffic directly to the network interface, can dramatically increase performance by reducing latency, jitter, and CPU utilization.

Accelerated Networking Comparison in Azure

Image Reference: Create an Azure VM with Accelerated Networking using Azure CLI | Microsoft Docs  

Accelerated Networking is not available on every VM though, which makes it an important design decision. It’s available on most General Purpose VMs now, but make sure to check the list of supported instance types. If you’re running a Linux VM, you’ll also need to make sure it’s a supported distribution for Accelerated Networking.  

Storage:

In an obvious step, the next design decision is the storage that you attach to your VM. There are two major decisions when selecting disks for your VM – disk type and disk size.

Disk Types:

Azure VM Disk Types

Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types  

As the table above shows, there are four types of Managed Disks (https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview) in Azure. At the time of writing this, Premium SSD, Standard SSD, and Standard HDD all have a limit of 32TB per disk. The performance characteristics are very different, but I also want to point out the differences in the pricing model, because I see folks make this mistake very often.

Disk Type      Capacity Cost                    Transaction Cost
Standard HDD   Low                              Low
Standard SSD   Medium                           Medium
Premium SSD    High                             None
Ultra SSD      Highest (Capacity/Throughput)    None

Transaction costs can be important on a machine whose sole purpose is to function as a storage server. Make sure you look into this before a passing glance convinces you that a Standard SSD is cheaper than a Premium SSD. For example, here is the Azure Calculator output for a 1 TB disk across all four types, for a workload that averages 10 IOPS: (10 × 60 × 60 × 24 × 30) / 10,000 = 2,592 transaction units per month.
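
Here’s a minimal sketch of that transaction math. The 10,000-operations-per-billing-unit granularity comes from the example above; the per-unit price at the end is purely illustrative, not an actual Azure rate:

```python
# Transaction units consumed by a disk that sustains a given average IOPS.
OPS_PER_TRANSACTION_UNIT = 10_000

def transaction_units_per_month(avg_iops: float, days: int = 30) -> float:
    ops_per_month = avg_iops * 60 * 60 * 24 * days
    return ops_per_month / OPS_PER_TRANSACTION_UNIT

units = transaction_units_per_month(10)
print(units)          # 2592.0 units, matching the calculation above
print(units * 0.002)  # ~5.18 USD/month at a hypothetical $0.002 per unit
```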

Sample Standard Disk Pricing:

Azure Calculator Disk Pricing  

 

Sample Standard SSD Pricing:

Azure Calculator Disk Pricing  

Sample Premium SSD Pricing:

Azure Calculator Disk Pricing

Sample Ultra Disk Pricing:

Azure Calculator Disk Pricing

 

The above is just one example, but you get the idea. Pricing gets strange around Ultra Disks due to the ability to configure performance independently (more on that later), but there is a calculable break-even point between disks that have transaction costs and those with a higher provisioned cost.

For example, if you run an E30 (1024 GB) Standard SSD at full throttle (500 IOPS), the monthly cost will be ~$336, compared to ~$135 for a P30 (1024 GB) Premium SSD – which also gives you roughly 10x the performance.

The second design decision is disk capacity. While this seems like a no-brainer (provision the capacity you need, right?), it’s important to remember that with Managed Disks in Azure, performance scales with, and is tied to, the capacity of the disk.

Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#disk-size-1

You’ll note in the above image that provisioned IOPS and provisioned throughput scale proportionally with the disk size. This is to say that if you need more performance out of your disk, you scale it up and add capacity.

The last note on capacity is this: if you need more than 32TB of storage on a single VM, you simply add another disk and use your mechanism of choice for combining that storage (Storage Spaces, RAID, etc.). The same method can be used to further tweak your total IOPS, but make sure you take the cost, capacity, and performance of each disk into consideration before doing this – most often it’s an insignificant added cost to simply scale up to the next disk size. Last but not least, I want to briefly talk about Ultra Disks – these things are amazing!

Ultra Disk Configuration in Azure

Unlike with the other disk types, this configuration allows you to select your disk size and performance (IOPS AND throughput) independently! I recently worked on a design where the customer needed 60,000 IOPS but only a few TB of capacity – the perfect scenario for Ultra Disks. They were actually able to get more performance for less cost compared to using Premium SSDs.

To conclude this section, I want to note two design constraints when selecting disks for your VM.

  1. The VM SKU is still limited to a certain number of IOPS, amount of throughput, and disk count. The combined performance of your disks cannot exceed the maximum performance of the VM: if the VM SKU supports 10,000 IOPS and you attach 3x 60,000 IOPS Ultra Disks, you will be charged for all three of those Ultra Disks at their provisioned performance tiers but will only ever get 10,000 IOPS out of the VM (see the short sketch after this list).
  2. All of the hardware performance may still be subject to the performance of the access protocol or configuration; more on this in the next section.
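
Here’s a tiny sketch of the cap described in point 1 (the numbers are just the example figures from that point):

```python
# The IOPS you can actually drive is bounded by the VM SKU limit, no matter
# how much disk performance you attach (and pay for).
def effective_iops(vm_sku_limit: int, disk_iops: list[int]) -> int:
    return min(vm_sku_limit, sum(disk_iops))

print(effective_iops(10_000, [60_000, 60_000, 60_000]))  # 10000: paying for 180k, getting 10k
```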

Additional Reading on Storage:

 

Software Configuration and Access Protocols:

As we come to the last section of this post, we get to the area that aligns with the purpose of this blog series – shared storage. In this section I’m going to cover some of the most common configurations and access types for shared storage in IaaS. This is by no means an exhaustive list; rather, it’s what I find most common.

Scale-Out File Server (SoFS):

First up is Scale-Out File Server. This is a software configuration inside Windows Server that is typically used with SMB shares. SoFS was introduced in Windows Server 2012, uses Windows Failover Clustering, and is considered a “converged” storage deployment. It’s also worth noting that this can run on S2D (Storage Spaces Direct), which is the method I recommend with modern Windows Server operating systems. Scale-Out File Server is designed to provide scale-out file shares that are continuously available for file-based server application storage. It provides the ability to share the same folder from multiple nodes of the same cluster, and it can be deployed in two configuration options: for Application Data or for General Purpose. See the additional reading below for the documentation on setup guidance.

Additional reading:

SMB v3:

Now on to the access protocols – SMB has been the go-to file services protocol on Windows for quite some time now. In modern operating systems, SMB v3.x is an absolutely phenomenal protocol. It allows for incredible performance using features like SMB Direct (RDMA), larger MTUs, and SMB Multichannel, which can use multiple NICs simultaneously for the same file transfer to increase throughput. It also has a list of security mechanisms such as pre-authentication integrity, AES encryption, and request signing. There is more information on the SMB v3 protocol below – if you’re interested, or if you still think of SMB the way we did 20 years ago, check it out. The Microsoft SQL Server team even supports hosting databases on remote SMB v3 shares.

Additional reading:

NFS:

NFS has similarly been a staple file server protocol for a long while and, whether you’re running Windows or Linux, can be used in your Azure IaaS VM for shared storage. For organizations that prefer an IaaS route over PaaS, I’ve seen many use this as a cornerstone of their Azure deployments. Additionally, a number of HPC (High Performance Computing) workloads – such as Azure CycleCloud (HPC orchestration) or the popular genomics workflow management system Cromwell on Azure – prefer the use of NFS.

Additional Reading:

iSCSI:

While I would not recommend running custom block storage on top of a VM in Azure if you have a choice, some applications still have this requirement, in which case iSCSI is also an option for shared storage in Azure.

Additional Reading:

That’s it! We’ve reached the end of Part 2. Okay, here we go with the Pros and Cons for using an IaaS Virtual Machine for your shared storage configuration on Azure.

Pros and Cons:

 

Pros:

  • More control, greater flexibility of protocols and configuration.
  • Depending on the use case, potentially greater performance at a lower cost (becoming more and more unlikely).
  • Ability to migrate workloads as-is and use existing storage configurations.
  • Ability to use older, or more “traditional” protocols and configurations.
  • Allows for the use of Shared Disks.

Cons:

  • Significantly more management overhead as compared to PaaS.
  • More complex configurations, and cost calculations compared to PaaS.
  • Higher potential for operational failure with the higher number of components.
  • Broader attack surface, and more security responsibilities.

Alright, that’s it for Part 2 of this blog series – Shared Storage on IaaS Virtual Machines. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!