
Azure App Service Private Link Integration with Azure Front Door Premium


Reading Time: 5 minutes

Last week, Azure Front Door Premium went into public preview. While this did bring some other cool features and integrations, the one I’m most excited about today is the integration with Azure Private Link. Azure Front Door can now make use of Private Link Services (not private endpoints, which are what most people think of when they hear "Private Link"). A Private Link Service allows resource communication between two tenants; one of the most common use cases is a software provider offering private access to a solution running in its own environment. Today I’m going to walk through how to connect Azure Front Door, through Private Link, to an App Service, without an App Service Environment (ASE) and without having to manage Private Link endpoints, DNS, or anything of the sort. I believe this will become the new standard for hosting App Services.

With that, let’s get started! First, we need to create an Azure App Service Web App.

New Azure Web App

 

*Note* At the time of writing this post (03/01/2021), Private Link Service integration requires the App Service to be on a Premium V2 (Pv2) plan.
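If you prefer to script this step, here is a minimal Az PowerShell sketch of the same thing. The resource group, plan, and app names below are placeholders I made up for illustration, so substitute your own.

```powershell
# Minimal sketch with assumed names/region; requires the Az module and Connect-AzAccount
New-AzResourceGroup -Name "rg-afd-demo" -Location "westus2"

# Private Link integration requires a Premium V2 (Pv2) plan at the time of writing
New-AzAppServicePlan -ResourceGroupName "rg-afd-demo" -Name "plan-afd-demo" `
    -Location "westus2" -Tier "PremiumV2" -WorkerSize "Small" -NumberofWorkers 1

New-AzWebApp -ResourceGroupName "rg-afd-demo" -Name "app-afd-demo-001" `
    -Location "westus2" -AppServicePlan "plan-afd-demo"
```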

 

Once the Web App is deployed, grab the URL of the website and test it in a web browser. In this instance I’m not hosting anything in particular, just the sample page, to show that it’s working.

Default App Service Page

 

At this point the web app is created. You might expect to create a Private Endpoint next, but since Azure Front Door Premium uses the Private Link Service functionality, we can let Front Door do that work for us. With that said, let’s go create the Azure Front Door Premium service.

Azure Front Door Premium Creation

 

We need to make sure the Tier is set to the “Premium” SKU. After that radio button is selected, a section appears below with configuration options that aren’t available in the Standard tier. The one we need to check is “Enable private link service”. Once that’s selected, choose the web app with which you want Front Door to establish Private Link connectivity. You can also add a custom message here; this is what will be displayed with the connection request in the Private Link Center in the next step.

Azure Front Door Creation

Azure Front Door Premium Creation
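If you’d rather script the Front Door side, the Premium SKU is simply a property on the profile. This is only a minimal sketch of the profile and public endpoint using the Az.Cdn module (the names are mine); in this walkthrough the origin and the “Enable private link service” setting are configured through the portal wizard itself.

```powershell
# Minimal sketch using the Az.Cdn module; names are placeholders
# Azure Front Door Standard/Premium profiles are global resources
New-AzFrontDoorCdnProfile -ResourceGroupName "rg-afd-demo" -Name "afd-premium-demo" `
    -SkuName "Premium_AzureFrontDoor" -Location "Global"

# Public endpoint that will front the privately connected origin
New-AzFrontDoorCdnEndpoint -ResourceGroupName "rg-afd-demo" -ProfileName "afd-premium-demo" `
    -EndpointName "afd-demo-endpoint" -Location "Global"
```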

 

On the review page, we can see that the endpoint created is an Azure Front Door URL; this will be the public endpoint. The “Origin” is the web app to which Front Door will establish private connectivity.

Final Configuration for Azure Front Door

 

Once Azure Front Door is done deploying, open the Private Link Center and navigate to “Pending connections”. There you will see the connection request from Azure Front Door, along with the message you may or may not have customized. Remember that Azure Front Door uses its own managed Private Link Service to connect to your Web App. You will need to “Approve” the connection request for the connection to be created and for Front Door to communicate privately with your Web App.

Private Link Setup

Private Link Center

Private Link Setup
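If you’d rather approve the request with PowerShell instead of clicking through the Private Link Center, a sketch like this should do it (reusing the assumed names from earlier):

```powershell
# Find the pending connection that Front Door created against the web app
$webApp  = Get-AzWebApp -ResourceGroupName "rg-afd-demo" -Name "app-afd-demo-001"
$pending = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $webApp.Id |
    Where-Object { $_.PrivateLinkServiceConnectionState.Status -eq "Pending" }

# Approve it so Front Door can communicate privately with the web app
Approve-AzPrivateEndpointConnection -ResourceId $pending.Id -Description "Approved for Front Door Premium"
```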

 

After the connection is approved, you will notice that the request disappears from “Pending connections” and now shows as an approved connection. At this point you will also notice that accessing the Web App directly in a browser returns an error, the same way it would if you had added firewall rules on the Web App. This is because the Web App is now configured to allow inbound connections only from Azure Front Door.

Private Link Setup

No Public Access

 

If you want to modify any of the configuration settings, go to the “Endpoint Manager” section of Azure Front Door, where you get the familiar interface shared by Azure Front Door and Application Gateway.

Front Door Configuration

 

In my testing, the time between clicking “Approve” in the Private Link Center and the Web App being available through the Azure Front Door endpoint was anywhere from 15 to 30 minutes. I’m not quite sure why this is the case, though it is likely because the service is still in preview. If you get an error message in the web browser when using the Front Door URL, just grab a cup of coffee and give it some time to do its thing.

Once it’s all done though, you can use the Front Door URL in the web browser and see that it routes you to the App Service!

Azure Front Door Endpoint
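For a quick check from PowerShell rather than the browser, something like the following works; the hostnames below are placeholders for my lab’s URLs, so substitute your own (the Front Door hostname is shown on the endpoint’s overview blade).

```powershell
# -SkipHttpErrorCheck requires PowerShell 7+; without it the error response throws instead
# Direct access to the App Service should now be refused (expect an error such as 403)
Invoke-WebRequest -Uri "https://app-afd-demo-001.azurewebsites.net" -SkipHttpErrorCheck |
    Select-Object StatusCode

# Access through the Front Door Premium endpoint should return 200 once propagation finishes
Invoke-WebRequest -Uri "https://afd-demo-endpoint.z01.azurefd.net" |
    Select-Object StatusCode
```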

 

There we go, all set! This is really a dream configuration, and something a lot of us have been looking forward to for some time. In the past we’ve done something similar with Application Gateways and Private Link endpoints. The beauty of the Front Door Premium solution is that there is no messing around with DNS or infrastructure whatsoever: you can deploy this entire solution as PaaS while taking advantage of Azure Front Door’s global presence!

Click here to get started with Azure Front Door Premium.

If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

DNS Load Balancing in Azure


Reading Time: 3 minutes

This post won’t be too long, but I wanted to expand a bit on the recent repo that I published to GitHub for Azure Load Balanced DNS Servers. I’ve been working in Azure for the better part of a decade, and the way we’ve typically approached DNS is one of two ways: either use (a pair of) IaaS Domain Controllers, or use Azure-provided DNS resolution. In the last year or so there have been an increasing number of architectural patterns that require private DNS resolution where we may not necessarily care about the servers themselves.

This pattern has become especially popular with the requirements for Azure Private Link in hybrid scenarios where on-premises systems need to communicate with Azure PaaS services over private link.

Reference: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder

The only thing the DNS forwarder provides here is very basic DNS forwarding functionality. That’s not to say it can’t be further configured, but the same principles still apply. DNS isn’t something that needs any sort of complex failover during patch windows, but since DNS servers have to be referenced by IP address, we have to be careful about taking them down if there aren’t alternates configured. With a web server we would just put it behind a load balancer, but there don’t seem to be published configurations for a similar setup with DNS servers (other than using a Network Virtual Appliance), since UDP isn’t a supported health probe protocol for Azure Load Balancers. How, then, do we configure a pair of “zero-touch” private DNS servers in Azure?

When asked “What port does DNS use?”, the overwhelming majority of IT professionals will say “UDP 53”. While that is correct, DNS also uses TCP 53. A traditional DNS response over UDP can’t be larger than 512 bytes, and while that suffices for most queries, there are scenarios where it does not; for example, zone transfers (AXFR/IXFR) always run over TCP, and DNSSEC or EDNS responses can exceed 512 bytes and fall back to TCP. This is why the DNS service does, by default, also listen on TCP 53, and that is what we can use as the health probe in the Azure Load Balancer.
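To make that concrete, here is a minimal Az PowerShell sketch of the relevant part of the load balancer: a TCP 53 health probe shared by both a UDP 53 and a TCP 53 rule. The names and IP addressing are placeholders; the template in the repo is the authoritative version.

```powershell
# Assumes an existing VNet/subnet; names and IPs below are placeholders
$subnet   = Get-AzVirtualNetwork -ResourceGroupName "rg-dns" -Name "vnet-dns" |
            Get-AzVirtualNetworkSubnetConfig -Name "snet-dns"

$frontend = New-AzLoadBalancerFrontendIpConfig -Name "fe-dns" -Subnet $subnet `
            -PrivateIpAddress "10.0.0.10"
$backend  = New-AzLoadBalancerBackendAddressPoolConfig -Name "be-dns"

# UDP can't be probed, but the DNS service also listens on TCP 53 by default,
# so a TCP 53 probe serves both rules
$probe = New-AzLoadBalancerProbeConfig -Name "probe-dns-tcp53" -Protocol Tcp -Port 53 `
         -IntervalInSeconds 15 -ProbeCount 2

$ruleUdp = New-AzLoadBalancerRuleConfig -Name "rule-dns-udp53" -Protocol Udp `
           -FrontendPort 53 -BackendPort 53 -FrontendIpConfiguration $frontend `
           -BackendAddressPool $backend -Probe $probe
$ruleTcp = New-AzLoadBalancerRuleConfig -Name "rule-dns-tcp53" -Protocol Tcp `
           -FrontendPort 53 -BackendPort 53 -FrontendIpConfiguration $frontend `
           -BackendAddressPool $backend -Probe $probe

# Internal Standard load balancer; the DNS VMs' NICs are added to the backend pool separately
New-AzLoadBalancer -ResourceGroupName "rg-dns" -Name "lb-dns" -Location "westus" -Sku "Standard" `
    -FrontendIpConfiguration $frontend -BackendAddressPool $backend `
    -Probe $probe -LoadBalancingRule $ruleUdp, $ruleTcp
```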

The solution I’ve published on GitHub (https://github.com/matthansen0/azure-dnslb) contains a template to deploy this solution with the following configuration:

  • Azure Virtual Network
  • 2x Windows Core Servers:
    • Availability Set
    • PowerShell script to configure the DNS Server role (a minimal sketch is shown below the diagram)
    • Forwarder set to the Azure-provided DNS resolver (168.63.129.16)
  • Azure Load Balancer:
    • TCP 53 Health Probe
    • UDP/TCP 53 Listener
Azure DNS Load Balanced Solution
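The per-VM configuration boils down to installing the DNS Server role and pointing its forwarder at the Azure-provided resolver. A minimal sketch of that script, run inside each server, looks like this (the last line is just an optional sanity check):

```powershell
# Run on each Windows Server Core VM; installs the DnsServer module along with the role
Install-WindowsFeature -Name DNS -IncludeManagementTools

# Forward everything these servers can't answer to the Azure-provided resolver,
# which also resolves Azure Private DNS zones linked to the VNet
Set-DnsServerForwarder -IPAddress 168.63.129.16

# Optional: confirm the service is listening on TCP 53 for the load balancer health probe
Get-NetTCPConnection -LocalPort 53 -State Listen
```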

This template does not include patch management, but I would highly recommend using Azure Update Management; that way you can set up auto-patching with alternating reboot schedules for the two servers. With that enabled, this becomes a zero-touch, highly available, private DNS solution for ~$55/mo (assuming D1 v2 VMs; the cost can be lower with a cheaper SKU).

Since I’m talking about DNS here, the last recommendation I’ll make is to go take a look at Azure Defender for DNS, which monitors your DNS queries and can alert you to suspicious activity.

Alright, that’s it! I hope this solution will be helpful, and if there are options or configurations you’d like to see available in the GitHub repository, please feel free to submit an issue or a PR! If you want to deploy it right from here, click the button below!

Deploy to Azure

 

If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

Shared Storage Options in Azure: Part 1 – Azure Shared Disks


Reading Time: 4 minutes

In an IaaS world, shared storage between virtual machines is a common ask. “What is the best way to configure shared storage?” and “What options do we have for sharing storage between these VMs?” are both questions I’ve answered several times, so let’s go ahead and blog some of the options! This first part in the blog series “Shared Storage Options in Azure” will cover Azure Shared Disks.

As I write subsequent posts in this series, I will update this post with the links to each of them.

 

When shared disks were announced in July of 2020, there was quite a bit of excitement in the community. Many applications still leverage shared storage for things like Windows Server Failover Clustering, on which other solutions are built, such as SQL Server Failover Cluster Instances. Also, while I highly recommend using a Cloud Witness, many customers migrating workloads to Azure still rely on a shared disk for quorum. Additionally, many Linux workloads that were previously configured to use a shared virtual disk, or even raw LUN mappings, leverage shared storage for cluster file systems such as GFS2 or OCFS2.

Additional sample workloads for Azure Shared Disks can be found here: Shared Disk Sample Workloads.

There are a few limitations of shared disks, though the list is constantly getting smaller. For now, let’s jump in and see how to deploy them; afterwards, we’ll do a quick “Pros” and “Cons” list before moving on to the other shared storage options. I deployed shared disks in my lab using the portal first (screenshots below), but I also created a GitHub repository (https://github.com/matthansen0/azure-shared-storage-options) with an Azure PowerShell script and an ARM template to deploy a similar environment; feel free to use those if you’d like!

As a prerequisite (not pictured below) I created the following resources:

  • A Resource Group in the West US region
  • A Virtual Network with a single subnet
  • 2x D2s v3, Windows Server 2016 Virtual Machines (VM001, VM002) each with a single OS disk

Now that those are created, I deployed a Managed Disk (named “sharedDisk001”) just like you would if you were deploying a typical data disk.

On the “Advanced” tab you will see the ability to configure the managed disk as a shared disk; here is where you set “Max shares”, which specifies the maximum number of VMs that can attach that particular disk.
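The equivalent Az PowerShell for this step is short. The repository linked above has the full script; this is just a minimal sketch with my lab’s names and an assumed size.

```powershell
# Premium SSD created empty, with maxShares = 2 so both VMs can attach it
$diskConfig = New-AzDiskConfig -Location "westus" -DiskSizeGB 256 `
    -SkuName "Premium_LRS" -CreateOption "Empty" -MaxSharesCount 2

New-AzDisk -ResourceGroupName "rg-shared-disk" -DiskName "sharedDisk001" -Disk $diskConfig
```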

After the disk is finished deploying, we head over to the first VM and attach an existing disk. You’ll note that the disk shows up as a shared disk and shows the number of VMs it is currently shared with. Since it hasn’t been mounted anywhere yet, it shows 0.

After attaching the disk to the first VM, we head over and do the same thing on VM002. You’ll note that the count has increased by 1, since the disk is now mounted on VM001.
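For reference, attaching the shared disk from PowerShell is the same Add-AzVMDataDisk call you’d use for any managed disk, run once per VM. A sketch using the resource names assumed above:

```powershell
$disk = Get-AzDisk -ResourceGroupName "rg-shared-disk" -DiskName "sharedDisk001"

# Attach the same managed disk to both VMs at LUN 0
foreach ($vmName in "VM001", "VM002") {
    $vm = Get-AzVM -ResourceGroupName "rg-shared-disk" -Name $vmName
    $vm = Add-AzVMDataDisk -VM $vm -Name $disk.Name -CreateOption Attach `
          -ManagedDiskId $disk.Id -Lun 0
    Update-AzVM -ResourceGroupName "rg-shared-disk" -VM $vm
}
```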

Great, now the disk is attached to both VMs! Heading over to the managed disk itself you’ll notice that the overview page looks a bit different from typical managed disks, showing information like “Managed by” and “Max Shares”.

In the properties of the disk, we can see the VM owners of that specific disk, which is exactly what we wanted to see after mounting it on each of the VMs.
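You can see the same information from PowerShell as well. A small sketch, with the resource names assumed from above and the property names as I recall them from the Az.Compute disk object:

```powershell
# DiskState shows "Attached", and ManagedByExtended lists every VM sharing the disk
Get-AzDisk -ResourceGroupName "rg-shared-disk" -DiskName "sharedDisk001" |
    Select-Object Name, DiskState, MaxShares, ManagedBy, ManagedByExtended
```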

Although I set up this configuration using Windows machines, you’ll notice I didn’t go into the OS. That’s because the process, from an Azure perspective, is the same for Linux as it is for Windows VMs. Of course, it will differ within the OS, but there is nothing Azure-specific about that aspect.

Okay, here we go, the Pros and Cons:

Pros:

  • Azure Shared Disks allow for the use of what is considered to be “legacy clustering technology” in Azure.
  • Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-out File Server, and Linux Pacemaker/Corosync.
  • Premium and Ultra Disks are supported so performance shouldn’t be an issue in most cases.
  • Supports SCSI Persistent Reservations.
  • Fairly simple to setup.

Cons:

  • Does not scale well, similar to what would be expected with a SAN mapping.
  • Only certain disk types are supported.
  • ReadOnly host caching is not available for Premium SSDs with maxShares >1.
  • When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
  • Azure Backup not yet supported.
  • Azure Site Recovery not yet supported.

Alright, that’s it for Azure Shared Disks! Go take a look at my GitHub repository and give shared disks a shot!

Please reach out to me in the comments, LinkedIn, or Twitter with any questions or comments about this blog post or this series.