
Azure Site-to-Site VPN with a Palo Alto Firewall


Reading Time: 9 minutes

In the past, I’ve written a few blog posts about setting up different types of VPNs with Azure.

Since the market is now full of customers running Palo Alto firewalls, today I want to blog about how to set up a Site-to-Site (S2S) IPSec VPN to Azure from an on-premises Palo Alto firewall. For the content in this post I’m running PAN-OS 10.0.0.1 on a VM-50 in Hyper-V, but the tunnel configuration will be more or less the same across deployment types (though if it changes in a newer version of PAN-OS, let me know in the comments and I’ll update the post).

Alright, let’s jump into it! The first thing we need to do is set up the Azure side of things, which means starting with a virtual network (vnet). A virtual network is a regional networking concept in Azure, which means it cannot span multiple regions. I’m going to use “East US” below, but you can use whichever region makes the most sense for your business, since the core networking capabilities shown below are available in all Azure regions.

With this configuration I’m going to use 10.0.0.0/16 as the overall address space in the Virtual Network, and I’m going to configure two subnets. The “hub” subnet is where I will host any resources; in my case, I’ll be hosting a server there to test connectivity across the tunnel. “GatewaySubnet” is a required name for the subnet that will later house our Virtual Network Gateway (the PaaS VPN appliance). This subnet could be created later in the portal interface for the Virtual Network (I used this method in my pfSense VPN blog post), but I’m creating it ahead of time. Note that the subnet name is case sensitive. The gateway subnet does not need a full /24 (see the requirements for the subnet here); a smaller one will do for my quick demo environment.
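If you’d rather script the Azure side, here’s a minimal Azure PowerShell sketch of the same layout. The resource group name (“vpn-rg”) and the subnet prefixes are placeholder lab values; the only name that actually matters is “GatewaySubnet”.

  # Resource group and virtual network with the "hub" and "GatewaySubnet" subnets.
  # "vpn-rg" and the prefixes are lab values; only the "GatewaySubnet" name is required.
  New-AzResourceGroup -Name "vpn-rg" -Location "EastUS"

  $hubSubnet = New-AzVirtualNetworkSubnetConfig -Name "hub" -AddressPrefix "10.0.1.0/24"
  $gwSubnet  = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27"

  New-AzVirtualNetwork -Name "vpn-vnet" -ResourceGroupName "vpn-rg" -Location "EastUS" `
      -AddressPrefix "10.0.0.0/16" -Subnet $hubSubnet, $gwSubnet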

Now that we have the Virtual Network deployed, we need to create the Virtual Network Gateway. You’ll notice that once we choose to deploy it in the “vpn-vnet” network that we created, it automatically recognizes the “GatewaySubnet” and will deploy into that subnet. Here we choose a VPN gateway type, and since I’ll be using a route-based VPN, select that configuration option. I won’t be using BGP or an active-active configuration in this environment, so I’ll leave those disabled. Validate and create the VPN Gateway, which will serve as the VPN appliance in Azure. This deployment typically takes 20-30 minutes, so go grab a cup of coffee and check those dreaded emails.
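Here’s roughly what that same gateway deployment looks like in Azure PowerShell. The gateway SKU (VpnGw1) and the public IP settings are assumptions for a small lab; depending on the SKU you pick, you may need a Standard static public IP instead.

  # The gateway needs its own public IP (allocation method depends on the SKU you pick).
  $gwPip = New-AzPublicIpAddress -Name "vpn-vng-pip" -ResourceGroupName "vpn-rg" `
      -Location "EastUS" -AllocationMethod Dynamic

  $vnet  = Get-AzVirtualNetwork -Name "vpn-vnet" -ResourceGroupName "vpn-rg"
  $gwSub = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
  $ipCfg = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpConfig" -SubnetId $gwSub.Id -PublicIpAddressId $gwPip.Id

  # Route-based VPN gateway, no BGP, no active-active; this is the 20-30 minute step.
  New-AzVirtualNetworkGateway -Name "vpn-vng" -ResourceGroupName "vpn-rg" -Location "EastUS" `
      -IpConfigurations $ipCfg -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1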

Alright, now that the Virtual Network Gateway is created, we want to create a “Connection” to configure the settings needed on the Azure side for the site-to-site VPN.

Here we’ll name the connection, set the connection type to “Site-to-Site (IPSec)”, set a PSK (please don’t use “SuperSecretPassword123”…) and set the IKE protocol to IKEv2. You’ll notice that you need to set a Local Network Gateway; we’ll do that next.

Let’s go configure a new Local Network Gateway (LNG). The LNG is a resource object that represents the on-premises side of the tunnel. You’ll need the public IP of the Palo Alto firewall (or whatever NAT device sits in front of it), as well as the local network that you want to advertise across the tunnel to Azure.
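In PowerShell the LNG is a one-liner; the public IP and address prefix below are placeholders for your firewall’s public IP and your on-premises network.

  # The LNG represents the on-premises (Palo Alto) side: its public IP and the local network to advertise.
  New-AzLocalNetworkGateway -Name "onprem-lng" -ResourceGroupName "vpn-rg" -Location "EastUS" `
      -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.1.0/24"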

Once that’s complete we can finish creating the connection, and see that it now shows up as a site-to-site connection on the Virtual Network Gateway; since the other side isn’t set up yet, the status shows as unknown. If you go to the “Overview” tab, you’ll notice it has the IP of the LNG you created as well as the public IP of the Virtual Network Gateway – you will want to copy this down, as you’ll need it when you set up the IPSec tunnel on the Palo Alto.
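For reference, here’s a sketch of that same connection in PowerShell, plus a quick way to pull the gateway’s public IP so you have it handy for the Palo Alto config. The connection name and key are placeholders.

  $vng = Get-AzVirtualNetworkGateway -Name "vpn-vng" -ResourceGroupName "vpn-rg"
  $lng = Get-AzLocalNetworkGateway -Name "onprem-lng" -ResourceGroupName "vpn-rg"

  # Site-to-site IPsec connection over IKEv2; use a real pre-shared key here.
  New-AzVirtualNetworkGatewayConnection -Name "azure-to-onprem" -ResourceGroupName "vpn-rg" `
      -Location "EastUS" -VirtualNetworkGateway1 $vng -LocalNetworkGateway2 $lng `
      -ConnectionType IPsec -ConnectionProtocol IKEv2 -SharedKey "<your-psk>"

  # The gateway's public IP is the peer address you'll enter on the Palo Alto.
  (Get-AzPublicIpAddress -Name "vpn-vng-pip" -ResourceGroupName "vpn-rg").IpAddress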

Alright, things are just about done on the Azure side. The last thing I want to do is kick off the deployment of a VM in the “hub” subnet that we can use to test the functionality of the tunnel. I’m going to deploy a cheap B1s Ubuntu VM. It doesn’t need a public IP, and a basic Network Security Group (NSG) will do, since there is a default rule that allows all traffic from inside the Virtual Network (traffic sourced through the Virtual Network Gateway included).
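If you want to script the test VM too, something like the following should work; the VM name, image SKU, and credentials are placeholder values, and no public IP is attached to the NIC.

  # NIC in the "hub" subnet with no public IP.
  $vnet = Get-AzVirtualNetwork -Name "vpn-vnet" -ResourceGroupName "vpn-rg"
  $hub  = Get-AzVirtualNetworkSubnetConfig -Name "hub" -VirtualNetwork $vnet
  $nic  = New-AzNetworkInterface -Name "testvm-nic" -ResourceGroupName "vpn-rg" -Location "EastUS" -SubnetId $hub.Id

  # Small Ubuntu VM purely for testing connectivity across the tunnel.
  $cred  = Get-Credential
  $vmCfg = New-AzVMConfig -VMName "testvm" -VMSize "Standard_B1s" |
      Set-AzVMOperatingSystem -Linux -ComputerName "testvm" -Credential $cred |
      Set-AzVMSourceImage -PublisherName "Canonical" -Offer "UbuntuServer" -Skus "18.04-LTS" -Version "latest" |
      Add-AzVMNetworkInterface -Id $nic.Id

  New-AzVM -ResourceGroupName "vpn-rg" -Location "EastUS" -VM $vmCfg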

Now that the test VM is deploying, let’s go configure the Palo Alto side of the tunnel. The first thing you’ll need to do is create a tunnel interface (Network -> Interfaces -> Tunnel -> New). In accordance with best practices, I created a new Security Zone specifically for Azure and assigned the tunnel interface to it. You’ll note that this creates a subinterface that we’ll be referencing later. I’m just using the default virtual router for this lab, but you should use whatever makes sense in your environment.

Next we need to create an IKE Gateway. Since we set the Azure VNG to use IKEv2, we can use that setting here as well. You want to attach the IKE Gateway to the publicly-facing interface; in my case it is ethernet1/2, but your configuration may vary. Typically you’ll have the IP address of the interface as an object and can select it in the box below, but in my case my WAN interface is using DHCP from my ISP, so I leave it as “none”.

It is important to point out, though, that if your Palo Alto doesn’t have a public IP and is behind some other device providing NAT, you’ll want to use the uplink interface and select the “local IP address” private IP object of that interface. I suspect this is an unlikely scenario, but I’ll call it out just in case.

The peer address is the public IP address of the Virtual Network Gateway of which we took note a few steps prior, and the PSK is whatever we set on the connection in Azure. Lastly, make sure the Liveness Check is enabled on the Advanced Options Screen.

Next we need an IPSec Crypto Profile. AES-256-CBC is a supported algorithm for Azure Virtual Network Gateways, so we’ll use that along with SHA1 authentication, and set the lifetime to 8400 seconds, which is longer than the lifetime on the Azure VNG, so Azure will be the one renewing the keys.

Now we put it all together, create a new IPSec Tunnel and use the tunnel interface we created, along with the IKE Gateway and IPSec Crypto Profile.

Now that the tunnel is created, we need to make the appropriate configuration changes to allow routing across the tunnel. Since I’m not using dynamic routing in this environment, I’ll go add a static route to the virtual router I’m using, sending the address space we created in Azure out the tunnel interface.

Great! At this point I went ahead and grabbed the IP of the Ubuntu VM I created earlier (which was 10.0.1.4) and did a ping test. Unfortunately, the pings all failed. What’s missing?

Yes, yes, I did commit the changes (forgetting that always seems to get me), but after looking at the traffic logs I can see the deny action taking place on the default interzone security policy. I could have left this part out, but hey, now if it doesn’t work perfectly the first time for you, you can be assured you’re in good company.

Alright, if you recall, we created the tunnel interface in its own Security Zone, so I’ll need to create a Security Policy from my internal zone to the Azure zone. You can use whatever profiles you need here; I’m just going to completely open interzone communication between the two for my lab environment. If you want machines in Azure to be able to initiate connections as well, remember that you’ll need to modify the rule to allow traffic in that direction too.

Here we go, now I should have everything in order. Let’s kick off another ping test and check a few things to make sure that the tunnel came up and shows connected on both sides. It looks like the new Allow Azure Security Policy is working, and I see my ping application traffic passing!

Before I go pull up the Windows Terminal screen, I want to quickly check the tunnel status on both sides.
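On the Azure side, you can do the same check from PowerShell instead of the portal; the byte counters are a nice sanity check that traffic is actually moving. The connection name below is the placeholder from the earlier sketch.

  # The connection should show "Connected" once the tunnel is up, and the byte counters should start climbing.
  Get-AzVirtualNetworkGatewayConnection -Name "azure-to-onprem" -ResourceGroupName "vpn-rg" |
      Select-Object ConnectionStatus, IngressBytesTransferred, EgressBytesTransferred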

Success!!! Before I call it, I want to try a couple more things, so I’ll SSH into the Ubuntu VM, install Apache, edit the default web page, and open it in a local browser.

At this point I do want to call out the troubleshooting capabilities of the Azure VPN Gateway. There is a “VPN Troubleshoot” feature, part of Azure Network Watcher, that’s built into the view of the VPN Gateway. You can select the gateway on which you’d like to run diagnostics, select a storage account where it will store the sampled data, and let it run. If there are any issues with the connection, this will list them out for you. It will also capture some specifics of the connection itself; if you want to dig into those, you can look at the files written to the blob storage account after the troubleshooting action is complete to get information like packets, bytes, current bandwidth, peak bandwidth, last connected time, and CPU utilization of the gateway. For further troubleshooting tips you can also visit the documentation on troubleshooting site-to-site VPNs with Azure VPN Gateways.
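If you prefer to kick that off from PowerShell rather than the portal, something like the sketch below should do it, assuming a Network Watcher instance already exists in the region and you substitute your own storage account and container name.

  # Run the Network Watcher VPN troubleshoot against the connection; detailed logs land in the storage container.
  $nw   = Get-AzNetworkWatcher -Location "EastUS"
  $conn = Get-AzVirtualNetworkGatewayConnection -Name "azure-to-onprem" -ResourceGroupName "vpn-rg"
  $sa   = Get-AzStorageAccount -ResourceGroupName "vpn-rg" -Name "<storage-account-name>"

  Start-AzNetworkWatcherResourceTroubleshooting -NetworkWatcher $nw -TargetResourceId $conn.Id `
      -StorageId $sa.Id -StoragePath "$($sa.PrimaryEndpoints.Blob)vpn-troubleshoot"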

That’s it, all done! The site-to-site VPN is all set up. The VPN Gateway in Azure makes the process very easy, and the Palo Alto side isn’t too bad either once you know what’s needed for the configuration.

If you have any questions, comments, or suggestions for future blog posts, please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

Shared Storage Options in Azure: Part 1 – Azure Shared Disks


Reading Time: 4 minutes

In an IaaS world, shared storage between virtual machines is a common ask. “What is the best way to configure shared storage?” and “What options do we have for sharing storage between these VMs?” are both questions I’ve answered several times, so let’s go ahead and blog some of the options! The first part in this blog series, “Shared Storage Options in Azure”, will cover Azure Shared Disks.

As I write subsequent posts in this series, I will update this post with the links to each of them.


When shared disks were announced in July of 2020, there was quite a bit of excitement in the community. There are so many applications that still leverage shared storage for things like Windows Server Failover Clustering, on which many applications are built, such as SQL Server Failover Cluster Instances. Also, while I highly recommend using a Cloud Witness, many customers migrating workloads to Azure still rely on a shared disk for quorum as well. Additionally, many Linux applications that were previously configured to use a shared virtual disk, or even raw LUN mappings, leverage shared storage for cluster filesystems such as GFS2 or OCFS2.

Additional sample workloads for Azure Shared Disks can be found here: Shared Disk Sample Workloads.

There are a few limitations of shared disks, though the list is constantly getting smaller. For now, let’s just go ahead and jump into it and see how to deploy them, after which we’ll do a quick “Pros” and “Cons” list before moving on to the other shared storage options. I deployed shared disks in my lab using the portal first (screenshots below), but also created a GitHub repository (https://github.com/matthansen0/azure-shared-storage-options) with an Azure PowerShell script and an ARM template to deploy a similar environment – feel free to use those if you’d like!

As a prerequisite (not pictured below) I created the following resources:

  • A Resource Group in the West US region
  • A Virtual Network with a single subnet
  • 2x D2s v3, Windows Server 2016 Virtual Machines (VM001, VM002) each with a single OS disk

Now that those are created, I deployed a Managed Disk (named “sharedDisk001”) just like you would if you were deploying a typical data disk.

On the “Advanced” tab you will see the ability to configure the managed disk as a shared disk; here is where you set the max shares value, which specifies the maximum number of VMs that can attach that particular disk.
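The equivalent in Azure PowerShell (the full script is in the GitHub repo linked above) looks roughly like this; the resource group name and disk size are placeholder values from my lab.

  # Empty Premium SSD data disk with maxShares set to 2 so both VMs can attach it.
  $diskCfg = New-AzDiskConfig -Location "WestUS" -DiskSizeGB 1024 `
      -AccountType Premium_LRS -CreateOption Empty -MaxSharesCount 2

  New-AzDisk -ResourceGroupName "shared-disk-rg" -DiskName "sharedDisk001" -Disk $diskCfg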

After the disk is finished deploying, we head over to the first VM and attach an existing disk. You’ll note that the disk shows up as a “shared disk” and shows the number of shares currently used on that disk. Since this is the first time it’s being attached, it shows 0.

After attaching the disk to the first VM, we head over and do the same thing on VM002. You’ll note that the number of shares has increased by 1 since we have now mounted the disk on VM001.
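Scripted, the attach step looks something like this for both VMs; the LUN and resource group name are assumptions from my lab setup.

  # Attach the shared disk to both VMs; each attachment uses one of the disk's shares.
  $disk = Get-AzDisk -ResourceGroupName "shared-disk-rg" -DiskName "sharedDisk001"

  foreach ($vmName in "VM001", "VM002") {
      $vm = Get-AzVM -ResourceGroupName "shared-disk-rg" -Name $vmName
      $vm = Add-AzVMDataDisk -VM $vm -Name $disk.Name -CreateOption Attach -ManagedDiskId $disk.Id -Lun 0
      Update-AzVM -ResourceGroupName "shared-disk-rg" -VM $vm
  }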

Great, now the disk is attached to both VMs! Heading over to the managed disk itself you’ll notice that the overview page looks a bit different from typical managed disks, showing information like “Managed by” and “Max Shares”.

In the properties of the disk, we can see the VM owners of that specific disk, which is exactly what we wanted to see after mounting it on each of the VMs.
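You can pull the same ownership information from PowerShell; the ManagedByExtended property on the disk lists every VM the shared disk is currently attached to (assuming your Az.Compute version surfaces it).

  # Lists the VM(s) the shared disk is currently attached to.
  (Get-AzDisk -ResourceGroupName "shared-disk-rg" -DiskName "sharedDisk001").ManagedByExtended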

Although I set up this configuration using Windows machines, you’ll notice I didn’t go into the OS. That’s because the process, from an Azure perspective, is the same with Linux as it is with Windows VMs. Of course, it will be different within the OS, but there is nothing Azure-specific about that aspect.

Okay, here we go, the Pros and Cons:

Pros:

  • Azure Shared Disks allow for the use of what is considered to be “legacy clustering technology” in Azure.
  • Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-out File Server, and Linux Pacemaker/Corosync.
  • Premium and Ultra Disks are supported so performance shouldn’t be an issue in most cases.
  • Supports SCSI Persistent Reservations.
  • Fairly simple to set up.

Cons:

  • Does not scale well, similar to what would be expected with a SAN mapping.
  • Only certain disk types are supported.
  • ReadOnly host caching is not available for Premium SSDs with maxShares >1.
  • When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
  • Azure Backup not yet supported.
  • Azure Site Recovery not yet supported.

Alright, that’s it for Azure Shared Disks! Go take a look at my GitHub repository and give shared disks a shot!

Please reach out to me in the comments, LinkedIn, or Twitter with any questions or comments about this blog post or this series.