Backup
Delete Azure Recovery Vault Backups Immediately
Reading Time: 3 minutes

If you’re like many others, over the past few months you’ve noticed that if you configure Azure Backup, you can’t delete the vault until 14 days after you stop backups. This is due to Soft Delete for Azure Backup. It doesn’t cost anything to keep those backups during that time, and it’s honestly a great safeguard against accidentally deleting backups, with the option to “undelete”. Though, in some cases (mostly in lab environments) you just want to clear it out (or, as was affectionately noted by a colleague of mine, “nuke it from orbit”). Let’s walk through how to do that real quick.
When you go to stop backups and delete the data, you’ll get the warning “The restore points for this backup item have been deleted and retained in a soft delete state”, and you have to wait 14 days to fully delete those backups. You’ll also get an email alert letting you know.


To remove these backups immediately we need to disable soft delete, which is a configuration setting on the Recovery Services Vault. DO NOT DO THIS UNLESS YOU ABSOLUTELY MUST. As previously noted, this is a great safeguard to have in place, and I would also suggest using ARM Resource Locks in production environments in addition to soft delete. If you’re sure though, we can go turn it off.
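If you’d rather script this, soft delete can be toggled with the Az PowerShell module. A sketch, assuming the Az.RecoveryServices module and a signed-in session (Connect-AzAccount); the resource group and vault names are placeholders:

```powershell
# Assumes Az.RecoveryServices is installed and you're signed in via Connect-AzAccount.
# "MyResourceGroup" and "MyVault" are placeholder names.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "MyResourceGroup" -Name "MyVault"

# Disable soft delete on the vault (again: only if you absolutely must)
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable
```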

Alright, now that we’ve disabled Soft Delete for the vault, we have to commit the delete operation again. This means we’ll first need to “undelete” the backup, then delete it again; this time it won’t be subject to the soft delete policy.

Now we can go delete it again, after which we’ll find there are no backup items left in the vault.
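The undelete-then-delete sequence can also be scripted. Another sketch with Az.RecoveryServices, assuming a single soft-deleted Azure VM backup item; the lookup names are placeholders:

```powershell
# Assumes Az.RecoveryServices and a signed-in session; placeholder names throughout.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "MyResourceGroup" -Name "MyVault"
Set-AzRecoveryServicesVaultContext -Vault $vault

# Find the soft-deleted VM backup item
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

# Undelete it, then delete it again; this time it's not subject to soft delete
Undo-AzRecoveryServicesBackupItemDeletion -Item $item
Disable-AzRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -Force
```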


Success!!! The backup is fully deleted. So long as there are no other dependencies (policies, infrastructure, etc.) you can now delete the vault.
If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, Twitter, or LinkedIn.
Thanks!
Changing Azure Recovery Services Vault to LRS Storage
Reading Time: 2 minutes

Back in the classic portal with backup services this was an easy fix: simply change the storage replication type in the settings. I’ve recently started moving my workloads to Recovery Services Vaults in ARM, and noticed something peculiar. By default, the storage replication type of the vault is GRS.
If your needs require geographically redundant storage, then that’s perfectly fine. I, however, don’t have such needs, and trust in Microsoft’s ability to keep data generally available in an LRS replication topology. It should be just like it was in classic, as an option anyways, right? Strangely, the option to change the replication type in the vault’s storage configuration is grayed out.
Odd, right? I thought so, until I found this.
Okay, well it’s not optimal, but it looks like I just need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot and no go. I had the same issue; the option was still grayed out.
I ultimately had to completely delete, and create a new recovery services vault. Once it’s initially created you can change the replication type.
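Once the new vault exists (and before anything is protected in it), the replication type can also be set with PowerShell. A sketch with Az.RecoveryServices; the names are placeholders:

```powershell
# Assumes Az.RecoveryServices and a signed-in session; placeholder names.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "MyResourceGroup" -Name "MyNewVault"

# This only works while the vault has no protected items yet
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant

# Confirm the change
Get-AzRecoveryServicesBackupProperty -Vault $vault
```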
Ah, finally! Then register the VM(s), run some backup jobs and voila! Confirmation that the vault is using LRS storage.
I hope this makes your day at least a little bit easier.
Thanks,
VMware Shared Raw Device Mapped Disk
Reading Time: 3 minutes

The purpose of this configuration is to decrease the time for large SQL backups in VMware virtual machines that are being backed up by VEEAM. In our scenario we have a SQL Server and a File Server. We want to mount this disk in physical compatibility mode on the SQL Server, to decrease backup time by contacting the LUN on the SAN directly. Since RDM disks are independent, we want to mount the same volume in virtual compatibility mode on the File Server so that it can be backed up by VEEAM.
For further detail on RDM, please reference the following documentation.
http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf
1. Configuring the SQL Server RDM in Physical Compatibility Mode
Here are the general steps to configuring RDM for physical and virtual compatibility mode.
- Create a LUN on the backend storage device.
- Rescan for storage devices to confirm the LUN shows up correctly. For this documentation I’m using a 15GB volume.
- Once you’ve created that, go add a new hard disk. When you choose your disk type, choose “Raw Device Mappings”, and then select the LUN that was created earlier.
- Next choose a datastore that’s on the SAN that other VMs can access.
- Select a new virtual device node that resides on a new SCSI controller. I picked SCSI (3:0). Upon doing that a new SCSI controller will be created; then finish creating the disk.
- You must now change the newly created SCSI controller type to “LSI Logic SAS” and change the “SCSI Bus Sharing” to “Physical”.
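The steps above can also be sketched in PowerCLI, if you’d rather script it. This assumes the VMware PowerCLI module and a vCenter connection; the server, VM name, and LUN device path are placeholders (grab the real naa. path from the host’s storage devices view):

```powershell
# VMware PowerCLI sketch; placeholder names and device path throughout.
Connect-VIServer -Server "vcenter.example.local"

$vm  = Get-VM -Name "SQL01"
$lun = "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"   # placeholder LUN path

# Add the RDM in physical compatibility mode
$rdm = New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun

# Give it its own LSI Logic SAS controller with physical bus sharing
New-ScsiController -HardDisk $rdm -Type VirtualLsiLogicSAS -BusSharingMode Physical
```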
2. Configuring the File Server RDM in Virtual Compatibility Mode
At this point, we’ve created a LUN and a RAW mapping to the SQL Server virtual machine. Now it needs to be mapped to the File Server virtual machine so it can be picked up by the VEEAM backup.
- Edit the settings of the File Server virtual machine, and add a new hard disk.
- When creating this new hard disk, select “Use an existing virtual disk” and point to the datastore where the RDM was mapped in the last step.
- Choose a virtual device node that is on a different SCSI controller than the other disks; I chose SCSI (3:2).
- You must now change the newly created SCSI controller type to “LSI Logic SAS” and change the “SCSI Bus Sharing” to “Physical”.
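These steps can likewise be sketched in PowerCLI; the VM name and the datastore path to the mapping file created earlier are placeholders:

```powershell
# VMware PowerCLI sketch; placeholder names and paths throughout.
$fileServer = Get-VM -Name "FS01"

# Attach the existing RDM mapping file created on the SQL Server side
$disk = New-HardDisk -VM $fileServer -DiskPath "[SharedDatastore] SQL01/SQL01_1.vmdk"

# Same controller settings as on the SQL Server
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical
```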
At this point, we’ve created a LUN that has been mapped RAW to a SQL Server. That SQL Server can perform its backups to that disk, which decreased backup times by about 20% in my testing. The File Server virtual machine and the SQL Server virtual machine both now have SCSI adapters with bus sharing enabled, and thus the disk is also mapped to the File Server. It is mapped there in virtual compatibility mode (inherent in adding an “existing virtual disk”), which means it’s persistent and can be backed up by VEEAM.
I hope I’ve made your day, at least a little bit easier.
Config File Iteration Backup – Change Checking Config Files
Reading Time: 2 minutes

In a lot of environments with developers that use a lot of config files, it would sometimes be nice to keep older versions of those files. Fortunately Microsoft has graced us with shadow copies, so we can have “Previous Versions”. The only issue with that is you can’t turn on shadow copies (as far as I know) for specific files. So what I did was write a PowerShell script to take care of that, in a round-about way.
What this script does is wait until a file has been modified, then copy it to an “archive” location and time stamp it so you can review older copies.
At the beginning of the script there are two arrays of full paths to files. “$OriginalPath” is the array that holds the full path to each file you want to watch; in the script here the two files I’m watching are “C:\configs\config1.txt” and “C:\configs\config2.txt”. The second array is where you want to archive the files to; in the script here it’s “C:\archive_configs\config1.txt” and “C:\archive_configs\config2.txt”.
After the arrays are initialized, the script takes the current time minus one minute and compares it to the file’s last-write time. If the file has been modified within that window, it’s copied to the archive location and the copy’s name is stamped with the time. It then loops back through if there are more files to check in the array.
What I’ve done is put this in Task Scheduler to run every minute. If you want to modify that interval, take the line:
$1MinAgo = (get-date).AddMinutes(-1)
and modify the “(-1)” value in the “AddMinutes” call to change how far back it checks.
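For reference, here’s a minimal sketch of the logic described above, wrapped in a function so the paths are parameters rather than hard-coded (the function name is mine, not from the gallery script):

```powershell
# Minimal sketch: archive any watched file modified in the last minute.
function Backup-ModifiedConfigs {
    param(
        [string[]]$OriginalPath,   # files to watch
        [string[]]$ArchivePath     # where each file's copies go
    )
    $1MinAgo = (Get-Date).AddMinutes(-1)
    for ($i = 0; $i -lt $OriginalPath.Count; $i++) {
        $file = Get-Item -LiteralPath $OriginalPath[$i] -ErrorAction SilentlyContinue
        # Only archive if the file changed within the last minute
        if ($file -and $file.LastWriteTime -gt $1MinAgo) {
            $stamp = Get-Date -Format "yyyyMMdd-HHmmss"
            Copy-Item -LiteralPath $OriginalPath[$i] -Destination "$($ArchivePath[$i]).$stamp"
        }
    }
}
```

Run it on a schedule (Task Scheduler every minute) and each change produces a time-stamped copy next to the archive path.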
https://gallery.technet.microsoft.com/Config-File-Iteration-ab2a69df
I hope I’ve made your day a little bit easier!
STOP Blue Screen Error on VMWare when using WinPE or WAIK
Reading Time: < 1 minute

This past weekend I was invoking my disaster recovery plan for a system of mine. I went to boot the .iso to run the restore (CA ArcServ D2D Bootkit) and kept getting this error. Under pressure as production hours quickly approached, I had to figure it out.
*** STOP: 0x0000005D (0x000000000FABBBFF, 0x0000000000000000, 0x0000000000000000,0x0000000000000000)
Of course this is extremely frustrating when in a DR situation. So here is the quick, and simple answer.
This error occurs when the machine you’ve created in VMware is set to a 32-bit guest architecture while you’re attempting to boot into a 64-bit environment. Power down your VM, edit the settings as shown below to x64, and you’ll be all set!
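For reference, in the VM’s .vmx file this setting corresponds to the guestOS key. An example entry (the exact value depends on your guest OS version; the point is the “-64” suffix selecting a 64-bit guest type):

```
# Example .vmx entry; value shown is for a 64-bit Windows Server guest
guestOS = "windows7srv-64"
```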
Now you’ll be able to boot up with no issues at all. I hope I’ve made your day at least a little bit easier!
Symantec Backup Completed with Exceptions oem13.inf
I was recently given this error in a backup leveraging Symantec Backup Exec 2010 R2. I noticed that it wasn’t failing, but was “Completing with Exceptions”. Upon investigation of the job log I found the errors above and below.
Upon research I found that in this version of Backup Exec (13.0) against this version of Windows (2008 R2), VSS looks for the two files when they are not there, then fails and says they were not included in the backup.
Fantastic. Easy fix. There are two ways you can do this. One is to go into “C:\Windows\INF\” and make a blank text file named oem13.inf, then another named oem14.inf. The operating system won’t ever utilize them, but it will calm the unwarranted errors in Backup Exec.
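If you’d like to script that first option, here’s a small sketch (the function name is mine); run it from an elevated prompt on the affected server:

```powershell
# Create the two empty placeholder .inf files if they don't already exist.
function New-PlaceholderInf {
    param([string]$InfDir = "C:\Windows\INF")
    foreach ($name in "oem13.inf", "oem14.inf") {
        $path = Join-Path $InfDir $name
        # Only create the file if it's actually missing
        if (-not (Test-Path $path)) {
            New-Item -Path $path -ItemType File | Out-Null
        }
    }
}
```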
The other way to remedy this is to add two simple exceptions into the backup.
Launch the Backup Exec console, find your job in “Job Monitor”, and edit the include/exclude under Source -> Selections. Add the path “C:\Windows\INF” and the file “OEM13.INF”, then do this again for “OEM14.INF” as above.
All things considered, a very easy fix. I prefer the second option so that you’re not cluttering the critical areas of the file system.
Hope I’ve made your day a little easier!