
Restore prerequisites and considerations

 

Phoenix Editions: ✓ Business   ✗ Enterprise   ✓ Elite

 

Considerations for restoring a virtual machine

  • Hot snapshots reside on CloudCache for the period that you specified when configuring CloudCache.

  • If you are a group administrator, you can only restore data to a virtual machine that belongs to an administrative group that you manage. Cloud administrators and data protection officers can restore virtual machines across groups. 

  • In the event of a network connection failure during a restore, backup proxies attempt to reconnect to Druva Cloud. After connectivity is restored, backup proxies resume restores from the state in which they were interrupted.

  • If you restart or reboot the backup proxy during a restore, the restore operation changes to the scheduled state and resumes after the virtual machine is up and running.

  • Although Druva backs up VMX files along with the VMDK files, you can restore the VMDK files only.

  • The Restore Points follow the backup proxy pool time zone.

  • If you choose to restore a virtual disk, Druva creates a new virtual machine with minimum configuration and associates the VMDK files that you selected to it.

  • Druva supports restore of virtual machines to a different ESXi hypervisor, as well as the source hypervisor associated with a vCenter Server on which you installed the backup proxy.

  • A Thick Provisioned Lazy Zeroed disk is restored as a Thick Provisioned Lazy Zeroed disk.

  • A Thick Provisioned Eager Zeroed disk is restored as a Thick Provisioned Eager Zeroed disk.

  • Thin provisioned VMDK files are restored as Thin disks.

  • CBT status remains unchanged if a virtual machine is restored to the original location. If a virtual machine is restored to an alternate location, CBT is disabled. 

  • Druva supports restore of RDM virtual mode disks (vRDM) as VMDK files.

  • If a virtual machine is associated with disks that are configured in different modes, for example, Independent Persistent, Druva restores only those disks for which the mode is supported.

  • In case of a virtual machine restore to the original location, Druva restores the virtual machine to its original network configuration. However, for a virtual disk restore or a restore to an alternate location, Druva restores the virtual machine to the default network configuration of the ESXi host where it is restored.

  • If a restore fails, the newly created virtual machine is deleted.

  • After a restore, the virtual machine is always powered off. You must manually power on the virtual machine.

  • If the client machine is restarted during an ongoing restore or scheduled backup, the job request may not be resent to the client machine.

  • In case of a virtual machine restore to the original location, because a backup and a restore cannot run in parallel on the same virtual machine, you can cancel the ongoing backup and then trigger the restore request.

  • In case of virtual machine restore to the original location, two restore requests cannot run in parallel on the same virtual machine.

  • In case of virtual machine restore to the original location, if a backup is triggered while a restore is in progress, the backup will be queued until the restore is complete.

  • In case of a virtual machine restore to an alternate location, if a backup is triggered while a restore is in progress and the backup is assigned to the same backup proxy on which the restore is running, the backup is queued until the restore is complete.

  • VMware introduced support for NVMe controllers and NVMe disks from vCenter version 6.5 and hardware version 13. With version 6.0.3-178504 of the VMware Backup proxy, you can back up and restore virtual machines with NVMe disks or controllers using the HotAdd transport mode. The backup of virtual machines with NVMe disks or controllers falls back to the NBDSSL mode if the VMware Backup proxy does not have an NVMe controller.

    The following describes the steps you need to perform to protect virtual machines with NVMe disks or controllers, depending on the backup proxy version.

    • Existing backup proxies at versions lower than 6.0.3-178504: Upgrade the backup proxy from the Phoenix console.

      Note: By default, when existing VMware backup proxies are upgraded to the latest agent version, HotAdd transport mode backups are disabled. To enable them, add the ENABLE_NVME_SUPPORT flag to the Phoenix.cfg file and set it to True, and then perform the following:

      1. Shut down the VM.
      2. Upgrade the VM's hardware version to 13 or higher.
      3. Start the VM.

    • VMware Backup proxy version 6.0.3-178504: You can back up and restore virtual machines with NVMe disks or controllers. Restores from new snapshots created after the agent upgrade are subject to some limitations.

      For restores of virtual machines with NVMe controllers or disks from older snapshots, ensure that the parameter ENABLE_NVME_CONVERSION = True is set in the Phoenix.cfg file on the VMware Backup proxy. These restores are also subject to some limitations.

    • New backup proxy deployments and additional backup proxy deployments: VMware backup proxies at version 6.0.3-178504 or higher support backups and restores of virtual machines with NVMe disks or controllers using the HotAdd transport mode.

     

    Note: Before you add a parameter to the Phoenix.cfg file, stop the Phoenix service, add the parameter, and then start the Phoenix service again. Follow this procedure on all backup proxies in a given backup proxy pool used for restore. A minimal sketch of such an edit is shown below.
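    The parameter change itself is a simple key-value edit. The following is a minimal sketch, assuming Phoenix.cfg uses plain KEY = VALUE lines; the file path is only an example and may differ in your deployment, and the Phoenix service must be stopped before the edit and started again afterwards.

        # Minimal sketch: add or update a flag such as ENABLE_NVME_SUPPORT or
        # ENABLE_NVME_CONVERSION in Phoenix.cfg. Assumptions: the file uses plain
        # "KEY = VALUE" lines, and the path below is only an example; point it to
        # where your backup proxy keeps Phoenix.cfg. Stop the Phoenix service
        # before running this and start it again afterwards.
        from pathlib import Path

        PHOENIX_CFG = Path("/etc/Phoenix/Phoenix.cfg")  # example path (assumption)

        def set_flag(cfg_path: Path, key: str, value: str = "True") -> None:
            lines = cfg_path.read_text().splitlines()
            for i, line in enumerate(lines):
                if line.split("=", 1)[0].strip() == key:
                    lines[i] = f"{key} = {value}"      # update an existing entry
                    break
            else:
                lines.append(f"{key} = {value}")       # or append a new entry
            cfg_path.write_text("\n".join(lines) + "\n")

        if __name__ == "__main__":
            set_flag(PHOENIX_CFG, "ENABLE_NVME_SUPPORT")     # HotAdd for NVMe after upgrade
            set_flag(PHOENIX_CFG, "ENABLE_NVME_CONVERSION")  # restores from older snapshots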

Considerations for full virtual machine restore to the original location

  • If the virtual machine has new VMDKs attached at the time of restore that were not backed up, those VMDKs are detached from the virtual machine.

  • If the virtual machine has independent disks attached, then they will be detached after the original restore.

  • If the virtual machine has detached backed up disks, then they will be renamed as ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’.

  • If the old controllers were detached from the target virtual machine, they are attached again after the restore. However, if the target virtual machine has new controllers attached, they remain unchanged.

  • If the type of the controllers was changed, it remains the same after the restore.

  • All the user snapshots will be deleted after the original restore.

  • For vRDM disks:

    • If the backup proxy is registered with a vCenter Server and the vRDM disk is detached at the time of restore, a thick disk with the new name ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’ is created, and the vRDM disk remains unchanged.

    • If the backup proxy is registered with a standalone ESXi host, the detached vRDM disk is renamed to ‘<original_vmdk_name>_phoenix_<timestamp>.vmdk’, and a new thick disk is created in place of the detached vRDM disk.

Note: Restore to the original location is supported for virtual machines that have vRDM disks. If vRDM disks are not detached before the restore, they are restored as vRDM disks.

  • If the virtual machine has pRDM disks added, they are not detached after the restore.

  • The CBT state of the virtual machine will not change.

  • Other devices of the virtual machine such as memory, CPUs, and CDROM device are not restored. Only the data on the virtual machine is restored. 

  • The restore request to the original location is queued if there is an active backup running for the same agent on the same proxy.

Considerations for files and folders restore

  • If you have CloudCache in your setup, make sure it is connected to the backup proxy.

  • The original drive names for Windows partitions and mount points for Linux partitions will not be preserved. The partition will be shown as volume0, volume1 and so on.

  • Partitions with a corrupted file system, partition table, or incorrect partition boundaries cannot be browsed or restored.

  • Symbolic links will not be recovered.

  • Sparse files on original disks will be recovered as Thick files.

  • On Linux partitions, the /dev, /var/spool, /var/run, /proc, /tmp, and /sys folders will be excluded if selected in the restore set.

  • On Windows partitions, the CryptnetUrlCache folder will be excluded if selected in the restore set.

  • Encrypted volumes and files are not supported for File-level restore.

  • File-level restore may fail for guest Windows virtual machines if the destination folder name contains consecutive special characters, for example, “A%%B”.

  • Druva does not store guest OS credentials.

  • If some files are not restored, the progress log will show the number of files missed and the detailed log will list the names of the files missed.

  • File-level restore is not supported on spanned volumes if the volume is created using two disks and one of the disks is excluded from the backup. To browse and restore from such volumes, a staging VM is required. For more information, see Required prerequisites for File Level Restores (FLR) via a staging VM. For more information on Dynamic GPT FLR support, see Support Matrix.

  • For restoring files from Dynamic GPT disk partition/volumes, you need a staging VM. 

Required prerequisites for File Level Restores (FLR)

  • The target virtual machine must be powered on.
  • Ensure that the latest version of VMware tools is installed and running on the target VM.
  • You must enter credentials that have write permissions over the restore target (CIFS, original virtual machine, or alternate virtual machine).
  • For dynamic GPT disk partitions (Windows Server backup sets) or unreadable volumes/partitions from MBR disk partitions, such as spanned volumes and the ReFS file system, you can use a staging VM. For more information, see Restore files and folders.

Recommended prerequisites for enhanced performance of File Level Restores (FLR)

  • The VMware backup proxy must be upgraded to version 5.0.2-121723 or later.
  • Exclude the following paths on the target virtual machine from antivirus scans:
    • Windows: %ProgramData%\Phoenix\VMware\
    • UNIX: $HOME/Phoenix/VMware
  • The entered credentials must have write permissions on the target location and the paths above. 
  • Druva installs a binary called DruvaVMwareRestoreAgent.exe (Windows) or guestossvc (Unix) on the target guest operating system. This binary acts as a server for HTTPS requests from the VMware backup proxy. This process opens specified ports on the target guest operating system for communication. If the process fails to open these ports due to firewall restrictions, it uses the VMware tools API to restore files and folders on the target virtual machine.

Note: If Druva uses the VMware tools API to perform restores, there will be no observable performance gain. The performance will be similar to that of the older agents.

  • To allow the DruvaVMwareRestoreAgent.exe (Windows) or guestossvc (Unix) service to communicate on specific ports for restores, perform the following tasks:
    • Edit the phoenix.cfg file to allow this process to use specified ports for restore. For example, to use port 58142 for restores, edit the __FLR_GUEST_VM_PORT_RANGE entry to __FLR_GUEST_VM_PORT_RANGE = [58142]. For more information on the phoenix.cfg file, see Druva agent logs and configuration details. A quick connectivity check for the configured port is sketched after this list.
    • If you don’t enter a specific port, Druva uses the ephemeral ports over HTTPS for restores. Ensure that the last three to four ports in this range are open. 
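Whether the configured port is actually usable can be verified with a quick TCP connection test from the backup proxy to the target VM. The following is a minimal sketch; the host name and port are placeholders (assumptions), and a successful connection only proves that a listener is reachable, not which service answered.

    # Minimal sketch: verify that a port configured in __FLR_GUEST_VM_PORT_RANGE
    # is reachable on the target guest VM from the backup proxy. The host name
    # and port below are placeholders (assumptions); adjust them to your setup.
    import socket

    TARGET_VM = "target-vm.example.com"   # placeholder guest VM address
    PORT = 58142                          # example port from the bullet above

    def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        state = "reachable" if port_reachable(TARGET_VM, PORT) else "NOT reachable"
        print(f"{TARGET_VM}:{PORT} is {state}")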

Druva cleans up the installed files related to the DruvaVMwareRestoreAgent.exe (Windows) or guestossvc (Unix) binary; however, if the cleanup fails due to network disruptions or if the VM is powered off, you can kill the DruvaVMwareRestoreAgent.exe (Windows) or guestossvc (Unix) process from the task manager (Windows) or the terminal (Unix), and manually delete these files from the following locations:

Operating System    Location
Unix                $HOME/Phoenix/VMware/<JobID>
Windows             %ProgramData%\Phoenix\VMware\<JobID>
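
If manual cleanup is needed, the leftover job directory can be removed with a short script once the process has been killed. The following is a minimal sketch using the per-OS locations from the table above; the job ID is a placeholder that you must replace with the actual <JobID>.

    # Minimal sketch: remove the leftover restore-agent directory for one job,
    # using the per-OS locations from the table above. Kill the
    # DruvaVMwareRestoreAgent.exe / guestossvc process first. The job ID below
    # is a placeholder (assumption), not a real value.
    import os
    import platform
    import shutil
    from pathlib import Path

    JOB_ID = "1234567"  # placeholder; replace with the actual <JobID>

    def leftover_dir(job_id: str) -> Path:
        if platform.system() == "Windows":
            base = Path(os.environ["ProgramData"]) / "Phoenix" / "VMware"
        else:
            base = Path.home() / "Phoenix" / "VMware"
        return base / job_id

    if __name__ == "__main__":
        path = leftover_dir(JOB_ID)
        if path.is_dir():
            shutil.rmtree(path)
            print(f"Removed {path}")
        else:
            print(f"Nothing to clean up at {path}")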

Required prerequisites for File Level Restores (FLR) for GPT Dynamic Disks

  • The target virtual machine must be powered on. 
  • Ensure that the latest version of VMware tools is installed and running on the target VM. 
  • You must enter credentials that have write permissions over the restore target (CIFS, original virtual machine, or alternate virtual machine). 
  • The selected staging VM must be in the same vCenter as the target VM on which you want to perform the FLR.
  • VMware tools must be running on the staging VM.
  • The staging VM must run Windows Server 2012 or later.
  • Disk UUID on staging VM must be enabled. For more information, see resolution steps.
  • The staging VM must not be configured for backups.
  • After adding the staging VM on the vCenter or standalone host, refresh all VMs for the respective vCenter or standalone host on the Management Console, and assign credentials to the staging VM.  For more information, see Manage credentials for VMware servers.

Prerequisites for configuring virtual machines for Sandbox Recovery

  • You must have the appropriate license enabled. If you do not have a valid license, the Sandbox Recovery option in the VMware Restore workflow is hidden. To enable the license, contact Support. 
    For more information on Sandbox Recovery, see Restore virtual machines using sandbox.
  • VMware Tools must be running on the destination VM. During Sandbox Recovery, VMware Tools is used to interact with the VM. As no other communication channel is open for a Sandbox VM, you need to install the VMware Tools manually. For more information, see Troubleshoot VMware issues.

    Note: You can continue with the restore regardless of whether VMware Tools is present. If VMware Tools is not present, a warning email alert is sent to the administrator, and a job is created. Post-restore, a scan can start only if you install VMware Tools within 24 hours.

  • The proxy version must be 6.3.1-273739 and above.
  • Currently, only the Windows operating system is supported. 
  • Keep the Guest OS credentials handy as you need to provide these details.
  • The user credentials provided must have administrator privileges or access rights.
  • You must install the following to support scanning on Windows Server 2008 R2:

Prerequisites for restore of MS SQL database from application-aware backups

  • To obtain the latest OVA templates, deploy a fresh additional backup proxy. For more information, see Deploy additional backup proxies.  
  • Provision a Windows staging virtual machine. Your target virtual machine can also be used as the staging virtual machine. For more information, see Windows staging virtual machine for application-aware database restores.
  • On the target virtual machine, the latest VMware tools must be running. 
  • The target virtual machine must have a supported MS SQL Server running. For more information, see the Support Matrix.
  • You must have write permission for the target location, where you want to restore the files. 
  • Ensure that you have added the global permission Host.Config.Storage to create a datastore. For more information, see Required vCenter or ESXi user permissions for backup, restore, and OVA deployment
  • On both the target and source virtual machines, open two ports in the range 49125 to 65535 for communication with the staging virtual machine and the target virtual machine (guest OS). To do this, update the Phoenix.cfg file; for example, set RESTORE_PORT_RANGE = [58142, 58141]. A quick check for available ports is sketched after this list.
    For more information, see Druva agent configuration details.
    If these ports are not opened, ephemeral ports in the range 49125 to 65535 are used. Ensure that the last three to four ports in this range are open. The following files are injected: guestossvc_restore.exe and PhoenixSQLGuestPluginRestore.exe.
  • On the target virtual machine, application discovery must be completed. (Currently, this happens when credentials are assigned, and it is updated every 24 hours, just like the VM listing refresh.) To manually trigger discovery, unassign and reassign the credentials. The virtual machine must also be powered on.
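
As mentioned above, you can check whether a candidate pair of ports is available with a short bind test. The following is a minimal sketch; the port numbers are examples (assumptions), and a successful bind only means the port is not currently in use on the machine where the script runs.

    # Minimal sketch: check that two candidate ports for RESTORE_PORT_RANGE are
    # not already in use on this machine. The port numbers are examples
    # (assumptions); run the check on each machine that needs the ports open.
    import socket

    CANDIDATE_PORTS = (58141, 58142)  # example pair from the bullet above

    def port_is_free(port: int) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("0.0.0.0", port))
                return True
            except OSError:
                return False

    if __name__ == "__main__":
        for port in CANDIDATE_PORTS:
            print(f"Port {port}: {'free' if port_is_free(port) else 'in use'}")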

Note: SQL database restores from VMs in the VMware Cloud on AWS (VMC) are unsupported.

Windows staging virtual machine for SQL application-aware database restores

When the source virtual machine (MS SQL)  is backed up, Druva takes VSS snapshots of the drives where the database files reside and then backs up the entire disk attached to the source virtual machine. These disks (vmdk files) are then uploaded to Druva Cloud. 

During a restore, because VSS is a proprietary Windows feature, a Windows (staging) virtual machine is required to read the disks. The selected VMDKs (disks) are attached and mounted to this staging virtual machine, from where Druva reads the SQL data to restore to the target virtual machine.

Note: You can use the target virtual machine as a staging virtual machine as well.

The Windows staging virtual machine should meet the following criteria:

  1. The staging virtual machine must be in the same vCenter or ESXi (if standalone) as the target virtual machine. 
  2. The staging virtual machine must not be configured for backup in Druva, except when the target virtual machine is used as a staging virtual machine. 
  3. The staging virtual machine must be a Windows server at the same or a higher version than the source virtual machine. The staging virtual machine does not require any additional resources; it only needs to meet the minimum requirements of a Windows server. The staging server is not used for resource-intensive operations, but only to attach and mount the disks from which Druva reads the data.
  4. Windows Server virtual machines, such as Windows Server 2016, must be used for staging, not Windows client virtual machines such as Windows 8 or 10. 
    When a disk is attached to a Windows client virtual machine, the VSS snapshots might get deleted in some cases. Hence, using a Windows client virtual machine as the staging location is not recommended.
  5. VMware tools must be running. 
  6. Required credentials must be assigned.
    Diskpart is used to bring disks online. To run diskpart, the user logged in to the virtual machine must be a member of the local Administrators group or a group with similar permissions. For more information, see https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/diskpart#syntax
    Note: The virtual machine must be powered on when credentials are being assigned. 
  7. The staging virtual machine must be powered on. 
  8. Disk UUID must be enabled on the staging virtual machine (a scripted alternative to the steps below is sketched after this list).
    Follow these steps to verify or enable the disk UUID: 
    1. Right-click the virtual machine and click Shut Down Guest OS.
    2. Right-click the virtual machine and click Edit Settings.
    3. Click VM Options.
    4. Expand the Advanced section and click Edit Configuration.
      Verify if the parameter disk.EnableUUID is set. If it is, then ensure that it is set to TRUE.
      If the parameter is not set, follow the steps: 
      1. Click Add Row.
      2. In the Name column, type disk.EnableUUID.
      3. In the Value column, type TRUE.
      4. Click OK and click Save.
    5. Power on the virtual machine.
      For more information, see https://kb.vmware.com/s/article/50121797
  9. The staging virtual machine must not be cloned from the source virtual machine (whose backup was taken). 
  10. If there are any VSS snapshots on the staging virtual machine, remove them before running the job.
  11. The staging virtual machine should not have GPT disks as they may cause problems with attaching and reading data from the new disks.
  12. Ensure that the limit for attaching NFS datastores is not exhausted. Druva attaches an NFS datastore to the parent ESXi host where the staging virtual machine resides. It is therefore recommended that you review the best practices for running VMware vSphere on NFS to understand the limitations and the tunable configuration for mounting NFS volumes on an ESXi host.
  13. Add an exception for antivirus scans for the following path on the staging and target virtual machine:
    %ProgramData%\Phoenix\VMWARE\
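
As referenced in item 8, the disk.EnableUUID parameter can also be set through the vSphere API instead of the Edit Settings dialog. The following is a minimal sketch using the pyVmomi library; the vCenter address, credentials, and VM name are placeholders (assumptions), and the VM should be shut down first, as in the manual steps above.

    # Minimal sketch: set disk.EnableUUID = TRUE on a powered-off staging VM
    # through the vSphere API using pyVmomi. The vCenter address, credentials,
    # and VM name are placeholders (assumptions); adjust them to your environment.
    import ssl
    import time

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    VCENTER = "vcenter.example.com"            # placeholder vCenter address
    USERNAME = "administrator@vsphere.local"   # placeholder user
    PASSWORD = "********"                      # placeholder password
    VM_NAME = "staging-vm-01"                  # placeholder staging VM name

    def find_vm(content, name):
        # Walk the inventory for a virtual machine with the given name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        return next((vm for vm in view.view if vm.name == name), None)

    def main():
        ctx = ssl._create_unverified_context()  # lab-only; verify certificates in production
        si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=ctx)
        try:
            vm = find_vm(si.RetrieveContent(), VM_NAME)
            if vm is None:
                raise SystemExit(f"VM {VM_NAME!r} not found")
            spec = vim.vm.ConfigSpec(
                extraConfig=[vim.option.OptionValue(key="disk.EnableUUID", value="TRUE")])
            task = vm.ReconfigVM_Task(spec)
            while task.info.state not in (vim.TaskInfo.State.success,
                                          vim.TaskInfo.State.error):
                time.sleep(1)  # simple polling until the reconfigure task completes
            if task.info.state == vim.TaskInfo.State.error:
                raise SystemExit(f"Reconfigure failed: {task.info.error}")
            print("disk.EnableUUID set to TRUE; power the VM back on.")
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        main()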

Considerations for restore of MS SQL database from application-aware backups

  1. Not supported for VMware Cloud (VMC) because the NFS datastore is not supported by VMware Cloud. 
  2. You cannot use the same staging or target virtual machine for multiple restore jobs. The next restore job is allowed only after the previous job is finished.
  3. You can cancel a restore job until the files are downloaded and before they are attached to the target database.
  4. MS SQL database restores from VMs with NVMe controllers or disks are not supported. Full VM restores of VMs configured with an app-aware backup policy restore such databases as part of the VM restore.
  5. In case of a VMware client service restart (backup proxy restart), the cleanup behavior is as follows: 
    1. The NFS datastore attached to the host is removed.
    2. Disks attached to the staging VM are removed.
    3. Processes running inside the staging and target VMs are not cleaned up.
    4. Files (guestossvc_restore.exe and PhoenixSQLGuestPluginRestore.exe) injected inside the staging virtual machine and target virtual machine are not cleaned up.
    5. DB files copied before the service restart on the target virtual machine are not cleaned up.
  6. If the restore job fails, you may need to manually clean up the files from the staging and target virtual machines from the following location:
    %ProgramData%\Phoenix\VMWARE\<jobID>
  7. The job summary displays default values; for job details, review the progress logs. 
  8. During the restore, when the disks are attached to the staging virtual machine, all the disks are brought online. If one or more disks do not come online, Druva still proceeds with the restore; the restore job fails only if the data resides on a disk that failed to come online. 
  9. It is recommended that a full backup job is not running on the target virtual machine during a restore, as this may occasionally cause restore failures.
  10. For a full VM restore, it is recommended that you upgrade the backup proxy to version 4.9.2 or higher.
