Considerations for restoring a virtual machine
Hot snapshots reside on Phoenix CloudCache for the period that you specified when you configured Phoenix CloudCache.
If you are a group administrator, you can only restore data to a virtual machine that belongs to an administrative group that you manage. Cloud administrators and data protection officers can restore virtual machines across groups.
If the network connection fails during a restore, backup proxies attempt to reconnect to Druva Cloud. After connectivity is restored, backup proxies resume restores from the state in which they were interrupted.
If you restart or reboot the backup proxy during a restore, the restore operation changes to the scheduled state and resumes after the virtual machine is up and running.
Although Druva Phoenix backs up VMX files along with VMDK files, you can restore only the VMDK files.
The Restore Points follow the backup proxy pool time zone.
If you choose to restore a virtual disk, Druva Phoenix creates a new virtual machine with a minimal configuration and attaches the VMDK files that you selected to it.
Druva Phoenix supports restoring virtual machines to the source hypervisor as well as to a different ESXi hypervisor associated with the vCenter Server on which you installed the backup proxy.
A Thick Provisioned Lazy Zeroed disk is restored as a Thick Provisioned Lazy Zeroed disk.
A Thick Provisioned Eager Zeroed disk is restored as a Thick Provisioned Eager Zeroed disk.
Thin provisioned VMDK files are restored as Thin disks.
CBT status remains unchanged if a virtual machine is restored to the original location. If a virtual machine is restored to an alternate location, CBT is disabled.
Druva Phoenix supports restore of RDM virtual mode disks (vRDM) as VMDK files.
If a virtual machine is associated with disks that are configured in different modes, for example, Independent Persistent, Druva Phoenix restores only those disks for which the mode is supported.
In case of a virtual machine restore to the original location, Druva Phoenix restores the virtual machine to its original network configuration. However, for a virtual disk restore or a restore to an alternate location, Druva Phoenix restores the virtual machine to the default network configuration of the ESXi host where it is restored.
If a restore fails, the newly created virtual machine is deleted.
After a restore, the virtual machine is always powered off. You must manually power on the virtual machine.
If the client machine is restarted during an ongoing restore or a scheduled backup, the job requests may not be resent to the client machine.
In case of a virtual machine restore to the original location, a backup and a restore cannot run in parallel on the same virtual machine. You can cancel the ongoing backup and then trigger the restore request.
In case of virtual machine restore to the original location, two restore requests cannot run in parallel on the same virtual machine.
In case of virtual machine restore to the original location, if a backup is triggered while a restore is in progress, the backup will be queued until the restore is complete.
In case of a virtual machine restore to an alternate location, if a backup is triggered while a restore is in progress and the backup is assigned to the same backup proxy on which the restore is running, the backup will be queued until the restore is complete.
Considerations for full virtual machine restore to the original location
If new VMDKs that were not backed up are attached to the virtual machine at the time of restore, those VMDKs will be detached from the virtual machine.
If independent disks are attached to the virtual machine, they will be detached after the restore to the original location.
If backed-up disks were detached from the virtual machine, they will be renamed as '<original_vmdk_name>_phoenix_<timestamp>.vmdk'.
If the original controllers were detached from the target virtual machine, they will be reattached after the restore. If new controllers are attached to the target virtual machine, they will remain unchanged.
If the type of a controller was changed, it will remain unchanged after the restore.
All user snapshots will be deleted after the restore to the original location.
For vRDM disks:
If the backup proxy is registered with a vCenter Server and the vRDM disk is detached at the time of restore, a thick disk with the new name '<original_vmdk_name>_phoenix_<timestamp>.vmdk' will be created, and the vRDM disk will remain unchanged.
If the backup proxy is registered with a standalone ESXi host, the detached vRDM disk will be renamed as '<original_vmdk_name>_phoenix_<timestamp>.vmdk', and a new thick disk will be created in place of the detached vRDM disk.
Note: Restore to the original location is supported for virtual machines that have vRDM disks. If the vRDM disks are not detached before the restore, they are restored as vRDM disks.
If pRDM disks were added to the virtual machine, they will not be detached after the restore.
The CBT state of the virtual machine will not change.
Other devices of the virtual machine, such as memory, CPUs, and the CD-ROM device, are not restored. Only the data on the virtual machine is restored.
The restore request to the original location is queued if there is an active backup running for the same agent on the same proxy.
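The rename pattern described above ('<original_vmdk_name>_phoenix_<timestamp>.vmdk') can be matched programmatically, for example when identifying leftover renamed disks in a datastore. A minimal Python sketch, assuming the timestamp portion is purely numeric (the exact timestamp format is not documented here):

```python
import re
from typing import Optional

# Matches the documented rename pattern; the numeric-only timestamp
# is an assumption for illustration.
RENAMED_VMDK = re.compile(r"^(?P<original>.+)_phoenix_(?P<timestamp>\d+)\.vmdk$")

def original_name(filename: str) -> Optional[str]:
    """Return the original VMDK name if the file matches the rename pattern."""
    match = RENAMED_VMDK.match(filename)
    if match is None:
        return None
    return match.group("original") + ".vmdk"

print(original_name("data01_phoenix_1700000000.vmdk"))  # data01.vmdk
print(original_name("data01.vmdk"))                     # None
```

This is only a convenience for inspecting datastores after a restore; Druva Phoenix itself performs the renaming.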
Considerations for file and folders restore
The original drive letters for Windows partitions and mount points for Linux partitions will not be preserved. The partitions will be shown as volume0, volume1, and so on.
Partitions with a corrupted file system, a corrupted partition table, or incorrect partition boundaries may not be mounted.
Symbolic links will not be recovered.
File-level restore is not supported on GPT partitioned dynamic disk on Windows.
Sparse files on the original disks will be recovered as thick files.
On Linux partitions, the /dev, /var/spool, /var/run, /proc, /tmp, and /sys directories will be excluded if they are selected in the restore set.
On Windows partitions, the CryptnetUrlCache folder will be excluded if it is selected in the restore set.
If you have CloudCache in your setup, make sure it is connected to the backup proxy.
Encrypted volumes and files are not supported for File-level restore.
File-level restore may fail for guest Windows virtual machines if the destination folder name contains consecutive special characters, for example, "A%%B".
Druva Phoenix does not store guest OS credentials.
If some files are not restored, the progress log will show the number of files missed and the detailed log will list the names of the files missed.
File-level restore is not supported on spanned volumes if the volume is created using two disks and one of the disks is excluded from the backup.
Required prerequisites for File Level Restores (FLR)
- The target virtual machine must be powered on.
- Ensure that the latest version of VMware Tools is installed on the target VM.
- You must enter credentials that have write permissions over the restore target (CIFS, original virtual machine, or alternate virtual machine). If you are restoring to a virtual machine, the target virtual machine must be powered on while you are entering credentials in the File Restore window.
Recommended prerequisites for enhanced performance of File Level Restores (FLR)
- The VMware backup proxy must be upgraded to version 5.0.2-121723 or later.
- Exclude the following paths on the target virtual machine from antivirus scans:
- Windows: %ProgramData%\Phoenix\VMware\
- UNIX: $HOME/Phoenix/VMware
- The entered credentials must have write permissions on the target location and the paths above.
- Druva Phoenix installs a binary called guestossvc_restore.exe (Windows) or guestossvc (Unix) on the target guest operating system. This binary acts as a server for HTTPS requests from the VMware backup proxy. This process opens specified ports on the target guest operating system for communication. If the process fails to open these ports due to firewall restrictions, it uses the VMware tools API to restore files and folders on the target virtual machine.
Note: If Druva Phoenix uses the VMware tools API to perform restores, there will be no observable performance gain. The performance will be similar to that of the older agents.
- To allow the guestossvc_restore.exe (Windows) or guestossvc (Unix) service to communicate on specific ports for restores, perform the following tasks:
- Edit the phoenix.cfg file to allow this process to use specified ports for restores. For example, to use port 58142 for restores, update the __FLR_GUEST_VM_PORT_RANGE entry with that port. For more information on the phoenix.cfg file, see Druva Phoenix agent logs and configuration details.
- If you don’t enter a specific port, Druva Phoenix uses the ephemeral ports over HTTPS for restores. Ensure that the last three to four ports in this range are open.
Druva Phoenix cleans up the installed files related to the guestossvc_restore.exe (Windows) or guestossvc (Unix) binary; however, if the cleanup fails due to network disruptions or if the VM is powered off, you can kill the guestossvc_restore.exe (Windows) or guestossvc (Unix) process from the task manager (Windows) or the terminal (Unix), and manually delete these files from the following locations:
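The guestossvc process described above must be able to open its configured port on the guest operating system. The following Python sketch checks only local bindability of a port (it does not verify firewall rules between the backup proxy and the guest); port 58142 is the example value from the configuration step above:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if a TCP listener can bind the given port on this machine.

    Note: this checks local bindability only. A firewall that blocks
    inbound connections from the backup proxy must still be opened
    separately.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# 58142 is the example port from the phoenix.cfg step above; the result
# depends on what is running on this machine.
print(port_is_free(58142))
```

Run this on the target guest OS before a restore to catch port conflicts early.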
Prerequisites for restore of MS SQL database from application-aware backups
- To obtain the latest OVA templates, deploy a fresh additional backup proxy. For more information, see Deploy additional backup proxies.
- Provision a Windows staging virtual machine. Your target virtual machine can also be used as the staging virtual machine. For more information, see Windows staging virtual machine for application-aware database restores.
- On the target virtual machine, the latest VMware tools must be running.
- The target virtual machine must have a supported MS SQL Server version running. For more information, see the Support Matrix.
- You must have write permission for the target location, where you want to restore the files.
- Ensure that you have added the global permission Host.Config.Storage to create a datastore. For more information, see Required vCenter or ESXi user permissions for backup, restore, and OVA deployment.
- On both the target and source virtual machines, open two ports in the range 49125 to 65535 for communication with the staging virtual machine and the target virtual machine (guest OS). To do so, update the Phoenix.cfg file. For example, update the option RESTORE_PORT_RANGE = [58142, 58141].
For more information, see Druva Phoenix agent configuration details.
If these ports are not opened, ephemeral ports in the range 49125 to 65535 will be used. Ensure that the last three to four ports in this range are open. The following files are injected: guestossvc_restore.exe and PhoenixSQLGuestPluginRestore.exe.
- On the target virtual machine, application discovery must be completed. Discovery currently happens when credentials are assigned and is refreshed every 24 hours, similar to the virtual machine listing refresh. To manually trigger a discovery, unassign and reassign the credentials. The virtual machine must also be powered on.
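The RESTORE_PORT_RANGE entry mentioned above can be sanity-checked before a restore. A hypothetical Python sketch (the parsing and validation logic is illustrative, not part of Druva Phoenix; it only verifies that the configured ports fall in the documented 49125 to 65535 range):

```python
import ast
import re

def parse_restore_port_range(cfg_text: str):
    """Extract the RESTORE_PORT_RANGE ports from Phoenix.cfg-style text.

    Returns the list of ports, or None if the entry is absent. Raises
    ValueError if any port falls outside the documented 49125-65535 range.
    """
    match = re.search(r"^\s*RESTORE_PORT_RANGE\s*=\s*(\[[^\]]*\])",
                      cfg_text, re.MULTILINE)
    if match is None:
        return None
    ports = ast.literal_eval(match.group(1))
    out_of_range = [p for p in ports if not 49125 <= p <= 65535]
    if out_of_range:
        raise ValueError(f"ports outside 49125-65535: {out_of_range}")
    return ports

# The example value from the prerequisite step above.
print(parse_restore_port_range("RESTORE_PORT_RANGE = [58142, 58141]"))  # [58142, 58141]
```

Such a check can be run against a copy of the configuration file before triggering the restore job.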
Windows staging virtual machine for SQL application-aware database restores
When the source virtual machine (MS SQL) is backed up, Druva Phoenix takes VSS snapshots of the drives where the database files reside and then backs up the entire disk attached to the source virtual machine. These disks (vmdk files) are then uploaded to Druva Cloud.
During a restore, because VSS is a proprietary feature of Windows, a Windows (staging) virtual machine is required to read the disks. The selected VMDK files (disks) are attached and mounted to the staging virtual machine, from where Druva Phoenix reads the SQL data to restore it to the target virtual machine.
Note: You can use the target virtual machine as a staging virtual machine as well.
The Windows staging virtual machine should meet the following criteria:
- The staging virtual machine must be in the same vCenter or ESXi (if standalone) as the target virtual machine.
- The staging virtual machine must not be configured for backup in Druva Phoenix.
- The staging virtual machine must be a Windows server at a version that is the same as or higher than that of the source virtual machine. The staging virtual machine does not require any additional resources; it only needs to meet the minimum requirements of a Windows server. The staging server is not used for resource-intensive operations; it is used only to attach and mount the disks so that Druva Phoenix can read the data.
- Windows Server virtual machines, not Windows client virtual machines, must be used for staging.
When a disk is attached to a Windows client virtual machine, the VSS snapshots might get deleted in some cases. Hence, it is not recommended to use a Windows client virtual machine as the staging location.
- VMware tools must be running.
- Required credentials must be assigned.
Diskpart is used to bring disks online. To run diskpart, the credentials used to log in to the virtual machine must belong to the local Administrators group or a group with similar permissions. For more information, see https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/diskpart#syntax
Note: The virtual machine must be powered on when credentials are being assigned.
- The staging virtual machine must be powered on.
- Disk UUID must be enabled.
Follow these steps to verify or enable the disk UUID:
- Right-click the virtual machine and click Shut Down Guest OS.
- Right-click the virtual machine and click Edit Settings.
- Click VM Options.
- Expand the Advanced section and click Edit Configuration.
Verify if the parameter disk.EnableUUID is set. If it is, then ensure that it is set to TRUE.
If the parameter is not set, follow these steps:
- Click Add Row.
- In the Name column, type disk.EnableUUID.
- In the Value column, type TRUE.
- Click OK and click Save.
- Power on the virtual machine.
For more information, see https://kb.vmware.com/s/article/50121797
- The staging virtual machine must not be cloned from the source virtual machine (the virtual machine that was backed up).
- If there are any VSS snapshots on the staging virtual machine, remove them before running the job.
- The staging virtual machine should not have GPT disks as they may cause problems with attaching and reading data from the new disks.
- Ensure that the limit for attaching NFS datastores is not exhausted. Druva Phoenix attaches an NFS datastore to the parent ESXi host where the staging virtual machine resides. Therefore, it is recommended that you review the best practices for running VMware vSphere on NFS to understand the limitations and tunable configurations for mounting NFS volumes on an ESXi host.
- Add an exception for antivirus scans for the following path on the staging and target virtual machines:
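The disk.EnableUUID setting from the criteria above can also be verified offline against the virtual machine's .vmx configuration file, in addition to the vSphere UI steps. A minimal Python sketch (treating the key and value case-insensitively is an assumption for robustness; the UI steps remain the supported method):

```python
def disk_uuid_enabled(vmx_text: str) -> bool:
    """Return True if disk.EnableUUID is set to TRUE in .vmx-style text.

    .vmx files store entries as key = "value". The case-insensitive
    comparison here is a defensive assumption, not documented behavior.
    """
    for line in vmx_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip().lower() == "disk.enableuuid":
            return value.strip().strip('"').upper() == "TRUE"
    return False

sample = 'scsi0.present = "TRUE"\ndisk.EnableUUID = "TRUE"\n'
print(disk_uuid_enabled(sample))  # True
```

If the function returns False for your VM, follow the Edit Configuration steps above to add or correct the parameter.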
Considerations for restore of MS SQL database from application-aware backups
- Not supported for VMware Cloud (VMC) because VMware Cloud does not support NFS datastores.
- You cannot use the same staging or target virtual machine for multiple restore jobs. The next restore job is allowed only after the previous job finishes.
- You can cancel a restore job until the files are downloaded and before they are attached to the target database.
- If the VMware client service restarts (backup proxy restart), the following cleanup behavior applies:
- The NFS datastore attached to the host is removed.
- Disks attached to the staging VM are removed.
- Processes running inside the staging and target VMs are not cleaned up.
- Files (guestossvc_restore.exe and PhoenixSQLGuestPluginRestore.exe) injected inside the staging virtual machine and target virtual machine are not cleaned up.
- DB files copied before the service restart on the target virtual machine are not cleaned up.
- If the restore job fails, you may need to manually clean up the files from the staging and target virtual machines from the following location:
- The job summary displays default values; for job details, review the progress logs.
- During the restore, when the disks are attached to the staging virtual machine, all the disks are brought online. If one or more disks do not come online, Druva Phoenix still proceeds with the restore. The restore job fails only if the data resides on a disk that failed to come online.
- It is recommended that a full backup job is not running on the target virtual machine during the restore; at times, this may cause restore failures.
- For a full virtual machine restore, it is recommended that you upgrade the backup proxy to version 4.9.2 or higher.