
Druva Documentation

NAS Jobs page

Phoenix Editions: ✓ Business | ✗ Enterprise | ✓ Elite

 

Jobs

The Jobs page under Protect > NAS in the Phoenix Management Console lists all NAS backup, restore, and log request jobs and shows the progress of each. For every job, you can open a details page with a summary of the job and its associated logs.

To access the NAS jobs page, perform the following tasks:

  1. Log in to the Phoenix Management Console.

  2. On the menu bar, click All Organizations, and then select the organization that has the NAS device.

  3. On the menu bar, click Protect > NAS.

  4. In the navigation pane on the left, click Jobs. The right pane displays the following data:

Field Description
Job ID A unique identification number associated with the job.
Job Type The type of job operation, such as Backup, Restore, or Log Request.
Share Name The name of the NAS share on which the Backup, Restore, or Log Request job was executed.
Backup Set The name of the NAS backup set for which the job was initiated.
Start Time The time when the job started.
End Time The time when the job ended.
Status The status of the job. For more information, see Job Status.

Filters

You can filter jobs by Job Type, Job Status, and Started In.


You can filter jobs by the following Job Types:

  • Backup
  • Restore
  • Log Request

You can filter jobs by the following Job Statuses:

  • Queued: The triggered jobs that are waiting to be processed.
  • Running: The jobs that are being processed.

  • Successful: The jobs processed without any errors.

  • Successful with errors: The jobs that are processed but have some errors.

  • Failed: The jobs that were not processed successfully.

Note: A failed job is displayed with an error code. Hover over the error code to view the error message. The error code hyperlink redirects to the exact error message description on the documentation portal.

  • Canceled: The jobs that were canceled.

  • Waiting for Retry: The jobs that failed in the previous attempt and are waiting to be processed again.

  • Skipped: The jobs that did not start within the scheduled window because another job is in progress.

  • Backup window expired: The jobs that Druva Phoenix could not complete because the entire data set was not backed up within the specified duration; no restore point was created.

You can filter jobs that Started In the:

  • Last 24 hours
  • Last 7 days
  • Last 1 month
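The filter semantics above can be sketched in Python. This is a minimal illustration with hypothetical job records mirroring the Jobs page columns; the console applies these filters for you:

```python
from datetime import datetime, timedelta

# Hypothetical job records (the real data lives in the Phoenix console).
jobs = [
    {"job_id": 101, "job_type": "Backup", "status": "Successful",
     "start_time": datetime(2023, 5, 1, 2, 0)},
    {"job_id": 102, "job_type": "Restore", "status": "Failed",
     "start_time": datetime(2023, 5, 20, 14, 30)},
]

def filter_jobs(jobs, job_type=None, status=None, started_within=None, now=None):
    """Apply the Job Type, Job Status, and Started In filters."""
    now = now or datetime.now()
    result = []
    for job in jobs:
        if job_type and job["job_type"] != job_type:
            continue  # Job Type filter: Backup, Restore, or Log Request
        if status and job["status"] != status:
            continue  # Job Status filter: Queued, Running, Failed, ...
        if started_within and job["start_time"] < now - started_within:
            continue  # Started In filter: last 24 hours / 7 days / 1 month
        result.append(job)
    return result

# "Started in the last 7 days", relative to a fixed reference time.
recent = filter_jobs(jobs, started_within=timedelta(days=7),
                     now=datetime(2023, 5, 22))
```

Each filter narrows the list independently, so combining them (for example, Failed restores started in the last 24 hours) is just passing multiple arguments.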

The Job ID and Backup Set values are clickable. Clicking the Job ID value for a NAS job takes you to the job details page. Clicking the value in the Backup Set column takes you to the backup set details page.

From the Jobs page within the Protect > NAS page, click the Job ID of the job whose details you want to view. The job details page is divided into the Summary tab and the Logs tab.

Summary

The Summary tab is divided into the following sections:


  • Job Details
  • Data Estimation
  • Data Transfer
  • Environment Details

Job Details

The Job Details section has the following fields:

Field Description
Job ID The unique identification number associated with the job.
Backup Content The content backed up from the NAS share as part of the applied content rule or custom content.
Start Time Time when the job started.
Pre Script Status

The status of the execution of the pre-backup script.

If the pre-backup script is enabled, the status can be one of the following:

  • Successful: The pre-backup script execution succeeded.
  • Failed: The pre-backup script execution failed.
  • Skipped: The pre-backup script execution was skipped because the pre-backup script was unavailable at the configured location.
  • Failed with timeout: The pre-backup script execution failed because it timed out.

If the pre-backup script is disabled, the status is:

  • Not enabled: The pre-backup script was disabled for execution.
NAS device The NAS device that hosts the NAS share.
Job Type The type of job that was executed.
Backup Policy The Backup Policy associated with the backup set.
End Time Time when the job ended.
Post Script Status

The status of the execution of the post-backup script.

If the post-backup script is enabled, the status can be one of the following:

  • Successful: The post-backup script execution succeeded.
  • Failed: The post-backup script execution failed.
  • Skipped: The post-backup script execution was skipped because the post-backup script was unavailable at the configured location.
  • Failed with timeout: The post-backup script execution failed because it timed out.

If the post-backup script is disabled, the status is:

  • Not enabled: The post-backup script was disabled for execution.
Scan Mode The type of scan conducted, which is either a Folder Walk or a Smart Scan.
For Isilon devices, a scan of type Native Snapshot (Isilon) is performed.
Share Name The name of the NAS share which was backed up.
NAS Proxy The NAS proxy used for the backup or restore job.
Status Completion status of the job.

Data Estimation

The data estimation section has the following fields:

Field Description
Source Data Scanned The amount of data scanned at the source for the backup.
#Files Removed The total number of files removed compared to the previous backup.
#Files Scanned The total number of files scanned.
#Files Added The total number of files added compared to the previous backup.
#Files Changed The total number of files changed compared to the previous backup.
Time Taken for Estimation The approximate time taken to scan the data. The time taken for estimation does not include network retry time.
File Size Distribution

The percentage of the files backed up based on their sizes. The percentages are marked with the following file size ranges:

  • 0-1KB
  • >1-10KB
  • >10-100KB
  • >100KB-1MB
  • >1-16MB
  • >16MB

Data Transfer

The Data Transfer section has the following details:

Field Description
Data Transferred to CloudCache The incremental data that is directly uploaded to CloudCache after deduplication and compression.
Backup Duration The total time taken to upload data to Cloud and CloudCache. Backup duration excludes the estimation time, network retry time, and the waiting for retry time.
Backup Speed The rate at which the source data is scanned for backup.
Bandwidth consumed The bit rate to transfer data to the Cloud and CloudCache.
Data Transferred to Cloud The incremental data that is directly uploaded to the Cloud after deduplication and compression.
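To illustrate how these throughput figures relate, here is a sketch with hypothetical numbers (the console computes and displays the actual values): backup speed is the source data scanned divided by the backup duration, while bandwidth consumed reflects only the deduplicated, compressed data actually uploaded.

```python
# Hypothetical figures for a single backup job.
source_data_scanned_gb = 120.0   # Source Data Scanned
backup_duration_hr = 2.0         # Backup Duration (excludes estimation and retry time)
data_to_cloud_gb = 15.0          # Data Transferred to Cloud (post-dedupe/compression)

# Backup Speed: the rate at which source data is scanned for backup.
backup_speed_gb_per_hr = source_data_scanned_gb / backup_duration_hr  # 60.0 GB/hr

# Approximate bandwidth consumed uploading the deduplicated data, in Mbps
# (GB -> megabits, divided by the duration in seconds).
bandwidth_mbps = (data_to_cloud_gb * 8 * 1024) / (backup_duration_hr * 3600)
```

Note how deduplication and compression make the bandwidth consumed far lower than the raw scan rate would suggest: 120 GB was scanned, but only 15 GB crossed the wire.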

Environment Details

The Environment Details section has the following details:

Field Description
Disk Read Rate The bit rate for reading I/O by the agent. The Disk Read Rate is an average of the total data read and the total time taken to read data at various intervals for all drives where data is distributed. For UNC shares, the Disk Read Rate is the average value across the drives.
# Network Retries The number of network retry attempts made within a job session.
Network Retry Duration The total time spent in network retries. It is the cumulative duration of all network retry sessions.

Logs

Log files help analyze and troubleshoot issues encountered while performing a task. You can share the logs with Druva Technical Support to aid in issue resolution.


Progress Logs

Displays the progress logs of the job.

Detailed Logs

The Detailed Logs pane gives you the option to Request Detailed Logs. Detailed logs are available only after the upload job completes; until then, the Request Detailed Logs button remains disabled. The logs are available for download until the date and time specified in the Available for download till field. The Requested on field shows when the request for detailed logs was made.

Upload requested logs


Note: Process logs are only available for backup and restore jobs.

Detailed logs for jobs are available for Phoenix agent version 3.4 and later and GovCloud client version 4.0 and later. The detailed logs include the following:

  • Common logs for Windows, Linux, and backup proxy: Druva Phoenix Config, Agent-specific logs, Main Service logs
  • Windows logs: Windows event/Application logs, VSS information
  • Linux logs: Dmesg logs, System information
  • Backup proxy logs: VMware logs

The procedure varies depending on the Phoenix agent version you are using. If a job is executed with Phoenix agent version 3.4 to 4.5, then the Request Server Logs option will fetch the consolidated job logs available for that server. However, if a job is executed with Phoenix agent version 4.6 and later, the Request Job Logs option will fetch logs only for that particular job. You can request job logs within 30 days of triggering the job. You must download the requested logs within 7 days of triggering the request.

Note: If you execute a job on Phoenix agent 4.5 or earlier and then upgrade Phoenix agent to version 4.6 or later before downloading the logs, you will see the Request Job Logs option on the Detailed Logs tab of the Job Details page. However, clicking this button will still fetch the server logs for the job executed before the client upgrade.

If the log file is 4.5 MB or smaller, you can send it to technical support as an email attachment. If the log file exceeds 4.5 MB, perform the following tasks to send the logs to support:

  1. Go to https://upload.druva.com/ 
  2. Enter the case number in the Ticket Number field.
  3. Click Choose File, and add the compressed files to upload.
  4. Click Upload. Notify the support engineer that the logs have been uploaded on the portal by responding to the ongoing support ticket email.
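The size check described above can be sketched as follows. This is a minimal illustration: the 4.5 MB email-attachment limit comes from this article, while the function name and log path are hypothetical.

```python
import os

# 4.5 MB email-attachment limit, per the guidance above.
EMAIL_ATTACHMENT_LIMIT = 4.5 * 1024 * 1024

def delivery_method(log_path):
    """Decide how to send a log file to Druva support based on its size."""
    size = os.path.getsize(log_path)
    if size <= EMAIL_ATTACHMENT_LIMIT:
        return "email"          # attach to the support ticket email
    return "upload portal"      # upload at https://upload.druva.com/
```

Compressing the logs before checking the size (as step 3 implies) often brings a borderline file under the attachment limit.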
