
NAS Jobs page

Enterprise Workloads Editions: ✓ Business  ✗ Enterprise  ✓ Elite

 

Jobs

The Jobs page, under Protect > NAS in the Management Console, lists all NAS backup, restore, and log request jobs and shows the progress of each one. For every job, you can open a details view with a summary of the job and the logs associated with it.

To access the NAS Jobs page, perform the following steps:

  1. Log in to the Management Console.

  2. Select the workload from the Protect menu. If the All Organizations menu is enabled, first select the organization that contains your NAS device, and then select the workload.

  3. In the navigation pane on the left, click All Jobs. The right pane displays the following data:

Field | Description
Job ID | A unique identification number associated with the job.
Job Type | The type of job operation, such as Backup, Restore, and Log Request.
Share Name | Name of the NAS share whose Backup, Restore, and Log Request job was executed.
Backup Set | The name of the NAS backup set for which the job was initiated.
Start Time | The time when the job started.
End Time | The time when the job ended.
Status | The status of the job. For more information, see Job Status.
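
If you script against exported job lists, a row of this table could be modeled as follows. This is a minimal illustrative sketch, not a Druva API object; the field names simply mirror the columns above.

```python
# Illustrative model of one row of the All Jobs table (not a Druva API object).
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class NASJob:
    job_id: int                    # Job ID
    job_type: str                  # Backup, Restore, Defreeze, or Log Request
    share_name: str                # name of the NAS share the job ran against
    backup_set: str                # NAS backup set for which the job was initiated
    start_time: datetime           # time when the job started
    end_time: Optional[datetime]   # time when the job ended (None while running)
    status: str                    # job status; see Job Status
```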

Filters

You can filter jobs by Job Type, Job Status, and the Started in period (a combined example follows these filter lists).

Note: You can resize a column on the All Jobs table by dragging its edge to the desired width.


You can filter jobs by the following Job Types:

  • Backup
  • Restore
  • Defreeze
  • Log Request

You can filter jobs by the following Job Statuses (the numeric status codes are restated in the sketch after this list):

  • Queued (1): The triggered jobs that are waiting to be processed.
  • Running (2): The jobs that are being processed.

  • Successful (3): The jobs processed without any errors.

  • Successful with errors (4): The jobs that are processed but have some errors.

  • Failed (5): The jobs that were not processed successfully.

Note: A failed job is displayed with an error code. Hover over the error code to view the error message. The error code hyperlink redirects to the description of that error on the documentation portal.

  • Canceled (6): The jobs that were canceled.

  • Waiting for Retry (7): The jobs that failed in the previous attempt and are waiting to be processed again.

  • Skipped (8): The jobs that did not start within the scheduled window because another job was already in progress.

  • Backup window expired (9): The jobs that Druva could not complete because the entire data set was not backed up within the specified duration, and a recovery point was not created.

  • Scanning (13): The jobs that are still being scanned. 

  • Scan failed (14): The jobs for which the scan has failed. 
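
The numbers shown in parentheses next to each status are restated below as a simple lookup table. This only summarizes the code-to-name mapping listed above; it is not part of any Druva SDK.

```python
# Job status codes as listed above, mapped to their display names (illustrative).
JOB_STATUS = {
    1: "Queued",
    2: "Running",
    3: "Successful",
    4: "Successful with errors",
    5: "Failed",
    6: "Canceled",
    7: "Waiting for Retry",
    8: "Skipped",
    9: "Backup window expired",
    13: "Scanning",
    14: "Scan failed",
}

def status_name(code: int) -> str:
    """Translate a numeric job status code into its display name."""
    return JOB_STATUS.get(code, f"Unknown status ({code})")
```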

You can filter jobs that Started In the:

  • Last 24 hours
  • Last 7 days
  • Last 1 month
  • Custom Date
    Note: You can choose a specific date range.
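
Taken together, the three filters behave like the following client-side sketch applied to NASJob records (see the model earlier in this article). This is an illustration of the filter semantics only, not a Druva API call.

```python
# Illustrative client-side equivalent of the Job Type, Job Status, and
# Started in filters. Assumes NASJob records as modeled earlier.
from datetime import datetime, timedelta

def filter_jobs(jobs, job_type=None, status=None, started_within=None):
    """started_within: a timedelta such as 24 hours, 7 days, or roughly 1 month."""
    now = datetime.now()
    matches = []
    for job in jobs:
        if job_type is not None and job.job_type != job_type:
            continue
        if status is not None and job.status != status:
            continue
        if started_within is not None and job.start_time < now - started_within:
            continue
        matches.append(job)
    return matches

# Example: Backup jobs that failed in the last 7 days.
# failed_backups = filter_jobs(all_jobs, job_type="Backup", status="Failed",
#                              started_within=timedelta(days=7))
```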

The Job ID and the Backup Set values are clickable. Clicking the Job ID value for a NAS job takes you to the job details page. Clicking the value in the Backup Set column takes you to the backup set details page.

Within the Protect > NAS page, in the left navigation pane, select All Jobs, and then in the right pane click the Job ID of the job whose details you want to view. The job details page is divided into the Summary tab and the Logs tab.

Customize table columns

You can customize the table columns using the Customize Table Columns icon.


You can click the Customize Table Columns icon to get a list of all the columns on the page:

  • Select a checkbox to display that column on the All Jobs table.
  • Clear a checkbox to hide that column. Removing unnecessary columns frees up space in the table.
  • Move a column to change the order. The change is reflected in the All Jobs table.

The column configuration persists across sessions in the same browser.

Summary

The Summary tab is divided into the following sections:


  • Job Details
  • Data Estimation
  • Data Transfer
  • Environment Details

Job Details

The Job Details section has the following fields:

Field | Description
Job ID | The unique identification number associated with the job.
Backup Content | The content backed up from the NAS share as part of the applied content rule or custom content.
Start Time | The time when the job started.
Pre Script Status | The status of the execution of the pre-backup script. If the pre-backup script is enabled, the status can be Successful (1), Failed (2), Skipped (3) when the script was unavailable at the configured location, or Failed with timeout (5) when the execution timed out. If the pre-backup script is disabled, the status is Not enabled (4).
NAS device | The NAS device that hosts the NAS share.
Job Type | The type of job that was executed.
Backup Policy | The backup policy associated with the backup set.
End Time | The time when the job ended.
Post Script Status | The status of the execution of the post-backup script. If the post-backup script is enabled, the status can be Successful (1), Failed (2), Skipped (3) when the script was unavailable at the configured location, or Failed with timeout (5) when the execution timed out. If the post-backup script is disabled, the status is Not enabled (4).
Scan Type | The type of scan conducted: Advanced Smart Scan or Folder Walk. For Isilon devices, a scan of type Vendor native recovery point (Isilon) is performed only if you have explicitly opted for it. For more information, see Scanning methods in NAS.
Share Name | The name of the NAS share that was backed up.
NAS Proxy | The NAS proxy used for the backup or restore job.
Status | The completion status of the job.
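
The Pre Script Status and Post Script Status fields share the same numeric codes. Purely as an illustration, they can be summarized in one lookup table:

```python
# Pre/post-backup script status codes as listed above (illustrative only).
SCRIPT_STATUS = {
    1: "Successful",
    2: "Failed",
    3: "Skipped",              # script unavailable at the configured location
    4: "Not enabled",          # script disabled for execution
    5: "Failed with timeout",  # script execution timed out
}
```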

Data Estimation

The data estimation section has the following fields:

Field | Description
Source Data Scanned | The amount of data scanned at the source for the backup.
#Files Removed | The total number of files that were removed as compared to the previous backup.
#Files Scanned | The total number of files scanned.
#Files Added | The total number of files that were added as compared to the previous backup.
#Files Changed | The total number of files that changed as compared to the previous backup.
Time Taken for Estimation | The approximate time taken to scan the data. The time taken for estimation does not include network retry time.
File Size Distribution | The percentage of the files backed up based on their sizes. The percentages are marked with the following file size ranges: 0-1KB, >1-10KB, >10-100KB, >100KB-1MB, >1-16MB, and >16MB.
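
The File Size Distribution ranges can be read as size buckets. The sketch below groups file sizes into those same ranges; it is only an illustration of how the percentages are organized, since Druva computes the distribution for you during estimation.

```python
# Illustrative grouping of file sizes into the File Size Distribution ranges.
from collections import Counter

SIZE_BUCKETS = [
    (1 * 1024,          "0-1KB"),
    (10 * 1024,         ">1-10KB"),
    (100 * 1024,        ">10-100KB"),
    (1024 * 1024,       ">100KB-1MB"),
    (16 * 1024 * 1024,  ">1-16MB"),
    (float("inf"),      ">16MB"),
]

def size_bucket(size_bytes: int) -> str:
    """Return the size range a single file falls into."""
    for upper_bound, label in SIZE_BUCKETS:
        if size_bytes <= upper_bound:
            return label
    return ">16MB"  # not reached; the last bucket has no upper bound

def size_distribution(file_sizes):
    """Percentage of files in each range (file_sizes: list of sizes in bytes)."""
    counts = Counter(size_bucket(size) for size in file_sizes)
    total = len(file_sizes) or 1
    return {label: 100.0 * counts[label] / total for _, label in SIZE_BUCKETS}
```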

Data Transfer

The Data Transfer section has the following details:

Field | Description
Data Transferred to CloudCache | The incremental data that is directly uploaded to CloudCache after deduplication and compression.
Backup Duration | The total time taken to upload data to Cloud and CloudCache. Backup duration excludes the estimation time, network retry time, and the waiting-for-retry time.
Backup Speed | The rate at which the source data is scanned for backup.
Bandwidth consumed | The bit rate at which data is transferred to Cloud and CloudCache.
Data Transferred to Cloud | The incremental data that is directly uploaded to Cloud after deduplication and compression.
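
One plausible reading of these fields as formulas is sketched below. The relationships and units are assumptions made for illustration; the page reports the actual values, so treat this only as a way to interpret the definitions above.

```python
# Assumed formulas for the Data Transfer fields (illustration only).

def backup_duration(total_elapsed_s, estimation_s, network_retry_s, waiting_for_retry_s):
    # Backup Duration excludes estimation, network retry, and waiting-for-retry time.
    return total_elapsed_s - estimation_s - network_retry_s - waiting_for_retry_s

def backup_speed(source_data_scanned_mb, duration_s):
    # Backup Speed: rate at which source data is scanned for backup (assumed MB/s).
    return source_data_scanned_mb / duration_s

def bandwidth_consumed(data_transferred_mb, duration_s):
    # Bandwidth consumed: bit rate of data sent to Cloud and CloudCache
    # (assumed megabits per second).
    return data_transferred_mb * 8 / duration_s
```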

Environment Details

The environment details section has the following details:

Field | Description
Disk Read Rate | The bit rate for reading I/O by the agent. The Disk Read Rate is an average of the total data read and the total time taken to read data at various intervals for all drives where the data is distributed. For UNC shares, the Disk Read Rate is the average value across the drives.
# Network Retries | The number of network retry attempts made within a job session.
Network Retry Duration | The total time spent in network retries. It is the cumulative duration of all network retry sessions.
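
The Disk Read Rate description above reads as a ratio of total data read to total read time across all drives where the data is distributed. A minimal sketch of that interpretation (units and sampling are assumed for illustration):

```python
# Illustrative computation of Disk Read Rate from per-interval read samples.
def disk_read_rate(samples):
    """samples: iterable of (bytes_read, seconds_taken) pairs collected at
    various intervals across all drives where the data is distributed."""
    samples = list(samples)
    total_bytes = sum(bytes_read for bytes_read, _ in samples)
    total_seconds = sum(seconds for _, seconds in samples)
    return total_bytes / total_seconds if total_seconds else 0.0
```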

Logs

Log files help analyze and troubleshoot issues encountered while performing a task. You can share the logs with Druva Technical Support to aid in issue resolution.


Progress Logs

Displays the progress logs of the job.

Detailed Logs

The Detailed Logs pane gives you the option to Request Detailed Logs. Detailed logs are available only after the upload job completes; until then, the Request Detailed Logs button remains disabled. The logs are available for download until the date and time specified in the Available for download till field. The Requested on field shows when the request for detailed logs was made.

Upload requested logs

The log files are used to analyze and troubleshoot the issues you might encounter while performing a task. For assistance in resolving the issues, you can share the log files with technical support.

Note: Process logs are only available for backup and restore jobs.

Detailed logs for jobs are available for Hybrid Workloads agent version 3.4 and later and GovCloud client version 4.0 and later. The detailed logs include the following:

Common logs for Windows, Linux, and backup proxy | Windows logs | Linux logs | Backup proxy logs
Druva Phoenix Config | Windows event/Application logs | Dmesg logs | VMware logs
Agent-specific Logs | VSS information | System information | 
Main Service logs |  |  | 

The procedure varies depending on the Hybrid Workloads agent version you are using. If a job is executed with Hybrid Workloads agent version 3.4 to 4.5, then the Request Server Logs option will fetch the consolidated job logs available for that server. However, if a job is executed with Hybrid Workloads agent version 4.6 and later, the Request Job Logs option will fetch logs only for that particular job. You can request job logs within 30 days of triggering the job. You must download the requested logs within 7 days of triggering the request.
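
The two time windows mentioned above (30 days to request job logs after the job is triggered, 7 days to download them after the request) can be checked with a small sketch like the one below; it simply restates those limits and is not Druva tooling.

```python
# Illustrative check of the log request and download windows described above.
from datetime import datetime, timedelta

REQUEST_WINDOW = timedelta(days=30)   # request job logs within 30 days of the job
DOWNLOAD_WINDOW = timedelta(days=7)   # download requested logs within 7 days

def can_request_logs(job_triggered_at: datetime, now: datetime) -> bool:
    return now - job_triggered_at <= REQUEST_WINDOW

def download_available_until(request_made_at: datetime) -> datetime:
    return request_made_at + DOWNLOAD_WINDOW
```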

Note: If you execute a job on Hybrid Workloads agent 4.5 or earlier and then upgrade the Hybrid Workloads agent to version 4.6 or later before downloading the logs, you will see the Request Job Logs option on the Detailed Logs tab of the Job Details page. However, clicking this button will still fetch the server logs for the job executed before the client upgrade.

If the log file is 4.5 MB or smaller, you can send it to technical support as an email attachment. If the log file exceeds 4.5 MB in size, perform the following tasks to send the logs to support (a size-check sketch follows these steps):

  1. Go to https://upload.druva.com/ 
  2. Enter the case number in the Ticket Number field.
  3. Click Choose File, and add the compressed files to upload.
  4. Click Upload. Notify the support engineer that the logs have been uploaded on the portal by responding to the ongoing support ticket email.
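
As a convenience, the size check from the guidance above can be scripted. The helper below compresses a log folder and tells you whether the archive is small enough (4.5 MB or less) to email, or whether it should go through the upload portal; the paths and the helper itself are illustrative, not part of any Druva tooling.

```python
# Illustrative helper: compress logs and apply the 4.5 MB email-attachment rule.
import os
import shutil

EMAIL_ATTACHMENT_LIMIT = 4.5 * 1024 * 1024  # 4.5 MB

def prepare_logs(log_dir: str, archive_base: str = "druva_logs") -> str:
    archive_path = shutil.make_archive(archive_base, "zip", log_dir)
    if os.path.getsize(archive_path) <= EMAIL_ATTACHMENT_LIMIT:
        print(f"{archive_path}: small enough to attach to your reply to support.")
    else:
        print(f"{archive_path}: upload at https://upload.druva.com/ with your ticket number.")
    return archive_path
```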