In Wherobots Cloud, Workload History allows you to monitor the progress of workloads (notebooks, SQL sessions, and job runs),
understand resource consumption, review the execution timeline, and verify configuration settings.
Start a Job Run
This page discusses tracking ongoing or terminated workloads. For more information on starting a Job Run, rather than monitoring
workloads, refer to the WherobotsRunOperator documentation.
Benefits
The Workload History section of Wherobots Cloud allows you to view and
manage all executed workloads within your Wherobots Organization, including:
Notebooks: Interactive notebook sessions
Job Runs: Automated jobs triggered via Airflow or the API
SQL Sessions: Spatial SQL query executions
You can review and filter any workloads executed within your Wherobots Organization.
Before you start
Before using this feature, ensure that you have the following required resources:
An Account within a Professional or Enterprise Edition Organization. For more information, see Create a Wherobots Account .
Both Admin and User roles have access to view Workload History in Wherobots Cloud.
(Optional) A Wherobots API key.
While you don’t need an API key to view Workload History, you do need one to start a Job Run. For more information on initiating Job Runs, see WherobotsRunOperator and the sketch below.
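If you want to populate Workload History with a Job Run, an Airflow task is one way to start it. The following is a minimal sketch, assuming the airflow-providers-wherobots package is installed and an Airflow connection backed by your Wherobots API key is configured; the operator parameters shown (name, run_python) and the script URI are illustrative and should be confirmed against the WherobotsRunOperator documentation.

```python
# Minimal sketch: start a Wherobots Job Run from Airflow so it appears in
# Workload History. Assumes the airflow-providers-wherobots package is
# installed and an Airflow connection holding your Wherobots API key exists.
# Parameter names and the script URI are illustrative; confirm them against
# the WherobotsRunOperator documentation.
from datetime import datetime

from airflow import DAG
from airflow_providers_wherobots.operators.run import WherobotsRunOperator

with DAG(
    dag_id="wherobots_job_run_example",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    run_job = WherobotsRunOperator(
        task_id="run_spatial_etl",
        name="spatial_etl_{{ ts_nodash }}",  # shown as the workload Name
        run_python={
            "uri": "s3://your-bucket/jobs/my_job.py",  # placeholder script path
            "args": ["--date", "{{ ds }}"],            # appears under Args
        },
    )
```

Once the task runs, the Job Run appears in Workload History with the Owner set to the user or service principal that owns the API key behind the connection.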
Workload history overview
To see an overview of all your workloads and monitor and manage your spatial data workflows:
In Wherobots Cloud, click Workload History .
The Workload History view includes the following elements:
Time Range Filter: View workloads from the last 24 hours, 7 days, or 30 days
Owner Filter: Filter by workload owner (Service Principal or User)
Region Filter: Filter by AWS region (e.g., aws-us-west-2, aws-ap-south-1)
Type Filter: Filter by workload type (Notebooks, Job Runs, SQL Sessions)
Usage Chart: Visual representation of quota usage over time
The workload list includes the following columns:
Name: The name or ID of the workload
Owner: The entity that triggered the workload (Service Principal or User)
Duration: The total duration of the workload
Time Completed: The date and time when the workload completed, or a Running status if the workload is still in progress
Region: The AWS region where the workload executed
Runtime: The runtime size used (e.g., Tiny, Small, Medium, Large)
Cost($): The estimated cost of the workload
SUs: Spatial Units consumed by the workload so far. This information can be delayed by up to 20 minutes.
Filter workloads
To locate specific workloads, you can use multiple filters:
Navigate to Workload History .
Use the Filter by name or id field to search for specific workloads.
Apply filters for Time Range , Owner , Region , or Type as needed.
View workload details
In the Workload History view, click on any workload row to view more information
about that specific execution. The details page includes two tabs: Details and Logs .
Details tab
The Details tab provides the following information:
Workload ID: Unique identifier for this specific workload.
Start Time: Indicates when the workload was initiated.
End Time: Indicates when the workload terminated.
Duration: Indicates how much time elapsed between the Start Time and End Time.
Triggered By: The system or user that triggered the workload (e.g., Airflow, user email).
Refresh Button 🔄: Allows you to update the workload details with the latest information on demand.
Status: Indicates if the workload is initializing (Starting), in progress (Running), finished (Completed), or unable to be executed (Failed).
Cancel Button: Allows you to stop an ongoing workload.
Logs tab
The Logs tab provides real-time access to execution logs for your workload:
Search Logs: Search through log entries using Cmd/Ctrl + F
Log Timeline: View timestamped log entries showing the execution progress
Copy Logs: Copy all log entries to clipboard
Download Logs: Download the complete log file for offline analysis
The logs include detailed information about:
File downloads and uploads
Spark submit commands
Spark version and environment details
Warnings and error messages
Application execution progress
Consumption metrics
Each workload's Details page specifies the consumption metrics associated with that specific workload execution, including the following:
Cost($): The estimated cost of the workload execution.
Spatial Units Consumed: Indicates the accumulated Spatial Unit consumption for this workload.
Max CPU Utilization: Indicates the peak CPU utilization observed during this workload.
Max Memory Utilization: Indicates the peak memory utilization observed during this workload.
Each workload's Details page also specifies the Configuration information
associated with that workload execution, including the following (see the sketch after these lists):
General
Runtime: Indicates the size of the Runtime associated with the workload.
Timeout: Specifies the maximum duration the workload is allowed to run before being automatically stopped, in seconds. The default is 3600.
Region: Specifies the compute region where the workload is being executed.
Python File Details (for Job Runs)
URI: The location or path of the Python script being executed.
Args: The arguments passed to the Python script.
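As a rough illustration of how these Configuration values originate, the fragment below sketches a Job Run request body. Only timeoutSeconds (default 3600), the region identifier, and the Python uri and args are mentioned on this page; the surrounding field names and structure are assumptions to verify against the Job Runs REST API schema.

```python
# Illustrative sketch only: field names and structure are assumptions and
# should be verified against the Job Runs REST API schema. The values shown
# correspond to the Configuration fields displayed on a workload's Details page.
job_run_request = {
    "name": "spatial_etl_example",
    "runtime": "medium",            # assumed identifier for the Medium runtime
    "region": "aws-us-west-2",      # compute region shown under General
    "timeoutSeconds": 3600,         # default timeout before automatic cancellation
    "pythonFile": {                 # assumed key; shown as Python File Details
        "uri": "s3://your-bucket/jobs/my_job.py",  # placeholder script path
        "args": ["--date", "2024-01-01"],
    },
}
```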
To compare workload performance with different runtime types:
Run a workload. For more information on initializing a job and runtime selection, see WherobotsRunOperator .
Run that same workload again with either a larger or smaller runtime type (see the sketch after this list).
Review the Details pages for each executed workload to compare performance metrics, specifically:
Duration
Max CPU Utilization
Max Memory Utilization
Cost($)
SUs (Spatial Units)
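One way to set up such a comparison is to submit the same script twice with different runtime sizes and then compare the resulting rows in Workload History. This is a sketch under the same assumptions as the earlier Airflow example; in particular, the runtime argument and its accepted values are assumptions to confirm against the WherobotsRunOperator documentation.

```python
# Sketch: run the same job on two runtime sizes so the executions can be
# compared side by side in Workload History. The `runtime` argument and its
# values are assumptions; confirm them in the WherobotsRunOperator docs.
from datetime import datetime

from airflow import DAG
from airflow_providers_wherobots.operators.run import WherobotsRunOperator

JOB = {"uri": "s3://your-bucket/jobs/my_job.py", "args": []}  # placeholder

with DAG(
    dag_id="runtime_comparison_example",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    small_run = WherobotsRunOperator(
        task_id="run_on_small",
        name="compare_small_{{ ts_nodash }}",
        run_python=JOB,
        runtime="small",   # assumed identifier for a Small runtime
    )
    medium_run = WherobotsRunOperator(
        task_id="run_on_medium",
        name="compare_medium_{{ ts_nodash }}",
        run_python=JOB,
        runtime="medium",  # assumed identifier for a Medium runtime
    )

    # Run sequentially so the two executions do not compete for capacity.
    small_run >> medium_run
```

After both runs complete, filter Workload History by name and open each run's Details page to compare Duration, Max CPU Utilization, Max Memory Utilization, Cost($), and SUs.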
Cancel a workload
Wherobots automatically cancels workloads based on the timeoutSeconds value you specify in the Job Runs REST API Schema, which defaults to 3600 seconds, but
you can also cancel a workload manually in Wherobots Cloud.
To cancel a workload within Wherobots Cloud, do the following:
In Workload History , locate the workload you wish to cancel.
Click on the workload to open its detail page.
Click the Cancel button.
Cancel running workloads
You can only cancel workloads that are currently in the Running state.
Workloads in Audit Logs
If your account has the User role and you use a service principal-managed API key
generated by an Admin in your Wherobots Organization, actions are
attributed to the service principal in Audit Logs, with the Admin shown as responsible only for creating the API key.
Limitations
Workload history data, including Spatial Unit Consumption, Max CPU Utilization, and Max Memory Utilization metrics, is retained for only 90 days.
Spatial Unit Consumption and Cost can take several minutes to display on a workload’s Detail page.
The usage chart displays quota usage as a percentage of your organization’s total computing power.