Raster Inference Overview

WherobotsAI Raster Inference simplifies the analysis of large-scale satellite imagery. It allows you to use your own models or choose from hosted options for classification, object detection, and segmentation.

Capabilities

WherobotsAI Raster Inference currently supports three computer vision tasks: classification, object detection, and segmentation.

WherobotsAI Raster Inference requires a GPU-Optimized runtime, which is available by compute request.

To access this runtime category, do the following:

  1. Sign up for a paid Wherobots Organization Edition (Professional or Enterprise).
  2. Submit a Compute Request for a GPU-Optimized runtime.

Classification

Raster Inference: Assigning each image a label from predefined categories (e.g., land cover types, object types).

Model summary: Image classification is the task of analyzing an image and assigning it a label from a set of predefined categories. It involves identifying patterns and features within the image to accurately determine its content, serving as a foundation for more advanced computer vision applications.

Example uses: Analyzing satellite imagery to classify each scene as "forest", "water", "urban", or "agriculture".

Classification Notebook

Object Detection

Raster Inference: Locating and identifying specific objects within an image (e.g., buildings, vehicles, trees).

Model summary: Object detection is the task of identifying and locating specific objects within an image by classifying them into predefined categories and drawing bounding boxes around their positions. It combines image classification and localization, enabling systems to recognize multiple objects and their spatial relationships within a single image.

Example uses: Detecting cars in aerial imagery for traffic analysis or identifying damaged buildings after a natural disaster.

Object Detection Notebook

Segmentation

Raster Inference: Dividing an image into meaningful segments or regions based on shared characteristics (e.g., grouping pixels that belong to the same object or land cover type).

Model summary: Semantic segmentation is the process of dividing an image into regions and assigning each pixel a label corresponding to a predefined category. Unlike object detection, it focuses on pixel-level classification, providing a detailed understanding of the image by identifying and labeling every part of it.

Example uses: Segmenting a satellite image to delineate individual solar panels or agricultural fields.

Segmentation Notebook

Wherobots example Notebooks

Below are the SQL and Python API functions for each computer vision task.

Learn more by exploring our Python or SQL API Reference documentation and tutorials.

| Task | Python API Documentation | SQL API Documentation | Tutorial Notebook |
| --- | --- | --- | --- |
| Classification | create_single_label_classification_udfs() | RS_CLASSIFY() | Classification Tutorial |
| Object Detection | create_object_detection_udfs() | RS_DETECT_BBOXES() | Object Detection Tutorial |
| Semantic Segmentation | create_semantic_segmentation_udfs() | RS_SEGMENT() | Segmentation Tutorial |
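As a rough sketch of how one of the SQL functions above might be invoked (the table name raster_tiles and raster column outdb_raster are illustrative assumptions; in a Wherobots notebook the query would typically run through a Sedona session, e.g. sedona.sql(...)):

```python
# Hypothetical sketch: each SQL inference function takes a model identifier
# and a raster column. Table and column names here are assumptions made for
# illustration, not part of the documented schema.
query = (
    "SELECT outdb_raster, "
    "RS_DETECT_BBOXES('marine-satlas-sentinel2', outdb_raster) AS detections "
    "FROM raster_tiles"
)
print(query)
```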

Wherobots-hosted models guide

Wherobots-hosted models are compiled and optimized for raster inference, enabling scalability and efficient processing of large datasets.

To use these Wherobots-hosted models, set the model_id variable to the model name within the Raster Inference function. For more information, see the tutorial associated with each model.
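For example, a minimal sketch of setting model_id to a hosted model name and passing it into a classification query (the table name eurosat_rasters and raster column outdb_raster are assumptions for illustration):

```python
# Hypothetical sketch: select a Wherobots-hosted model by setting model_id
# and interpolating it into the RS_CLASSIFY call. The table and column names
# are illustrative assumptions; see the model's tutorial for a full example.
model_id = "landcover-eurosat-sentinel2"
query = (
    "SELECT outdb_raster, "
    f"RS_CLASSIFY('{model_id}', outdb_raster) AS prediction "
    "FROM eurosat_rasters"
)
print(query)
```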

The following table details each model's name, architecture, and associated task.

| Task | Model Name | Model Architecture | Dataset Used for Training | Performance Accuracy | Wherobots MLM STAC Card | Notebook Tutorial |
| --- | --- | --- | --- | --- | --- | --- |
| Image Classification: categorize land cover as Annual Crop, Forest, Herbaceous Vegetation, Highway, Industrial Buildings, Pasture, Permanent Crop, Residential Buildings, River, or SeaLake | landcover-eurosat-sentinel2 | ResNet-18 | 27,000 EuroSAT Sentinel-2 13-band rasters | See TorchGeo Benchmarks | MLM STAC card page | Classification Tutorial |
| Object Detection: identify marine infrastructure (offshore wind farms and platforms) in satellite imagery | marine-satlas-sentinel2 | Swin Transformer V2 with R-CNN head | SATLAS: 5,000 Sentinel-2 time series (4x13x1024x1024) | See Allen AI Benchmarks | MLM STAC card page | Object Detection Tutorial |
| Semantic Segmentation: identify solar farms in satellite imagery | solar-satlas-sentinel2 | Swin Transformer V2 with U-Net head | SATLAS: 5,000 Sentinel-2 time series (3x3x1024x1024) | See Allen AI Benchmarks | MLM STAC card page | Segmentation Tutorial |

Bring your own model guide

WherobotsAI Raster Inference supports bringing your own TorchScript model through the Machine Learning Model Extension Specification (MLM).

Porting your own model into the platform lets you use Raster Inference for batch inference processing.

For a detailed guide on porting your TorchScript model, see Bring Your Own Model in the WherobotsAI Documentation.
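For orientation, here is a rough sketch of the kind of properties an MLM STAC item records about a model. The field values below are illustrative assumptions, not a validated MLM item; consult the MLM specification and the Bring Your Own Model guide for the required fields.

```python
# Illustrative sketch of Machine Learning Model (MLM) extension properties
# describing a TorchScript model. All values are assumptions for illustration.
mlm_properties = {
    "mlm:name": "my-solar-segmentation",            # hypothetical model name
    "mlm:framework": "pytorch",                     # model exported to TorchScript
    "mlm:tasks": ["semantic-segmentation"],
    "mlm:input": [{"name": "sentinel-2 imagery", "bands": ["B02", "B03", "B04"]}],
    "mlm:output": [{"name": "mask", "tasks": ["semantic-segmentation"]}],
}
print(sorted(mlm_properties))
```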

Raster Inference Job Run guide

Streamline, automate, and manage complex ETL workloads with Airflow.

Use the WherobotsRunOperator to orchestrate Job Runs within your DAGs, executing Raster Inference non-interactively for efficient processing of vector and raster data.

For more information on creating raster inference job runs, see the Raster inference section in the WherobotsRunOperator Documentation.