RasterFlow is currently in Private Preview.

Wherobots RasterFlow is a powerful engine for large-scale raster processing and model inference. It enables you to build mosaics from multiple raster data sources, run inference with computer vision models, and vectorize the results, all through a simple, high-level API.

Why choose RasterFlow?

RasterFlow provides a managed workflow execution environment for geospatial raster processing tasks. It abstracts away the complexity of raster pipelines and distributed computing, allowing you to focus on your analysis rather than data engineering and infrastructure management. Key capabilities include:
  • Planetary-scale processing: Process raster data at massive scale with optimized chunking, sharding, and parallel processing
  • Simple, high-level API: Abstract away the complexity with pre-configured datasets and models—or bring your own
  • Build mosaics: Combine multiple raster datasets into unified, analysis-ready mosaics
  • Run model inference: Apply machine learning models to massive raster datasets at scale
  • Vectorize results: Convert raster predictions into vector geometries for spatial analysis
  • Custom workflows: Build flexible pipelines with your own data, models, and processing parameters
  • Standard format support: Work with GeoTIFF, Zarr, and GeoParquet formats

Key concepts

The following concepts are fundamental to understanding how RasterFlow works:

Mosaics

Mosaics are spatially aligned raster datasets stored in Zarr format. RasterFlow builds mosaics from aerial and satellite imagery to prepare it for model inference, combining one or more datasets across an Area of Interest (AOI) and a temporal dimension into a single, seamless input mosaic. RasterFlow provides workflows for building mosaics from the built-in datasets or from your own imagery.
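
RasterFlow performs mosaic construction as a managed, distributed workflow; the actual client calls are covered in the Client API Reference. For intuition only, the minimal sketch below shows the same idea on a single machine with the open-source rioxarray library: align scenes on a common CRS, merge them, clip to an AOI, and write an analysis-ready Zarr store. The file paths, CRS, and bounding box are placeholders, and this is not the RasterFlow API.

```python
# Conceptual, single-machine illustration of mosaic building with rioxarray.
# NOT the RasterFlow API; paths, CRS, and the bounding box are placeholders.
import rioxarray
from rioxarray.merge import merge_arrays

# Open two overlapping scenes (placeholder GeoTIFF paths).
scenes = [rioxarray.open_rasterio(p) for p in ["scene_a.tif", "scene_b.tif"]]

# Align the scenes on a common CRS, then merge them into one seamless array.
aligned = [scene.rio.reproject("EPSG:4326") for scene in scenes]
mosaic = merge_arrays(aligned)

# Clip to the Area of Interest (lon/lat bounding box) and write an
# analysis-ready Zarr store, the format RasterFlow uses for mosaics.
mosaic = mosaic.rio.clip_box(minx=-101.1, miny=37.4, maxx=-100.6, maxy=37.8)
mosaic.to_dataset(name="imagery").to_zarr("mosaic.zarr")
```

RasterFlow applies the same align, merge, and clip steps, plus temporal compositing, using chunking, sharding, and parallel processing so the result scales well beyond a single machine.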

Model inference

Run computer vision models on raster data at scale (a conceptual sketch follows this list):
  • Semantic Segmentation: Classify each pixel (e.g., land cover mapping)
  • Regression: Predict continuous values (e.g., canopy height estimation in meters)
  • Patch-based Processing: Handle large mosaics by dividing them into manageable patches
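
RasterFlow runs these models for you at scale, but the patch-based pattern itself is easy to picture. The sketch below is a plain NumPy illustration, not the RasterFlow API: a stand-in model callable is applied to fixed-size patches of a mosaic and the per-patch predictions are stitched back into a full-size output.

```python
# Conceptual sketch of patch-based inference on a large mosaic.
# The "model" is a stand-in callable; in RasterFlow you would select a
# built-in computer vision model (or bring your own) and run it at scale.
import numpy as np

def run_patched_inference(mosaic: np.ndarray, model, patch: int = 512) -> np.ndarray:
    """Apply `model` to non-overlapping patches and stitch the predictions."""
    h, w = mosaic.shape[-2:]
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = mosaic[..., y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = model(tile)  # per-patch prediction
    return out

# Toy example: a "segmentation model" that thresholds the first band.
fake_mosaic = np.random.rand(4, 1024, 1024).astype(np.float32)  # (bands, H, W)
predictions = run_patched_inference(
    fake_mosaic, lambda tile: (tile[0] > 0.5).astype(np.float32)
)
```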

Vectorization

Convert raster predictions to vector geometries (a conceptual sketch follows this list):
  • Threshold-based: Binarize continuous predictions
  • Polygonization: Create polygon features from classified pixels
  • Coordinate transformation: Reproject to desired CRS (e.g., WGS84)
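
For intuition, here is what those three steps (threshold, polygonize, reproject) look like on a single in-memory prediction array, using the open-source rasterio and geopandas libraries rather than the RasterFlow API. The score threshold, affine transform, and source CRS are illustrative placeholders.

```python
# Conceptual sketch of vectorization: threshold -> polygonize -> reproject.
# NOT the RasterFlow API; threshold, transform, and source CRS are placeholders.
import numpy as np
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

predictions = np.random.rand(256, 256).astype(np.float32)   # continuous scores
mask = (predictions >= 0.5).astype(np.uint8)                 # threshold-based binarization

transform = from_origin(500_000, 4_200_000, 10, 10)          # placeholder UTM grid, 10 m pixels
polygons = [
    shape(geom)
    for geom, value in features.shapes(mask, mask=mask.astype(bool), transform=transform)
    if value == 1                                             # keep only positive pixels
]

gdf = gpd.GeoDataFrame(geometry=polygons, crs="EPSG:32614")   # source CRS (placeholder)
gdf = gdf.to_crs("EPSG:4326")                                 # reproject to WGS84
```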

Get Started

To get started, log in to Wherobots Cloud and try out one of the built-in models by running a notebook in the Model Hub. The examples below demonstrate real-world applications that you can adapt to your own data and use cases. Each notebook provides a complete, working implementation to help you get started quickly.
| Use Case | Capabilities | Example Application | Notebook |
| --- | --- | --- | --- |
| Agricultural Field Mapping | Detect field boundaries from Sentinel-2 imagery; segment crop fields across counties/regions; convert raster predictions to vector geometries | Map all agricultural fields in Haskell County, Kansas using Sentinel-2 imagery and the Fields of the World model | Try it |
| Urban Infrastructure Detection | Identify sidewalks, crosswalks, and pedestrian pathways; generate detailed maps for urban planning; analyze accessibility from high-resolution aerial imagery | Detect and map sidewalk networks in College Park, Maryland using 30cm NAIP imagery with the Tile2Net model | Try it |
| Canopy Height Estimation | Predict tree canopy heights from aerial imagery; monitor forest health and vegetation structure; support conservation and urban forestry initiatives | Estimate tree heights across Nashua, NH using 60cm NAIP imagery with the Meta CHM v1 model | Try it |
| Rural Road Detection | Identify roads, especially in rural environments; map road networks to support routing and navigation; detect road network changes to keep maps up to date | Detect roads in Maryland using 1m NAIP imagery with the ChesapeakeRSC model | Try it |

Next steps

  • Try out the pre-configured model solutions notebooks in the Model Hub in Wherobots Cloud
  • Explore the Client API Reference to learn about all available methods
  • Review Data Models to understand configuration options

API reference

For detailed API documentation, see: