
Before you start
This is a read-only preview of this notebook. To execute the cells in this Jupyter Notebook, do the following:
- Log in to Wherobots Cloud.
- Start a GPU-Optimized runtime instance.
- Open a notebook. We recommend using a Tiny GPU-Optimized runtime.
- Click File > Open from Path….
- Enter the examples/Analyzing_Data/Raster_Text_To_Segments_Airplanes.ipynb path.
Access a GPU-Optimized runtime
This notebook requires a GPU-Optimized runtime. For more information on GPU-Optimized runtimes, see Runtime types. To access this runtime category, do the following:
- Sign up for a paid Wherobots Organization Edition (Professional or Enterprise).
- Submit a Compute Request for a GPU-Optimized runtime.
Start WherobotsDB
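In the notebook, this step amounts to creating a Sedona context. A minimal sketch using the standard SedonaContext pattern (cluster configuration options omitted):

```python
# Minimal sketch: start a WherobotsDB (Apache Sedona) session in the notebook.
from sedona.spark import SedonaContext

config = SedonaContext.builder().getOrCreate()
sedona = SedonaContext.create(config)
```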
Load Aerial Imagery Efficiently
In this step, we’ll load the aerial imagery so we can run inference in a later step. The GeoTIFF image is large, so we’ll split it into tiles and load those tiles as out-of-database or “out-db” rasters in WherobotsDB.
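The notebook handles the tiling and loading itself; as a rough sketch of the out-db loading pattern, assume the tiles are already available at individual URIs (tile_uris, the view names, and column names below are all illustrative) and that RS_FromPath is used to register each file as an out-db raster:

```python
# Rough sketch: register pre-tiled GeoTIFFs as out-db rasters in WherobotsDB.
# `tile_uris` (a list of s3:// or https:// tile locations), the view names, and the
# column names are illustrative -- the notebook builds its own tile listing.
tile_paths_df = sedona.createDataFrame([(uri,) for uri in tile_uris], ["path"])
tile_paths_df.createOrReplaceTempView("tile_paths")

# RS_FromPath references the file at `path` rather than copying pixels into the
# database, which is what makes these "out-db" rasters.
tiles_df = sedona.sql("""
    SELECT path, RS_FromPath(path) AS outdb_raster
    FROM tile_paths
""")
tiles_df.createOrReplaceTempView("tiles")
```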
Viewing the Model’s Imagery Inputs
We can see the footprints of the tiled images with the SedonaKepler.create_map() integration. Using SedonaUtils.display_image(), we can view the images as well.
Tip: Save the map to an HTML file using kepler_map.save_to_html().
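A sketch of what those calls look like together, assuming the tiles_df DataFrame with an outdb_raster column from above, the standard sedona.spark exports, and RS_Envelope / RS_AsImage to derive footprints and preview images:

```python
from sedona.spark import SedonaKepler, SedonaUtils

# Tile footprints as geometries; RS_Envelope returns the bounding geometry of each raster.
footprints_df = tiles_df.selectExpr("RS_Envelope(outdb_raster) AS geometry", "path")
kepler_map = SedonaKepler.create_map(df=footprints_df, name="tile_footprints")
kepler_map  # renders the interactive map inline

# Preview the tile imagery itself; the 500-pixel width is an illustrative choice.
SedonaUtils.display_image(tiles_df.selectExpr("RS_AsImage(outdb_raster, 500) AS image"))

# Persist the interactive map, as the tip above suggests.
kepler_map.save_to_html(file_name="tile_footprints.html")
```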




Run Inference and Visualize Results
To run inference, specify the model to use with its model id. Five models are pre-loaded and made available in Wherobots Cloud to Professional and Enterprise customers. You can also load your own models; learn more about that process here.
Inference can be run using Wherobots’ Spatial SQL functions, in this case: RS_Text_to_Segments().
Here, we generate predictions for all images in the Region of Interest (ROI). In the output, a label value of 1 signifies a positive prediction corresponding to the input text prompt.
Then, we’ll filter and print a few of the results to see how our positive detections look.
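As an illustrative sketch (not the notebook’s exact code): the argument order of RS_Text_To_Segments shown here (model id, raster, text prompt) and the fields of the struct it returns are assumptions based on the description above and the columns referenced in the next section.

```python
# Illustrative sketch of a text-prompted segmentation query.
# Argument order and returned struct fields are assumptions -- see the Wherobots docs
# for the exact RS_Text_To_Segments signature.
prompt = "airplane"

predictions_df = sedona.sql(f"""
    SELECT outdb_raster, preds.*
    FROM (
        SELECT outdb_raster,
               RS_Text_To_Segments('sam2', outdb_raster, '{prompt}') AS preds
        FROM tiles
    )
""")
predictions_df.createOrReplaceTempView("predictions")

# Peek at a few positive detections; a label of 1 marks a match for the prompt
# (the label column's exact type is assumed here).
from pyspark.sql import functions as F

predictions_df.filter(F.array_contains("max_confidence_labels", 1)).show(5, truncate=False)
```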
Prepare Results
Before plotting our predictions, we need to transform our results so that each row corresponds to a single predicted bounding box, rather than a single raster scene holding every bounding box prediction. Bounding boxes (or bboxes) are simply boundaries drawn around an object of interest. To do this, combine the list columns containing our prediction results (max_confidence_bboxes, max_confidence_scores, and max_confidence_labels) with arrays_zip. Then, use explode to convert the zipped lists into rows.
To map the results with SedonaKepler, convert the max_confidence_bboxes column to a GeometryType column with ST_GeomFromWKT.
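Putting those two steps together, a sketch assuming the flat prediction columns described above (the output aliases and view names are illustrative):

```python
from pyspark.sql.functions import arrays_zip, col, explode

# One row per predicted bounding box: zip the parallel prediction lists, then explode into rows.
exploded_df = (
    predictions_df
    .withColumn(
        "prediction",
        explode(arrays_zip("max_confidence_bboxes", "max_confidence_scores", "max_confidence_labels")),
    )
    .select(
        "outdb_raster",
        col("prediction.max_confidence_bboxes").alias("bbox_wkt"),
        col("prediction.max_confidence_scores").alias("confidence"),
        col("prediction.max_confidence_labels").alias("label"),
    )
)
exploded_df.createOrReplaceTempView("exploded_predictions")

# Convert the WKT bounding boxes into a GeometryType column so SedonaKepler can map them.
results_df = sedona.sql("""
    SELECT outdb_raster, ST_GeomFromWKT(bbox_wkt) AS geometry, confidence, label
    FROM exploded_predictions
""")
```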
Viewing Model Results: Airplane Segmentation Predictions
Just like we visualized the footprints of the tiled images earlier, we can also view our prediction geometries! Highlight a prediction to view its confidence score.
Detections can also be plotted directly over the imagery with the show_detections function. This function accepts a DataFrame containing an outdb_raster column, as well as other arguments to control the plot result. Check out the full docs for the function by calling show_detections?
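For example, mapping the prediction geometries mirrors the earlier footprint map; the show_detections call is shown only schematically, since its import and full argument list come from the notebook environment and aren’t reproduced in this walkthrough.

```python
# Map the prediction geometries just like the tile footprints earlier; the raster column is
# dropped because the map only needs the geometries and their scores.
pred_map = SedonaKepler.create_map(df=results_df.drop("outdb_raster"), name="airplane_predictions")
pred_map

# show_detections is a Wherobots notebook helper; its import and extra plot-control
# arguments aren't shown here, so the calls below are only schematic.
# show_detections(results_df)
# show_detections?   # view the function's documentation in the notebook
```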

Running Object Detection with a Text Prompt
We can also get bounding box predictions instead of segments using RS_Text_To_BBoxes. BBoxes, or bounding boxes, are more useful when you only need to count and localize objects, rather than delineate their exact shape and area with RS_Text_To_Segments.
The inference process is largely the same for RS_Text_To_BBoxes and RS_Text_To_Segments.
There are two key differences (a sketch follows the list):
- Using the owlv2 model_id instead of sam2.
- Changing our SQL queries to operate on the bboxes_wkt column instead of the segments_wkt column when working with prediction results.
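A sketch of the bounding-box variant, assuming RS_Text_To_BBoxes takes the same argument order as the RS_Text_To_Segments sketch above:

```python
# Sketch of the bounding-box variant; the argument order is assumed to mirror
# RS_Text_To_Segments. Downstream steps (arrays_zip/explode, ST_GeomFromWKT, mapping)
# then read the bboxes_wkt column instead of segments_wkt.
bbox_predictions_df = sedona.sql("""
    SELECT outdb_raster, preds.*
    FROM (
        SELECT outdb_raster,
               RS_Text_To_BBoxes('owlv2', outdb_raster, 'airplane') AS preds
        FROM tiles
    )
""")
```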

Next Steps with Raster Inference
With access to general-purpose, text-promptable models, what will you predict and georeference next? Some ideas for next steps to try:
- Predicting different objects next to the airplanes in the image tiles above using new text prompts.
- Adjusting the confidence score threshold for RS_Text_to_Segments or RS_Text_to_BBoxes to see how SAM2 or OWLv2 respond.
- Loading a new imagery dataset with our STAC Reader and trying to predict a different feature of interest, such as agriculture, buildings, or tree crowns.

