Spatiotemporal Platform for Efficient Cloud Raster Access
Zero-copy inference · Accelerated mission detection · Cloud-Native Geospatial Intelligence
SPECTRA is built on open geospatial standards — no proprietary lock-in, no monolithic servers, no pre-tiling marathons.
A universal language for geospatial data. Every image is a STAC Item — a JSON record with geometry, datetime, and typed asset links (raw JPG, ortho thumbnail, COG). Catalogs are static JSON files served from any CDN, consumed by any compliant viewer.
Open Catalog →
GeoTIFFs engineered for HTTP range requests. Internal tiling and overviews mean a client fetches only the pixels it needs — a 512×512 chip from a 4000×3000 scene costs one round-trip. No preprocessing, no mosaic server, no waiting.
Inspect a COG →
A FastAPI tile server that reads COGs on the fly. Pass any COG URL and get back XYZ tiles, bbox crops, previews, or band statistics — all computed from range reads against the source file. Zero data duplication.
Browse the API →
Every drone image flows through a deterministic pipeline: raw capture → metadata extraction → three published assets → on-demand chips → CV model.
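Concretely, a minimal STAC Item covering one capture and its three published assets might look like this (every field value below is an illustrative placeholder, not a record from the live catalog):

```python
# A minimal STAC Item as a Python dict — id, geometry, datetime, and the
# three published assets. All values are illustrative placeholders.
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "capture-001",
    "geometry": {"type": "Point", "coordinates": [-77.3612, 38.6154]},
    "properties": {"datetime": "2024-05-01T14:30:00Z", "gsd": 0.02},
    "assets": {
        "raw": {"href": "./capture-001.jpg", "type": "image/jpeg"},
        "thumbnail": {"href": "./capture-001_thumb.jpg", "type": "image/jpeg"},
        "visual": {
            "href": "./capture-001_ortho.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        },
    },
    "links": [],
}
```

Because an Item is plain GeoJSON, publishing one is a static file upload — no database, no API server.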
```python
# SPECTRA CV pipeline — loop the catalog, infer on every COG
import pystac
import rasterio
from ultralytics import YOLO

# ── 1. load catalog (static JSON, no API server needed) ──────────────────
catalog = pystac.read_file("https://drone-catalog.shadyknollcave.io/catalog.json")
model = YOLO("yolo11n.pt")

# ── 2. iterate items, range-read only the pixels we need ─────────────────
for item in catalog.get_all_items():
    cog_url = pystac.utils.make_absolute_href(
        item.assets["visual"].href, item.get_self_href()
    )
    with rasterio.open(cog_url) as ds:
        img = ds.read([1, 2, 3]).transpose(1, 2, 0)  # (H, W, 3) uint8

        # ── 3. run inference ─────────────────────────────────────────────
        results = model(img)

        # ── 4. project detections back to WGS-84 ─────────────────────────
        for box in results[0].boxes.xyxy.tolist():
            lon, lat = ds.xy(box[1], box[0])  # pixel (row, col) → lon/lat
            print(f"detection at {lon:.6f}, {lat:.6f}")
```
These are real HTTP calls — click any card to see TiTiler fetch and render a section of the drone COG in real time, using HTTP range requests.
titiler.shadyknollcave.io/cog/preview.jpg?url=…/8d09ed3…_ortho.tif
titiler.shadyknollcave.io/cog/bbox/-77.3616,38.6152,
-77.3608,38.6156.jpg?url=…
titiler.shadyknollcave.io/cog/info?url=…/8d09ed3…_ortho.tif
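The calls above can also be built programmatically — a small sketch that composes the same three request URLs (the hostname is the one shown above; the `/cog/preview`, `/cog/bbox`, and `/cog/info` paths follow TiTiler's COG endpoints):

```python
from urllib.parse import urlencode

TITILER = "https://titiler.shadyknollcave.io"

def preview_url(cog_url: str, max_size: int = 512) -> str:
    """Downsampled whole-scene preview via /cog/preview."""
    return f"{TITILER}/cog/preview.jpg?{urlencode({'url': cog_url, 'max_size': max_size})}"

def bbox_url(cog_url: str, minx: float, miny: float, maxx: float, maxy: float) -> str:
    """Lon/lat crop via /cog/bbox — only the covering tiles are range-read."""
    return f"{TITILER}/cog/bbox/{minx},{miny},{maxx},{maxy}.jpg?{urlencode({'url': cog_url})}"

def info_url(cog_url: str) -> str:
    """Band and overview metadata via /cog/info."""
    return f"{TITILER}/cog/info?{urlencode({'url': cog_url})}"
```

Fetching any of these with a plain HTTP GET returns the rendered JPEG (or JSON, for `/cog/info`) — no SDK required.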
The STAC → COG → TiTiler stack is not just convenient — it removes every bottleneck between a raw drone capture and a production CV pipeline.
COG range reads mean the CV pipeline never downloads a full scene. Fetch only the 512-px chip that covers your detection ROI. A 4000×3000 COG streams as 16 KB of headers, not 16 MB of pixels.
Every pixel has a coordinate. ds.xy(row, col) maps YOLO bounding boxes back to lon/lat — output is GeoJSON, not pixel rectangles. Detections land directly on a map.
Loop catalog.get_all_items() to process a full mission in one script. STAC metadata — GSD, altitude, gimbal pitch — flows into model confidence scoring and false-positive filtering.
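One way metadata can feed scoring — a hypothetical heuristic (the base value, the linear ramp, and the cap are all illustrative, not part of the pipeline) that raises the detection cut-off as ground-sample distance gets coarser:

```python
def confidence_cutoff(properties: dict, base: float = 0.25) -> float:
    """Scale the detection confidence cut-off with STAC's standard `gsd`
    field (metres per pixel): +0.05 per extra centimetre beyond 1 cm/px,
    capped at 0.9. All thresholds here are illustrative."""
    gsd = properties.get("gsd", 0.01)
    extra_cm = max(0.0, (gsd - 0.01) * 100.0)
    return min(0.9, base + 0.05 * extra_cm)
```

Detections scoring below the cut-off for their item's GSD are dropped before they ever reach the map.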
facebook/detr-resnet-50 running on COG chips fetched via TiTiler HTTP range requests — no full-scene download, no preprocessing pipeline.
Each image is a 512-px chip fetched via a single TiTiler /cog/preview call — no full GeoTIFF download.
The model is facebook/detr-resnet-50 (COCO, Apache-2.0), loaded from Hugging Face with transformers.
Bounding boxes are drawn with pixel coordinates from the orthorectified COG — map them back to lon/lat with ds.xy(row, col).