ACTIVE · shadyknollcave.io

SPECTRA

Spatiotemporal Platform for Efficient Cloud Raster Access

Zero-copy inference  ·  Accelerated mission detection  ·  Cloud-native geospatial intelligence


Three primitives.
Infinite scale.

SPECTRA is built on open geospatial standards — no proprietary lock-in, no monolithic servers, no pre-tiling marathons.

🗂
STAC 1.0

SpatioTemporal Asset Catalog

A universal language for geospatial data. Every image is a STAC Item — a JSON record with geometry, datetime, and typed asset links (raw JPG, ortho thumbnail, COG). Catalogs are static JSON files served from any CDN, consumed by any compliant viewer.

Open Catalog →
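Because a STAC Item is plain JSON, one can be sketched as a dict with no tooling at all. The id, coordinates, and asset hrefs below are illustrative placeholders, not entries from the live catalog:

```python
import json

# Minimal STAC Item -- ids, coordinates, and hrefs are illustrative
# placeholders, not real catalog entries.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-drone-capture",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-77.362, 38.615], [-77.360, 38.615],
                         [-77.360, 38.616], [-77.362, 38.616],
                         [-77.362, 38.615]]],
    },
    "bbox": [-77.362, 38.615, -77.360, 38.616],
    "properties": {"datetime": "2026-03-29T15:03:14Z"},
    "assets": {
        "raw":       {"href": "./capture.jpg", "type": "image/jpeg"},
        "thumbnail": {"href": "./thumb.jpg",   "type": "image/jpeg"},
        "visual":    {"href": "./ortho.tif",
                      "type": "image/tiff; application=geotiff; profile=cloud-optimized"},
    },
    "links": [],
}

print(json.dumps(item, indent=2))
```

Dropping files like this next to a `catalog.json` is the entire publishing step: no database, no API server, just static JSON on a CDN.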
🌐
COG · GeoTIFF

Cloud Optimized GeoTIFF

GeoTIFFs engineered for HTTP range requests. Internal tiling + overviews mean a client fetches only the pixels it needs — a 512×512 chip from a 4000×3000 scene costs one round-trip. No preprocessing, no mosaic server, no waiting.

Inspect a COG →
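The cost model above is simple arithmetic: with 512-px internal tiles, the number of byte ranges a reader must fetch is the count of tiles the requested window touches. A pure-Python sketch (no GDAL needed) of that count:

```python
import math

def tiles_touched(col_off, row_off, width, height, tile=512):
    """Count internal COG tiles that a pixel window overlaps.

    Each internal tile is one contiguous byte range, so (with a
    range-coalescing reader) this approximates the number of HTTP
    range requests a chip costs.
    """
    cols = math.floor((col_off + width - 1) / tile) - math.floor(col_off / tile) + 1
    rows = math.floor((row_off + height - 1) / tile) - math.floor(row_off / tile) + 1
    return cols * rows

# A tile-aligned 512x512 chip from a 4000x3000 scene: a single tile.
print(tiles_touched(1024, 512, 512, 512))   # -> 1
# The same chip shifted off the tile grid straddles four tiles.
print(tiles_touched(1000, 500, 512, 512))   # -> 4
```

Overviews extend the same trick to zoomed-out views: a full-scene preview reads a few low-resolution tiles instead of every full-resolution one.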
TiTiler · FastAPI

Dynamic Tile Server

A FastAPI tile server that reads COGs on the fly. Pass any COG URL and get back XYZ tiles, bbox crops, previews, or band statistics — all computed from range-reads against the source file. Zero data duplication.

Browse the API →
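Because every query is just a URL with the COG's location as a parameter, client code reduces to string building. A sketch of the three endpoints shown on this page, with a placeholder COG URL (exact parameter sets vary by TiTiler version):

```python
from urllib.parse import urlencode

BASE = "https://titiler.shadyknollcave.io"   # tile server from this page
COG  = "https://example.com/ortho.tif"       # placeholder COG URL

def preview_url(cog, max_size=512):
    # /cog/preview.jpg renders a downsampled overview of the whole scene
    return f"{BASE}/cog/preview.jpg?" + urlencode({"url": cog, "max_size": max_size})

def bbox_url(cog, minx, miny, maxx, maxy):
    # /cog/bbox/{minx},{miny},{maxx},{maxy}.jpg crops a geographic window
    return f"{BASE}/cog/bbox/{minx},{miny},{maxx},{maxy}.jpg?" + urlencode({"url": cog})

def info_url(cog):
    # /cog/info returns bounds, band count, dtype, and overviews as JSON
    return f"{BASE}/cog/info?" + urlencode({"url": cog})

print(bbox_url(COG, -77.3616, 38.6152, -77.3608, 38.6156))
```

Swap the `url` parameter and the same server reads any COG on the internet; nothing is registered or ingested beforehand.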

From sensor to inference.

Every drone image flows through a deterministic pipeline: raw capture → metadata extraction → three published assets → on-demand chips → CV model.

🚁
Drone JPEG
raw · EXIF+XMP
📋
STAC Item
geo · datetime · assets
🗺
COG Assets
ortho · 4000×3000px
TiTiler
chip on demand
🤖
CV / YOLO
geo-registered output
inference_pipeline.py
# SPECTRA CV pipeline — loop the catalog, infer on every COG
import pystac, rasterio
from ultralytics import YOLO

# ── 1. load catalog (static JSON, no API server needed) ──────────────────
catalog = pystac.read_file("https://drone-catalog.shadyknollcave.io/catalog.json")
model   = YOLO("yolo11n.pt")

# ── 2. iterate items, range-read only the pixels we need ─────────────────
for item in catalog.get_all_items():
    cog_url = pystac.utils.make_absolute_href(
        item.assets["visual"].href,
        item.get_self_href()
    )
    with rasterio.open(cog_url) as ds:
        img = ds.read([1, 2, 3]).transpose(1, 2, 0)  # (H, W, 3) uint8

        # ── 3. run inference ─────────────────────────────────────────────
        results = model(img)

        # ── 4. project detections back to WGS-84 ─────────────────────────
        for box in results[0].boxes.xyxy.tolist():
            row, col = box[1], box[0]          # top-left corner of the box
            lon, lat = ds.xy(row, col)         # pixel → lon/lat (dataset CRS)
            print(f"detection at {lon:.6f}, {lat:.6f}")

Every URL is a live query.

These are real HTTP calls — click any card to see TiTiler fetch and render a section of the drone COG in real time, using HTTP range requests.

COG Preview
PREVIEW

Full-scene overview

titiler.shadyknollcave.io/cog/preview.jpg?url=…/8d09ed3…_ortho.tif
Open live →
BBox Crop
BBOX CROP

Geographic chip extraction

titiler.shadyknollcave.io/cog/bbox/-77.3616,38.6152,-77.3608,38.6156.jpg?url=…
Open live →
JSON response
COG INFO

Bounds, bands & overviews

titiler.shadyknollcave.io/cog/info?url=…/8d09ed3…_ortho.tif
Open live →
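The `/cog/info` response can drive a pipeline directly. A sketch against an illustrative payload (the field names follow TiTiler's info model; the values here are made up, not from the live server):

```python
# Illustrative /cog/info response -- values are made up, not live data.
info = {
    "bounds": [-77.3620, 38.6150, -77.3600, 38.6160],
    "width": 4000,
    "height": 3000,
    "count": 3,
    "dtype": "uint8",
    "overviews": [2, 4, 8, 16],
}

minx, miny, maxx, maxy = info["bounds"]
# Rough ground resolution in degrees per pixel (ignores projection)
deg_per_px = (maxx - minx) / info["width"]
print(f"{info['count']}-band {info['dtype']}, ~{deg_per_px:.2e} deg/px, "
      f"{len(info['overviews'])} overview levels")
```

One metadata call like this is how a client decides which overview level or chip size to request before touching a single pixel.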
49
STAC Items
3
Assets per Item
4K
COG Resolution
1
HTTP Round-trip per Chip

Inference at the speed
of the network.

The STAC → COG → TiTiler stack is not just convenient — it removes every bottleneck between a raw drone capture and a production CV pipeline.

01

Zero-copy inference

COG range reads mean the CV pipeline never downloads a full scene. Fetch only the 512-px chip that covers your detection ROI. A 4000×3000 COG streams as 16 KB of headers, not 16 MB of pixels.

02

Geo-registered detections

Every pixel has a coordinate. ds.xy(row, col) maps YOLO bounding boxes back to lon/lat — output is GeoJSON, not pixel rectangles. Detections land directly on a map.

03

Catalog-driven pipelines

Loop catalog.get_all_items() to process a full mission in one script. STAC metadata — GSD, altitude, gimbal pitch — flows into model confidence scoring and false-positive filtering.
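Point 02 in code: mapping a pixel box to a GeoJSON Feature needs nothing beyond the COG's affine geotransform. The transform below is an assumed north-up placeholder; in the pipeline it comes from `ds.transform`, and `pixel_to_lonlat` performs the same math as `ds.xy`:

```python
def pixel_to_lonlat(transform, row, col):
    """Apply an affine geotransform (a, b, c, d, e, f) to a pixel:
    lon = a*col + b*row + c,  lat = d*col + e*row + f
    """
    a, b, c, d, e, f = transform
    return a * col + b * row + c, d * col + e * row + f

def box_to_feature(transform, box, label, score):
    """Turn an xyxy pixel box into a GeoJSON Point at the box centre."""
    x1, y1, x2, y2 = box
    lon, lat = pixel_to_lonlat(transform, (y1 + y2) / 2, (x1 + x2) / 2)
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"label": label, "score": score},
    }

# Assumed north-up transform: 5e-7 deg/px, origin at the scene's NW corner.
T = (5e-7, 0.0, -77.3620, 0.0, -5e-7, 38.6160)
feat = box_to_feature(T, [100, 200, 150, 260], "car", 0.93)
print(feat["geometry"]["coordinates"])
```

Collect these features into a FeatureCollection and the mission's detections drop straight onto any web map.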

Real detections.
Real drone data.

facebook/detr-resnet-50 running on COG chips fetched via TiTiler HTTP range requests — no full-scene download, no preprocessing pipeline.

Drone imagery with car and truck detections
ITEM · 8d09ed33… · DJI_20260329150314_0001_D
25 objects detected
car ×21
0.59–0.98
truck ×4
0.51–0.73
25 detections DETR ResNet-50
View STAC item →
Drone imagery with car detections
ITEM · d469b9f5…
7 objects detected
car ×7
0.61–0.99
7 detections DETR ResNet-50
View STAC item →
Drone imagery with car detections
ITEM · 4655598c…
6 objects detected
car ×6
0.70–0.96
6 detections DETR ResNet-50
View STAC item →
Drone imagery with car detections
ITEM · f6b6c520…
6 objects detected
car ×6
0.67–0.90
6 detections DETR ResNet-50
View STAC item →

Each image is a 512-px chip fetched via a single TiTiler /cog/preview call — no full GeoTIFF download. The model is facebook/detr-resnet-50 (COCO, Apache-2.0), loaded from Hugging Face with transformers. Bounding boxes are drawn with pixel coordinates from the orthorectified COG — map them back to lon/lat with ds.xy(row, col).
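DETR emits boxes as normalized (cx, cy, w, h); scaling them to pixel xyxy on the chip is a few lines. A pure-Python sketch of that conversion (transformers' `post_process_object_detection` does the equivalent rescaling for you):

```python
def cxcywh_to_xyxy_pixels(box, width, height):
    """Convert a DETR box (cx, cy, w, h in 0..1) to pixel (x1, y1, x2, y2)."""
    cx, cy, w, h = box
    return (
        (cx - w / 2) * width,  (cy - h / 2) * height,
        (cx + w / 2) * width,  (cy + h / 2) * height,
    )

# A box centred on a 512-px chip, covering the middle quarter.
print(cxcywh_to_xyxy_pixels((0.5, 0.5, 0.25, 0.25), 512, 512))
# -> (192.0, 192.0, 320.0, 320.0)
```

From there, the pixel corners feed straight into `ds.xy(row, col)` for geo-registration, exactly as in the pipeline above.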