# Bulk Jobs
Run high-volume imports through Geobridge and stream results as soon as matches are ready. The bulk API pairs asynchronous processing with NDJSON delivery so you can fan out work without waiting for archives to download.
## Endpoints at a glance

- `GET /bulk/jobs/{job_id}/results`: fetch or stream a job's output in the format you choose.

## Streaming NDJSON results

Set `format=ndjson` and send `Accept: application/x-ndjson` to receive a live feed of records. Each line is a standalone JSON document that maps the original `id` to its matched features.
```sh
curl -s -N 'https://api-na.geobridge.io/v1/bulk/jobs/{job_id}/results?format=ndjson' \
  -H 'Accept: application/x-ndjson' \
  -H "X-API-Key: ${GEOBRIDGE_API_KEY}" \
  | tee results.ndjson
```

Each line is a valid JSON document. Pipe the stream into `jq`, queue workers, or downstream storage without waiting for the full job to complete.
```js
const apiKey = process.env.GEOBRIDGE_API_KEY;
const jobId = process.env.GEOBRIDGE_JOB_ID;

const url = new URL(`/v1/bulk/jobs/${jobId}/results`, 'https://api-na.geobridge.io');
url.searchParams.set('format', 'ndjson');

const response = await fetch(url, {
  headers: {
    'Accept': 'application/x-ndjson',
    'X-API-Key': apiKey,
  },
  signal: AbortSignal.timeout(30000),
});

if (!response.ok || !response.body) {
  throw new Error(`Bulk results failed: ${response.status} ${response.statusText}`);
}

// Re-chunk the byte stream into complete lines before parsing.
const decoder = new TextDecoder();
let buffer = '';

for await (const chunk of response.body) {
  buffer += decoder.decode(chunk, { stream: true });
  const lastBreak = buffer.lastIndexOf('\n');
  if (lastBreak === -1) continue;
  const ready = buffer.slice(0, lastBreak).split('\n');
  buffer = buffer.slice(lastBreak + 1);

  for (const line of ready) {
    if (!line) continue;
    const doc = JSON.parse(line);
    console.log(doc.id, doc.matches?.[0]?.properties?.name);
  }
}
```

```python
import json
import os

import requests

api_key = os.environ["GEOBRIDGE_API_KEY"]
job_id = os.environ["GEOBRIDGE_JOB_ID"]

resp = requests.get(
    f"https://api-na.geobridge.io/v1/bulk/jobs/{job_id}/results",
    headers={
        "Accept": "application/x-ndjson",
        "X-API-Key": api_key,
    },
    params={"format": "ndjson"},
    stream=True,
    timeout=30,
)
resp.raise_for_status()

for line in resp.iter_lines():
    if not line:
        continue
    doc = json.loads(line)
    print(doc["id"], len(doc.get("matches", [])))
```

## Response schema
`GET /bulk/jobs/{job_id}/results` returns a `BulkJobResults` document:

- `job` echoes the metadata from `GET /bulk/jobs/{job_id}` so you can reconcile status without an extra request.
- `format` reflects the payload you requested (`geojson`, `ndjson`, or `csv`).
- `results` contains one of:
  - a GeoJSON feature collection (inline when `format=geojson` and the payload is small),
  - a presigned URL for large archives, or
  - a stream of newline-delimited strings for NDJSON.

Refer to `#/components/schemas/BulkJobResults` for exact field definitions.
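Because `results` can arrive in three shapes, client code typically branches before processing. Below is a minimal sketch of that dispatch, assuming the inline variant is a GeoJSON `FeatureCollection` object, the archive variant is a bare URL string, and the NDJSON variant is a list of strings; these discriminators are illustrative assumptions, so consult the `BulkJobResults` schema for the authoritative shapes.

```python
# Sketch: branch on the three documented variants of `results`.
# The discriminators below (FeatureCollection type, URL prefix, list of
# strings) are assumptions for illustration -- check the
# BulkJobResults schema for the real field layout.

def classify_results(doc):
    """Return which documented variant this BulkJobResults holds."""
    results = doc["results"]
    if isinstance(results, dict) and results.get("type") == "FeatureCollection":
        return "inline-geojson"   # small GeoJSON payloads are inlined
    if isinstance(results, str) and results.startswith("https://"):
        return "presigned-url"    # large archives arrive as a URL
    if isinstance(results, list):
        return "ndjson-lines"     # newline-delimited JSON strings
    raise ValueError("unrecognized results shape")
```

A dispatcher like this lets one results handler serve all three formats, downloading the archive only when the presigned-URL variant appears.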
## Webhooks and retries

Pair streaming with a webhook to learn when Geobridge finishes processing. Configure the webhook when you create the job:

```json
{
  "url": "https://example.org/bulk/callback",
  "secret": "${GEOBRIDGE_WEBHOOK_SECRET}",
  "retry_policy": {
    "max_attempts": 5,
    "strategy": "exponential"
  }
}
```

See the Bulk job completion callback for the signing algorithm and replay-mitigation guidance.
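To reason about how long a failing endpoint keeps receiving deliveries under an exponential `retry_policy`, it helps to sketch the delay schedule such a policy produces. The base delay and cap below are illustrative assumptions; Geobridge's actual retry timing is defined server-side.

```python
# Sketch of an exponential retry schedule like the one configured by
# retry_policy above. base_seconds and cap_seconds are assumed values
# for illustration, not Geobridge's actual parameters.

def backoff_schedule(max_attempts, base_seconds=2.0, cap_seconds=300.0):
    """Delay before each attempt: base * 2**n, capped at cap_seconds."""
    return [min(base_seconds * (2 ** n), cap_seconds) for n in range(max_attempts)]

print(backoff_schedule(5))  # [2.0, 4.0, 8.0, 16.0, 32.0] with the assumed defaults
```

With `max_attempts: 5`, deliveries stop after the fifth failure, so your callback endpoint should be idempotent for however many of those attempts do arrive.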