Bulk Operations

The Rekord API provides dedicated bulk endpoints for batch operations on resources. Bulk operations use partial success semantics — individual items in a batch are processed independently, so some items can succeed while others fail. These endpoints are separate from single-resource endpoints — the response format and rate limiting characteristics differ.

Bulk endpoint pattern

Bulk endpoints use the /bulk suffix:

PATCH  /records/bulk       # Batch update records
DELETE /records/bulk       # Batch delete records
POST   /contributions/bulk # Batch ingest contributions

Request format

Bulk requests accept an array of items. Each item carries its own resource identifier:

curl -X PATCH "$REKORD_BASE_URL/records/bulk" \
  -H "Authorization: Bearer $REKORD_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "items": [
      {
        "recordId": "01957c3a-...",
        "data": { "annualIncome": 55000 }
      },
      {
        "recordId": "01957c3a-...",
        "data": { "annualIncome": 62000 }
      }
    ]
  }'

Partial success

Each item in a bulk request is processed independently. Some items can succeed while others fail. The response includes per-item status:
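An illustrative response shape is sketched below. The field names (`results`, `status`, `error`) are assumptions for illustration only; consult the API reference for the exact schema.

```json
{
  "results": [
    {
      "recordId": "01957c3a-...",
      "status": "updated"
    },
    {
      "recordId": "01957c3a-...",
      "status": "failed",
      "error": {
        "code": "validation_error",
        "message": "annualIncome must be a number"
      }
    }
  ]
}
```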

When one or more items fail, the successful items are still applied. Inspect each item's per-item status to identify the failures rather than relying on the overall HTTP status alone.

Limits

Limit                            Value
Maximum items per bulk request   100
Maximum JSON request body size   10 MB

The 100-item limit keeps batch processing synchronous — no background job infrastructure is needed. The 10 MB body size limit accommodates large payloads (such as document extraction contributions with many fields) while preventing unbounded request bodies.

Requests exceeding either limit are rejected with 400 Bad Request.
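Client-side, the 100-item cap means large jobs must be split across multiple requests. A minimal chunking sketch in Python — each yielded batch would become the `items` array of one `PATCH /records/bulk` request, as in the curl example above:

```python
# Split a large update job into batches that respect the documented
# 100-item bulk-request limit.
from typing import Iterator, List

MAX_ITEMS_PER_REQUEST = 100  # documented bulk limit

def batched(items: List[dict], size: int = MAX_ITEMS_PER_REQUEST) -> Iterator[List[dict]]:
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: 250 pending updates become three requests (100 + 100 + 50).
updates = [{"recordId": f"rec-{i}", "data": {"annualIncome": 50000 + i}}
           for i in range(250)]
batches = list(batched(updates))
print([len(b) for b in batches])  # [100, 100, 50]
```

Sending each batch as its own request keeps every call under the limit while preserving per-item partial-success handling within each batch.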

Rate limiting

Bulk operations consume more resources than single-resource operations, so a 100-record batch is rate-limited differently from a single create. Plan your usage accordingly, and use the idempotency header to make retries safe.
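A retry-safe bulk call might build its headers as in the sketch below. The header name `Idempotency-Key` is an assumption for illustration — use whatever header name and retry policy the API's idempotency documentation specifies.

```python
import uuid
from typing import Optional

def bulk_request_headers(token: str, idempotency_key: Optional[str] = None) -> dict:
    """Headers for a bulk request that can be retried safely.

    Reuse the same key when retrying a timed-out batch so the server can
    deduplicate it. NOTE: the "Idempotency-Key" header name is an
    assumption for illustration, not confirmed by this document.
    """
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Idempotency-Key": idempotency_key or str(uuid.uuid4()),
    }

headers = bulk_request_headers("example-token", "batch-2024-001")
print(headers["Idempotency-Key"])  # batch-2024-001
```

Generating one key per logical batch (rather than per HTTP attempt) is what makes resending a failed or timed-out request safe.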
