Core Concepts
The Rekord API is built around six core resources, all scoped to a workspace — an isolated environment containing your organisation's data. Every API token operates within a single workspace.
Understanding how these resources relate to each other is essential before working with any endpoint.
Models
A model defines the shape of a type of data — the fields, their types, validation rules, resolution policies, and relationships to other models. Think of a model as a schema: "Applicant" might have fields for name, date of birth, and annual income; "Property" might have fields for address, purchase price, and estimated value.
Models progress through a lifecycle:
Draft — the model is being configured. No records can be created against it.
Active — the model is live. Records can be created, read, updated, and deleted.
Archived — the model is frozen. Existing records remain accessible, but no new records can be created and the schema cannot be modified.
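As a rough sketch, a model definition might look like the structure below. The field names, types, and overall shape here are illustrative assumptions, not the actual Rekord API schema — the point is the lifecycle check: records can only be created against an active model.

```python
# Hypothetical model definition for an "Applicant" model.
# The payload shape is an assumption for illustration only.
applicant_model = {
    "name": "Applicant",
    "status": "draft",  # lifecycle: draft -> active -> archived
    "fields": [
        {"name": "name", "type": "string", "required": True},
        {"name": "date_of_birth", "type": "date", "required": True},
        {"name": "annual_income", "type": "number", "required": False},
    ],
}

def can_create_records(model: dict) -> bool:
    """Records may only be created against a model in the active state."""
    return model["status"] == "active"
```

A draft model rejects record creation; promoting it to active (and later archiving it, which freezes the schema) is what gates writes.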
Records
A record is the materialised view of a single entity — an individual applicant, a specific property, a particular employment history. It contains the current best answer for every field, assembled from all the evidence the system has received.
Records are not written directly. Instead, evidence arrives as contributions, and the platform's resolution pipeline assembles the record by reconciling all contributions according to the model's configured policies. When you update a record through the API, the platform creates a contribution behind the scenes and re-resolves the record.
Every record carries a freshness indicator:
resolved — the record reflects all available evidence
stale — new evidence has arrived but has not yet been processed
computing — the platform is currently re-resolving the record
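To make the resolution idea concrete, here is a deliberately simplified stand-in for the pipeline: per field, keep the value from the highest-confidence contribution. The real resolution policies are configured per model and are richer than this — this is only a sketch of the concept.

```python
def resolve(contributions: list[dict]) -> dict:
    """Assemble a record from contributions: for each field, keep the
    value carrying the highest per-field confidence score.
    A simplified illustration, not Rekord's actual resolution logic."""
    best: dict[str, tuple[float, object]] = {}  # field -> (confidence, value)
    for c in contributions:
        for field, (value, confidence) in c["fields"].items():
            if field not in best or confidence > best[field][0]:
                best[field] = (confidence, value)
    return {field: value for field, (_, value) in best.items()}
```

A document extraction and a later integration response can then disagree on a field, and the record surfaces whichever evidence the policy trusts more.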
Contributions
A contribution is an immutable piece of evidence. Every data point that enters the system — a document extraction, a third-party integration response, a manual entry, an API write — is persisted as a contribution with the raw payload, per-field confidence scores, and source metadata.
Contributions are never modified after creation. If a source sends corrected data, it arrives as a new contribution that supersedes the previous one. The original is preserved for audit.
The separation between contributions (immutable evidence) and records (mutable interpretation) is fundamental. Correcting a mistake means changing how the system interprets the evidence, not editing the evidence itself.
Relationships
Relationships connect records across models, forming a traversable graph. An "Applicant" might be linked to "Employment" records and a "Property" record. These connections power cross-record aggregations (e.g., summing salary across employment records) and rule evaluations that span multiple entities.
Relationships are established by the platform's resolution pipeline rather than created directly through the API, though you can read and remove relationship links via the API.
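The cross-record aggregation mentioned above can be pictured as a small graph walk. The record shapes and link representation here are assumptions, but they show how a relationship graph enables sums that span entities.

```python
# Hypothetical resolved records, keyed by ID (shapes are illustrative).
records = {
    "app-1": {"model": "Applicant", "name": "Jo Smith"},
    "emp-1": {"model": "Employment", "salary": 42000},
    "emp-2": {"model": "Employment", "salary": 12000},
}
# Relationship links established by the resolution pipeline.
relationships = [("app-1", "emp-1"), ("app-1", "emp-2")]

def linked(record_id: str, model: str) -> list[dict]:
    """Traverse outgoing links from a record, filtered by target model."""
    return [records[target] for source, target in relationships
            if source == record_id and records[target]["model"] == model]

# Sum salary across all Employment records linked to the applicant.
total_salary = sum(e["salary"] for e in linked("app-1", "Employment"))
```

Rule evaluations that span multiple entities (e.g. affordability against total income) would traverse the same links.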
Decisioning Contexts
A decisioning context is the boundary around a single decision — a mortgage application, a loan assessment, an onboarding journey. It contains exactly the records, contributions, relationships, and rule evaluations relevant to that decision. Data within one context cannot leak into another.
Contexts carry two independent status dimensions:
status — the business lifecycle: active, completed, or archived. You control this explicitly.
freshness — the pipeline state: resolved, stale, or computing. The platform manages this automatically.
This separation means a case can be marked "completed" while late-arriving evidence is still being processed — the business decision is recorded, and the data continues to converge independently.
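The independence of the two dimensions can be sketched like this — method names are hypothetical, but the invariant is the point: changing one dimension never touches the other.

```python
class DecisioningContext:
    """Sketch of the two independent status dimensions on a context."""

    def __init__(self):
        self.status = "active"       # business lifecycle: caller-controlled
        self.freshness = "resolved"  # pipeline state: platform-controlled

    def complete(self):
        """Record the business decision; freshness is untouched."""
        self.status = "completed"

    def on_new_evidence(self):
        """Platform marks the context stale; status is untouched."""
        self.freshness = "stale"
```

A completed case receiving late evidence ends up `status == "completed"` and `freshness == "stale"` simultaneously, which is exactly the late-arrival scenario described above.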
Rules
Rules are workspace-scoped expressions written in JSON Logic that evaluate against the full decisioning context. They answer questions like "does this applicant pass the affordability check?" or "is the loan-to-value ratio within policy?".
Each rule carries a severity level:
hard — a blocking condition. The decision cannot proceed if this rule fails.
soft — a warning. The rule flags an issue but does not block.
info — logged for informational purposes only.
Rules have their own lifecycle (draft, active, archived) and are versioned independently. When a rule evaluates, the platform captures a full trace — the expression, the input values, and the logical steps — providing a complete audit trail for regulatory review.
How they fit together
A typical flow:
You define a model to describe your data
You create a decisioning context to scope a decision
Evidence arrives as contributions — from integrations, document processing, or direct API writes
The platform resolves contributions into records, establishes relationships, and evaluates rules
You read the resolved records and rule evaluation outcomes to inform your decision