## What the Knowledge service does
- Live data ingestion via `/api/Knowledge/livedata` keeps resident, listing, or asset records synced field by field.
- Document ingestion turns PDFs and other uploads into searchable knowledge that surfaces in Recallio answers.
- Environment separation lets you map data to projects (for example `Home A`, `Home B`, or `Staging`) so answers stay scoped to the right audience.
## Before you call any endpoint
- Create or locate an API key in the dashboard under API Keys. The generated value is the bearer token you send in the `Authorization` header (see the sketch after this list).
- Create projects for every environment in Settings → Projects. Each project exposes a `ProjectId` that you include with every payload to keep knowledge organized.
- Confirm consent requirements with your team. Each data submission should respect the `ConsentFlag` field when applicable.
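
A minimal sketch of the header setup, assuming the key is exposed through an environment variable named `RECALLIO_API_KEY` (the variable name is illustrative, not part of the API):

```ts
// Minimal sketch: build the Authorization header from an API key.
// RECALLIO_API_KEY is an assumed variable name for illustration;
// store the key however your deployment prefers.
const apiKey = process.env.RECALLIO_API_KEY ?? "";

const authHeaders = {
  // The value generated under API Keys is sent as a bearer token.
  Authorization: `Bearer ${apiKey}`,
};
```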
## Live data vs. documents
| Use case | Endpoint | Fields |
|---|---|---|
| Structured operational data that changes frequently | `POST https://app.recallio.ai/api/Knowledge/livedata` | `type`, `rawJson`; optional `userId`; recommended `projectId` |
| Static or semi-structured reference material (PDF, policy guides, manuals) | `POST https://app.recallio.ai/api/Knowledge/document` | `File`, `ProjectId`; optional `Tags`, `SourceUrl`, `ConsentFlag` |
| Retrieve processed documents and metadata | `GET https://app.recallio.ai/api/Knowledge/document` | Optional query parameters to filter by `ProjectId` and control paging |
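
To make the document rows concrete, here is a hedged sketch of an upload followed by a retrieval call. The multipart field names (`File`, `ProjectId`, `Tags`, `ConsentFlag`) come from the table above; the file path, tag values, and the query parameter names used for filtering and paging are illustrative assumptions:

```ts
import { readFile } from "node:fs/promises";

const DOCUMENT_ENDPOINT = "https://app.recallio.ai/api/Knowledge/document";

// Upload a local PDF as multipart form data. Field names follow the
// table above; the file path and tag values are placeholders.
async function uploadDocument(apiKey: string, projectId: string) {
  const bytes = await readFile("./policy-guide.pdf");
  const form = new FormData();
  form.append("File", new Blob([bytes], { type: "application/pdf" }), "policy-guide.pdf");
  form.append("ProjectId", projectId);
  form.append("Tags", "policy,manual"); // optional
  form.append("ConsentFlag", "true");   // optional, when applicable

  const res = await fetch(DOCUMENT_ENDPOINT, {
    method: "POST",
    // Only auth is set here; fetch adds the multipart boundary itself.
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  return res.json();
}

// List processed documents. The query parameter names below (projectId,
// page, pageSize) are guesses, not confirmed parameter names.
async function listDocuments(apiKey: string, projectId: string) {
  const url = new URL(DOCUMENT_ENDPOINT);
  url.searchParams.set("projectId", projectId);
  url.searchParams.set("page", "1");
  url.searchParams.set("pageSize", "20");
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  return res.json();
}
```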
- Set the `type` field to a domain-specific label (`"listing"`, `"resident"`, `"maintenance-ticket"`, etc.) so Recallio can group similar payloads.
- Always include the `projectId` whenever the content belongs to a specific environment; omit it only for global knowledge. The sketch after this list shows both fields in a full payload.
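
A minimal sketch of that payload; the `"resident"` label, record contents, and identifiers are illustrative:

```ts
// Sketch: push one structured record to the live-data endpoint.
async function pushLiveData(apiKey: string, projectId: string) {
  const payload = {
    type: "resident", // domain-specific label so Recallio groups similar payloads
    rawJson: JSON.stringify({ id: "res-42", room: "12B", status: "active" }),
    userId: "staff-7", // optional
    projectId,         // recommended: scopes the record to one environment
  };

  const res = await fetch("https://app.recallio.ai/api/Knowledge/livedata", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

The field name `rawJson` suggests a stringified object; if the endpoint accepts a nested object instead, drop the inner `JSON.stringify`.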
## Where to go next
- Knowledge Quickstart: an end-to-end walkthrough to generate an API key, create a project, and make your first `/api/Knowledge/livedata` request.
- Implementation Deep Dive: payload schemas, validation tips, and how to automate document uploads via API or the dashboard.

