llms.txt
Manifest entrypoint for assistants that need a lightweight inventory of the published documentation surface.
https://vectisconsilium.com/llms.txt
Provide assistants, retrieval systems, and internal copilots with structured documentation that mirrors the actual investigation product model.
AI-access posture
Give internal assistants, retrieval pipelines, and training tools stable documents that match the actual product model.
Manifest entrypoint: /llms.txt
Structured JSON: /api/docs
Seeded knowledge lanes: 11 topics
Human + model readable: Markdown
Machine-readable docs
AI-accessible documentation is only useful when it mirrors the real entities, investigations, evidence, and report objects present in the platform.
text
Manifest entrypoint for assistants that need a lightweight inventory of the published documentation surface.
https://vectisconsilium.com/llms.txt

markdown
Full Markdown corpus for agents or internal tools that need broader context before answering workflow questions.
https://vectisconsilium.com/llms-full.txt

json
Structured documentation endpoint for programmatic retrieval, metadata filtering, and downstream automation.
https://vectisconsilium.com/api/docs

json
Topic-specific documents for focused retrieval, targeted grounding, and narrow workflow assistance.
https://vectisconsilium.com/api/docs/{topic}

Object model
Topic documentation should describe the same statuses, fields, and actions the operator sees in the product.
Retrieval context
Topic docs work best when they cover both the core objects and the derived analytics layered on top of them.
Usage flow
Start from the manifest, then decide whether the workflow needs the full corpus or only the specific topic payloads.
Point your assistant to https://vectisconsilium.com/llms.txt when it needs a lightweight document inventory.
Use https://vectisconsilium.com/llms-full.txt when broader product context is required for the task.
Call https://vectisconsilium.com/api/docs/{topic} when the workflow only needs a narrow, grounded topic payload.
Refresh the source set on a schedule so assistants stay aligned with the latest published guidance.
Example
Use the manifest for discovery, then pull the full corpus or topic endpoints based on the workflow.
# Quick context
curl https://vectisconsilium.com/llms.txt
# Full corpus for model grounding
curl https://vectisconsilium.com/llms-full.txt
# Structured topic docs
curl https://vectisconsilium.com/api/docs/investigations | jq .
# Markdown export for a single topic
curl https://vectisconsilium.com/api/docs/investigations.md

Knowledge lanes
Start with the topic that matches the question. Pull only what the assistant needs when you are grounding a narrow retrieval workflow.
RAG support
Chunk boundaries are already arranged for downstream embedding pipelines.
Filter by topic, format, and document lineage before content is retrieved.
Use a retrieval endpoint when you need fresh topic content without copying the full corpus.
Keep assistants aligned with the latest published guidance instead of static training snapshots.
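Pre-retrieval filtering by topic, format, and lineage can be sketched as below. The sample records and their "topic" / "format" / "lineage" fields are illustrative assumptions about what the /api/docs metadata might expose, not its confirmed schema.

```python
# Sample document metadata records; field names and values are assumptions.
docs = [
    {"topic": "investigations", "format": "markdown", "lineage": "published"},
    {"topic": "evidence", "format": "markdown", "lineage": "draft"},
    {"topic": "investigations", "format": "json", "lineage": "published"},
]

def filter_docs(docs, topic=None, fmt=None, lineage=None):
    """Keep only documents matching every provided metadata constraint."""
    def keep(d):
        return ((topic is None or d["topic"] == topic)
                and (fmt is None or d["format"] == fmt)
                and (lineage is None or d["lineage"] == lineage))
    return [d for d in docs if keep(d)]

# Narrow the corpus before any embedding or retrieval step runs.
published_investigations = filter_docs(docs, topic="investigations", lineage="published")
print(len(published_investigations))
```

Filtering before retrieval keeps the embedding and ranking stages working on a small, relevant slice rather than the whole corpus.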
Discuss internal copilots, retrieval workflows, or enterprise AI integration requirements with the team.