AI-Accessible Documentation

Provide assistants, retrieval systems, and internal copilots with structured documentation that mirrors the actual investigation product model.

Manifest entrypoint: /llms.txt

Structured JSON: /api/docs

Seeded knowledge lanes: 11 topics

Human + model readable: Markdown

Machine-readable docs

Expose the same objects analysts see so assistants can answer workflow questions without hallucinating the product model.

AI-accessible documentation is only useful when it mirrors the entity, investigation, evidence, and report objects that actually exist in the platform.

llms.txt (text)

Manifest entrypoint for assistants that need a lightweight inventory of the published documentation surface.

https://vectisconsilium.com/llms.txt

llms-full.txt (Markdown)

Full Markdown corpus for agents or internal tools that need broader context before answering workflow questions.

https://vectisconsilium.com/llms-full.txt

JSON API (JSON)

Structured documentation endpoint for programmatic retrieval, metadata filtering, and downstream automation.

https://vectisconsilium.com/api/docs

Topic API (JSON)

Topic-specific documents for focused retrieval, targeted grounding, and narrow workflow assistance.

https://vectisconsilium.com/api/docs/{topic}
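As a minimal sketch of how a toolchain might address these endpoints, the helper below builds topic URLs from the documented path shape (the JSON endpoint plus the `.md` Markdown export shown later on this page). The helper name and topic slug are illustrative, not part of the published API.

```python
# Sketch: construct topic-endpoint URLs from the documented path shape.
# topic_doc_url is a hypothetical helper; "investigations" is an example slug.
BASE = "https://vectisconsilium.com"

def topic_doc_url(topic: str, fmt: str = "json") -> str:
    """Return the docs URL for a topic, as JSON or as a Markdown export."""
    path = f"/api/docs/{topic}"
    return BASE + (path + ".md" if fmt == "md" else path)

print(topic_doc_url("investigations"))
# https://vectisconsilium.com/api/docs/investigations
```

The same helper covers both retrieval styles: pass `fmt="md"` to get the Markdown export for a single topic.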

Object model

Investigation overview

Topic documentation should describe the same statuses, fields, and actions the operator sees in the product.

Retrieval context

Analytics and derived outputs

Topic docs work best when they cover both the core objects and the derived analytics layered on top of them.
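To make the point concrete, here is a hypothetical sketch of a topic document that pairs the core object description (statuses, fields, actions the operator sees) with the derived analytics layered on top. Every field name and value below is an illustrative assumption, not the platform's real schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names and example values are assumptions,
# not the real product model.
@dataclass
class TopicDoc:
    topic: str
    statuses: list            # statuses the operator sees in the product
    fields: list              # fields exposed on the object
    actions: list             # actions available to the operator
    derived_analytics: list = field(default_factory=list)

doc = TopicDoc(
    topic="investigations",
    statuses=["open", "in-review", "closed"],
    fields=["owner", "evidence"],
    actions=["assign", "close"],
    derived_analytics=["timeline"],      # analytics layered on core objects
)
```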

Usage flow

How to Use with Your AI

Start from the manifest, then decide whether the workflow needs the full corpus or only the specific topic payloads.

  1. Start with the manifest

     Point your assistant to https://vectisconsilium.com/llms.txt when it needs a lightweight document inventory.

  2. Pull the full corpus

     Use https://vectisconsilium.com/llms-full.txt when broader product context is required for the task.

  3. Call topic endpoints for targeted retrieval

     Call https://vectisconsilium.com/api/docs/{topic} when the workflow only needs a narrow, grounded topic payload.

  4. Refresh the source set on a schedule

     Re-fetch the manifest and corpus periodically so assistants stay aligned with the latest published guidance.
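The routing decision in steps 1 to 3 can be sketched as a small helper that maps the scope of a workflow to the right documentation source. The URLs come from this page; the function name and scope labels are illustrative assumptions.

```python
# Sketch of the usage flow above: choose a documentation source by how
# much context the workflow needs. doc_source and the scope labels are
# hypothetical; the URLs are the ones published on this page.
BASE = "https://vectisconsilium.com"

def doc_source(scope, topic=None):
    if scope == "inventory":        # step 1: lightweight document inventory
        return f"{BASE}/llms.txt"
    if scope == "full":             # step 2: broad product context
        return f"{BASE}/llms-full.txt"
    if scope == "topic" and topic:  # step 3: narrow, grounded topic payload
        return f"{BASE}/api/docs/{topic}"
    raise ValueError("unknown scope")

print(doc_source("topic", topic="investigations"))
# https://vectisconsilium.com/api/docs/investigations
```

Step 4 is then a matter of re-running whichever calls the workflow relies on, on a schedule.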

Example

Fetch the docs from a model toolchain

Use the manifest for discovery, then pull the full corpus or topic endpoints based on the workflow.

# Quick context
curl https://vectisconsilium.com/llms.txt

# Full corpus for model grounding
curl https://vectisconsilium.com/llms-full.txt

# Structured topic docs
curl https://vectisconsilium.com/api/docs/investigations | jq .

# Markdown export for a single topic
curl https://vectisconsilium.com/api/docs/investigations.md

Knowledge lanes

Available Topics

Start with the topic that matches the question. Pull only what the assistant needs when you are grounding a narrow retrieval workflow.

RAG support

Pre-chunked content for embedding: chunk boundaries are already arranged for downstream embedding pipelines.

Rich metadata for filtering: filter by topic, format, and document lineage before content is retrieved.

Search endpoint for retrieval: use a retrieval endpoint when you need fresh topic content without copying the full corpus.

Version tracking for freshness: keep assistants aligned with the latest published guidance instead of static training snapshots.
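As a minimal sketch of the metadata-filtering idea, the snippet below narrows a set of pre-chunked documents by topic and format before they reach an embedding or ranking step. The chunk dictionaries and their keys are illustrative assumptions about what the metadata might look like.

```python
# Sketch: filter pre-chunked docs by metadata before retrieval/embedding.
# The chunk records and key names ("topic", "format") are assumptions.
chunks = [
    {"topic": "investigations", "format": "markdown", "text": "..."},
    {"topic": "reports", "format": "json", "text": "..."},
]

def filter_chunks(chunks, topic=None, fmt=None):
    """Keep only chunks matching the requested topic and/or format."""
    return [
        c for c in chunks
        if (topic is None or c["topic"] == topic)
        and (fmt is None or c["format"] == fmt)
    ]

print(len(filter_chunks(chunks, topic="investigations")))
# 1
```

Filtering on metadata first keeps the retrieval set small, so only relevant chunks are embedded or ranked.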

Need Custom Integration?

Discuss internal copilots, retrieval workflows, or enterprise AI integration requirements with the team.