AI-ready

AI documentation access

Let AI assistants such as ChatGPT, Claude, or your preferred LLM retrieve our documentation to help you with Public Safety tasks.

AI-access posture

Give internal assistants, retrieval pipelines, and training tools stable documents that match the actual product model.

Manifest entrypoint

/llms.txt

Structured JSON

/api/docs

Seeded knowledge lanes

11 topics

Human + model readable

Markdown

Machine-readable docs

Expose the same objects analysts see so assistants can answer workflow questions without hallucinating the product model

AI-accessible documentation is only useful when it mirrors the real entities, investigations, evidence, and report objects present in the platform.

text

llms.txt

Quick-reference guide for AI assistants with links to detailed documentation.

https://vectisconsilium.com/llms.txt

markdown

llms-full.txt

Complete documentation in Markdown format for full context

https://vectisconsilium.com/llms-full.txt

json

JSON API

Structured documentation with metadata for programmatic access

https://vectisconsilium.com/api/docs

json

Topic API

Individual topics with detailed sections and examples

https://vectisconsilium.com/api/docs/{topic}
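As a sketch of programmatic discovery against the endpoints above: the llms.txt manifest is Markdown built around link lists, so a client can extract documentation URLs from it before calling the topic endpoints. The link-line shape (`- [Title](url): description`) follows the common llms.txt convention and is an assumption here, not a guaranteed contract.

```python
import re

# Base URL from the endpoint table above.
BASE = "https://vectisconsilium.com"

def topic_url(topic: str) -> str:
    """Build the per-topic JSON endpoint (assumes the {topic} pattern above)."""
    return f"{BASE}/api/docs/{topic}"

# Manifest link lines are assumed to look like "- [Title](url): description",
# per the common llms.txt convention; the exact format may vary.
LINK = re.compile(r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?")

def parse_manifest(text: str) -> list[dict]:
    """Extract title/url/description entries from an llms.txt manifest."""
    entries = []
    for line in text.splitlines():
        m = LINK.match(line.strip())
        if m:
            entries.append(m.groupdict())
    return entries
```

A retrieval pipeline can run `parse_manifest` over the fetched `/llms.txt` body, then call `topic_url(...)` only for the topics a given workflow needs.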

Object model

Investigation overview

Topic documentation should describe the same statuses, fields, and actions the operator sees in the product.

Retrieval context

Analytics and derived outputs

Topic docs work best when they cover both the core objects and the derived analytics layered on top of them.

Usage flow

How to Use with Your AI

You can use these endpoints to ground your AI assistant in Public Safety workflows:

  1. Start with the manifest

    Point your AI to https://vectisconsilium.com/llms.txt for quick context

  2. Pull the full corpus

    For full documentation, use https://vectisconsilium.com/llms-full.txt

  3. Call topic endpoints for targeted retrieval

    For specific topics, use https://vectisconsilium.com/api/docs/{topic}

  4. Refresh the source set on a schedule

    Your AI can fetch updates automatically to stay current

Example

Fetch the docs from a model toolchain

Use the manifest for discovery, then pull the full corpus or topic endpoints based on the workflow.

# Quick context
curl https://vectisconsilium.com/llms.txt

# Full corpus for model grounding
curl https://vectisconsilium.com/llms-full.txt

# Structured topic docs
curl https://vectisconsilium.com/api/docs/investigations | jq .

# Markdown export for a single topic
curl https://vectisconsilium.com/api/docs/investigations.md

Knowledge lanes

Available topics

Start with the topic that matches the question. Pull only what the assistant needs when you are grounding a narrow retrieval workflow.

RAG support

Pre-chunked content for embedding

Chunk boundaries are already arranged for downstream embedding pipelines.
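A minimal sketch of feeding pre-chunked sections into an embedding pipeline. The `sections` list and its `heading`/`content` keys are assumptions about the topic JSON shape, and `embed` stands in for whatever embedding call your stack provides:

```python
from typing import Callable, Iterator

def chunks_for_embedding(
    topic_doc: dict,
    embed: Callable[[str], list[float]],
) -> Iterator[tuple[list[float], dict]]:
    """Yield (vector, metadata) pairs for each pre-chunked section.

    Assumes the topic JSON carries a `sections` list with `heading` and
    `content` keys; adjust to the actual response shape.
    """
    for section in topic_doc.get("sections", []):
        metadata = {
            "topic": topic_doc.get("topic"),
            "heading": section.get("heading"),
        }
        yield embed(section["content"]), metadata
```

Because the chunk boundaries arrive pre-arranged, the pipeline only attaches metadata and embeds; no re-splitting heuristics are needed on the client side.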

RAG support

Rich metadata for filtering

Filter by topic, format, and document lineage before content is retrieved.

RAG support

Search endpoint for retrieval

Use a retrieval endpoint when you need fresh topic content without copying the full corpus.

RAG support

Version tracking for freshness

Keep assistants aligned with the latest published guidance instead of static training snapshots.

Prefer to explore the traditional way?

Browse our full documentation directly, or contact our support team.