Deployment Guide

Evaluate where the platform runs, how the data plane is bounded, and which operating controls stay inside your environment.

Deployment lanes

Keep the operator surface consistent while you decide where compute, storage, and identity boundaries belong.

  • Supported deployment lanes: 4 models
  • Core data plane: Postgres + Neo4j
  • Object storage compatibility: R2 / S3
  • Identity integration posture: SSO-ready

Architecture brief

Choose the boundary that matches your residency, accreditation, and operating model.

Deployment planning should clarify where the data sits, who operates the environment, and how observability, storage, and identity controls are partitioned before implementation starts.

Deployment models

Cloud (SaaS)

Use the managed lane when rapid rollout, centralized updates, and shared cloud operations matter more than owning the full infrastructure stack.

On-Premise

Keep the full stack inside your controlled network when data residency, internal accreditation, or disconnected operations require it.

Hybrid

Split storage, identity, or AI processing across environments when the operational boundary is not the same as the compute boundary.

Government Cloud

Use a government-approved cloud lane when procurement or sovereign hosting requirements call for a controlled public-cloud footprint.

Operator surface

Investigation workspace

The analyst-facing workflow stays consistent while the underlying residency and hosting model changes.

System Architecture

Core stack

The platform keeps the operator workflow stable while the hosting, storage, and identity controls are mapped to the target environment.

  • Web Frontend: Next.js / Cloudflare Workers
  • API Gateway: REST, GraphQL, and event services
  • AI Services: model routing and governed processing
  • Object Storage: S3-compatible storage with evidence retention controls
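As an illustration, evidence retention on S3-compatible storage is often enforced with object-lock parameters at write time. A minimal sketch, assuming a bucket with Object Lock enabled; the bucket name, key, and retention window are hypothetical, and `evidence_put_params` is an illustrative helper, not part of the platform:

```python
from datetime import datetime, timedelta, timezone

def evidence_put_params(bucket: str, key: str, retention_days: int) -> dict:
    """Compose keyword arguments for an S3-compatible put_object call that
    locks an evidence object against deletion for a retention window.
    COMPLIANCE mode blocks deletion by any user until the date passes."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Hypothetical usage with an S3 client:
#   client.put_object(Body=blob, **evidence_put_params(...))
params = evidence_put_params("evidence-archive", "case-42/doc-001.pdf", 365)
```

Building the parameters separately keeps the retention policy testable without network access; the same dict works against R2 or S3 endpoints that support Object Lock.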

Capacity planning

System Requirements

Infrastructure sizing should reflect operator concurrency, ingestion load, evidence retention, and the models you expect to run in the environment.

Baseline

Minimum (Small Deployments)

Use this lane for pilots, low-volume teams, or controlled proof-of-value environments.

  • 8 CPU cores
  • 32 GB RAM
  • 500 GB SSD storage
  • 1 Gbps network

Operational target

Recommended (Enterprise)

Plan for this lane when multiple teams, heavier ingestion, or sustained analytics workloads are expected.

  • 32 CPU cores
  • 128 GB RAM
  • 2 TB NVMe storage
  • 10 Gbps network

Planning notes

What usually changes the size

The main drivers are document volume, evidence retention, graph depth, alerting frequency, and how much AI processing stays inside the boundary.
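Those drivers can be folded into a rough sizing heuristic. A minimal sketch, with the thresholds and weights being illustrative assumptions rather than vendor guidance:

```python
def recommend_tier(operators: int, docs_per_day: int, retention_years: int,
                   ai_in_boundary: bool) -> str:
    """Score the main sizing drivers and map the total to a hardware lane.
    Thresholds and weights are illustrative, not vendor guidance."""
    score = 0
    score += 2 if operators > 10 else 0        # operator concurrency
    score += 2 if docs_per_day > 5_000 else 0  # ingestion load
    score += 1 if retention_years > 3 else 0   # evidence retention
    score += 2 if ai_in_boundary else 0        # in-boundary AI processing
    if score >= 3:
        return "Recommended (Enterprise)"
    return "Minimum (Small Deployments)"

# A small pilot lands in the minimum lane; a multi-team deployment
# with in-boundary AI processing lands in the enterprise lane.
pilot = recommend_tier(operators=4, docs_per_day=500,
                       retention_years=1, ai_in_boundary=False)
```

In practice the heuristic matters less than the exercise: agreeing on the drivers and their weights before procurement keeps the sizing conversation concrete.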

Telemetry

Operational analytics

Deployment planning should include the telemetry needed to watch usage, queue depth, and operational load after go-live.
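As a sketch of the kind of telemetry worth wiring in before go-live, a minimal in-process gauge for queue depth; the metric name and alert threshold are hypothetical, and a real deployment would export these values to its monitoring stack:

```python
import threading
from collections import defaultdict

class Gauge:
    """Thread-safe gauge for operational metrics such as queue depth."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._values: dict = defaultdict(float)

    def set(self, name: str, value: float) -> None:
        with self._lock:
            self._values[name] = value

    def get(self, name: str) -> float:
        with self._lock:
            return self._values[name]

metrics = Gauge()
metrics.set("ingest_queue_depth", 128)  # updated by the ingestion worker
ALERT_THRESHOLD = 1_000                 # hypothetical paging threshold
overloaded = metrics.get("ingest_queue_depth") > ALERT_THRESHOLD
```

Tracking queue depth alongside usage and load metrics gives operators an early signal that ingestion is outpacing processing, before it surfaces as analyst-visible latency.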

Need an architecture review?

Walk through residency, storage, identity, and observability assumptions with the team before deployment design is locked.