Obsidian Grid

Private, GPU-Accelerated AI Infrastructure
for Real-World Systems.

Rootstar builds local-first AI systems designed for continuous operation, low-latency inference, and full data control — powered by NVIDIA-class hardware.

From persistent agent runtimes to voice-driven environments, our systems are built to operate where reliability, privacy, and control matter most.

GPU-accelerated inference. Local deployment. Full operator control.
Designed to generate sustained, continuous GPU workloads across real-world environments.
  • NVIDIA GPU hardware
  • Continuous inference workloads
  • Real-world deployment
01 — Platform

Inside Obsidian Grid

Four modules running on local GPU hardware. Velithra Core handles inference; OpenClaw runs orchestration; Aegis provides system control. Nyx Interface is an optional control surface — the pipeline operates without it.

Velithra Core
Live
Inference + Secure Routing
What it does
GPU-accelerated local LLM inference with continuous batching, API bridge, persona enforcement, and secure request routing. Runs on NVIDIA hardware — no cloud calls for core inference.
Input
Prompt requests via local API endpoint or client application.
Output
Model completions routed through FastAPI bridge with enforced system persona.
Deployment
Local. Runs on DGX Spark and RTX workstations. NVIDIA GPU hardware required for primary inference workloads.
  • vLLM inference with continuous batching
  • FastAPI bridge + Nginx gateway
  • NAS-backed model storage
  • Multi-client session isolation
  • Sustained multi-client GPU inference workloads
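The bridge pattern above can be sketched as a small routing layer. This is a minimal illustration assuming a local vLLM server exposing its OpenAI-compatible API on `localhost:8000`; the `SYSTEM_PERSONA` string, model name, and endpoint path are placeholders, not the actual Velithra implementation.

```python
import json
import urllib.request

# Assumed local vLLM endpoint (OpenAI-compatible API); no cloud calls.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
SYSTEM_PERSONA = "You are Velithra, a local operator assistant."  # illustrative

def build_payload(prompt: str, model: str = "local-model") -> dict:
    """Prepend the enforced system persona ahead of every client prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PERSONA},
            {"role": "user", "content": prompt},
        ],
    }

def route_completion(prompt: str) -> dict:
    """Forward one request to the local inference server."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the persona is injected server-side in the bridge, client applications cannot override it per request.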
OpenClaw Engine
In Progress
Automation + Orchestration
What it does
Persistent agent runtime that connects inference outputs to automated workflows, multi-step task execution, and API integrations. Maintains state across long-running operations.
Input
Structured outputs from Velithra, webhooks, scheduled triggers.
Output
Executed actions: API calls, script runs, data transforms, notifications.
Deployment
Local with optional external API connectivity for integrations.
  • Multi-step task orchestration
  • REST + webhook integrations
  • Policy-guarded execution
  • Agent-driven workflow chains
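Policy-guarded execution can be illustrated as an allowlist gate in front of each workflow step. This is a hypothetical sketch; the action names, log format, and allowlist policy are assumptions, not the actual OpenClaw engine.

```python
# Illustrative action allowlist; a real policy layer would be configurable.
ALLOWED_ACTIONS = {"notify", "transform", "api_call"}

def run_chain(steps, log):
    """Execute a multi-step chain, blocking any action not in policy.

    steps: list of (action, payload) tuples. Every decision is appended
    to log, so the chain leaves an auditable trail: trigger -> act -> log.
    """
    results = []
    for action, payload in steps:
        if action not in ALLOWED_ACTIONS:
            log.append(("blocked", action))
            continue
        log.append(("executed", action))
        results.append({"action": action, "payload": payload})
    return results
```

The design choice here is that a blocked step is logged and skipped rather than aborting the chain, so downstream steps still run and the operator sees the full decision trail.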
Aegis Layer
Planned
Monitoring + Control
What it does
System-wide monitoring with GPU utilization tracking, inference throughput metrics, and visibility into sustained workload performance across deployments. All telemetry stays local.
Input
System metrics, inference logs, workflow results from Velithra and OpenClaw.
Output
Dashboards, health alerts, automated reports, business intelligence summaries.
Deployment
Local. Reads from local telemetry; no external data transmission required.
  • GPU + inference health monitoring
  • Automated alerting + reporting
  • Anomaly detection
  • Decision support analytics
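The anomaly-detection idea can be sketched as a rolling z-score over local throughput samples. A minimal stdlib-only illustration, assuming per-interval throughput readings; window size and threshold are placeholder values, not Aegis defaults.

```python
from statistics import mean, stdev

def throughput_anomalies(samples, window=8, z_threshold=3.0):
    """Flag indices whose value deviates more than z_threshold standard
    deviations from the trailing window. Runs entirely on local telemetry;
    nothing leaves the machine."""
    flagged = []
    for i in range(window, len(samples)):
        trailing = samples[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(samples[i] - mu) > z_threshold * sigma:
            flagged.append(i)
    return flagged
```

A sudden throughput collapse (for example, steady ~100 requests/interval dropping to near zero) is flagged immediately, which is the kind of signal that would feed the alerting layer.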
Nyx Interface
Optional
Avatar + Voice Control Surface
What it does
Provides a persona-driven conversational interface — avatar, voice, and text — over the Obsidian Grid system.
Input
Voice commands, text input, UI interactions from the operator.
Output
Conversational responses, system commands routed to OpenClaw, status queries to Aegis.
Deployment
Client-side. Connects to Velithra bridge. Swappable — the system runs with or without it.
  • Persona-enforced avatar assistant
  • Voice interface (planned)
  • Dashboard control surface
  • Fully optional — infrastructure runs independently
02 — Architecture

System flow

Linear pipeline from hardware to operator output. Nyx Interface sits alongside — not in the critical path.

Obsidian Grid — Logical Flow
03 — Positioning

What this is

This is
  • GPU-accelerated local inference infrastructure
  • Persistent agent runtime with memory and orchestration
  • Real-world deployment on NVIDIA hardware
  • Systems built for continuous, always-on operation
This is not
  • A generic chatbot or API wrapper
  • A cloud-only or batch-processing system
  • A one-off consulting engagement
  • A demo without deployment path
04 — What We Deliver

What Rootstar Delivers

Deployable systems, not concepts. Each component is production-ready or actively in development, with measurable completion criteria.

01
Local AI Inference Stack

GPU-accelerated inference running on NVIDIA hardware. vLLM with continuous batching, FastAPI bridge, and NAS-backed model storage. No cloud dependency for core workloads.

02
Persistent Agent Runtime

Agent execution layer with memory, task orchestration, and environment awareness. Designed for always-on operation — not single-turn request handling.

03
Deployment Templates for Real-World Environments

Validated deployment patterns for home, edge, and hub configurations. Hardware-specific tuning included. Tested against Luma CareOS and simulation workloads.

04
Monitoring and Control Layer

System-level visibility across GPU utilization, inference throughput, and workflow state. Automated alerting, anomaly detection, and lifecycle management.

05
Early Access to Live System Builds

Qualified deployment partners and pilot users gain access to working system builds before general availability. Limited onboarding now open.

05 — Roadmap

Next 90 days

Focused on deploying real-world, continuous-use AI systems across controlled environments. Four milestones — each with a measurable completion criterion.

Now
10GbE backbone + multi-domain migration
Complete physical network upgrade and migrate all Rootstar web properties to consolidated infrastructure.
Criterion: All five domains serving from the new stack with sub-100ms internal latency.
Now
Velithra multi-client serving
Extend the Velithra bridge to handle concurrent client connections with per-client context isolation.
Criterion: Three or more simultaneous clients served with independent sessions, no cross-contamination.
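Per-client context isolation can be sketched as a session store keyed by client ID. This is an illustrative data structure, not the actual Velithra bridge code; the session-ID scheme is an assumption.

```python
class SessionStore:
    """Keeps each client's conversation context under its own key, so
    concurrent clients never see each other's history."""

    def __init__(self):
        self._sessions = {}

    def append(self, session_id, role, content):
        """Record one turn under exactly one client's context."""
        self._sessions.setdefault(session_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, session_id):
        """Return a copy of one client's context; empty if unseen."""
        return list(self._sessions.get(session_id, []))
```

Returning a copy from `history` keeps callers from mutating another request's context, which is one way the no-cross-contamination criterion could be checked in tests.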
Next
OpenClaw orchestration engine v1
First working version of the automation layer — triggering actions from inference outputs with policy guards.
Criterion: One end-to-end automated workflow running in production: trigger → infer → act → log.
Later
Aegis monitoring dashboard v1
Initial telemetry collection from Velithra and system health visualization.
Criterion: Live dashboard displaying GPU utilization, inference throughput, and error rates.
06 — Secondary Product

Velithra Desktop

Local Operator Console for Obsidian Grid

Velithra Desktop is the control surface for private AI infrastructure. It gives operators a single interface to monitor nodes, route tasks, manage memory workflows, and enforce safety policies across a local-first deployment.

01
Node Health Dashboard

Live status for inference nodes, memory services, and automation workers.

02
Task Routing Console

Launch and route predefined workflows across available compute lanes.

03
Memory + Context Panel

Search and inspect persistent context, recent actions, and retrieval traces.

04
Operations Log Stream

Unified event and error stream with retry/escalation controls.

05
Safety Controls

Policy modes, execution guardrails, and emergency stop controls.

07 — Applied Programs

Proof of Capability

These programs demonstrate how our infrastructure performs in live, multi-variable environments across simulation, decision support, and operator workflows.

BBM Front Office Manager — prototype decision-system interface.
Prototype
BBM Front Office Manager (FOM)

Applied decision-system prototype for sports operations, including trait engines, budget mechanics, briefing workflows, and event-driven management logic. Designed to test orchestration reliability and human-in-the-loop control under dynamic conditions.

Stellar Veil — narrative simulation environment (concept frame).
Prototype
Stellar Veil

Narrative simulation environment used to test autonomous agent interaction, world-state orchestration, and control-surface behavior in high-context scenarios.

08 — Flagship Deployment

Luma CareOS — A Real-World Deployment

Built on persistent local inference systems, Luma operates as a continuous AI workload rather than a request-response tool.

Luma CareOS is a continuous, voice-driven AI system built on Rootstar infrastructure, designed for real-world behavioral guidance and caregiver support. It operates as a persistent companion layer, delivering daily guidance, reminders, and interaction through local AI systems.

Designed to feel calm, supportive, and non-intrusive in daily use.

  • Persistent local AI runtime
  • Voice-driven interaction layer
  • Human override and escalation controls
  • Privacy-first deployment options
Daily Check-ins

Scheduled, low-friction check-in prompts with status logging. Flags missed responses for caregiver review without surveillance overhead.

Medication & Appointment Reminders

Configurable reminder schedules for medications, appointments, and recurring tasks. Delivered locally — no cloud dependency required.

Family Alert Routing

Routes status updates and exception alerts to designated family contacts. Configurable escalation paths with optional acknowledgment tracking.

Memory Journal & Context Recall

Persistent local journal for personal notes, preferences, and daily context. Enables AI-assisted recall without sending private data to external services.

Pilot Model

Initial pilot model is family-first and caregiver-assisted.
09 — Luma Deployment Ladder

Luma Deployment Ladder

Three deployment tiers, sequenced for access, reliability, and household fit. Each tier is production-capable — not a prerequisite for the next.

Tier 01
Now
Luma Hybrid

Voice channels, caregiver portal, and centralized AI services for rapid household adoption. No local hardware required. Operational from day one.

  • Cloud-assisted AI inference
  • Caregiver coordination portal
  • Voice channel integration
  • Software-first onboarding
Tier 02
Next
Luma Edge Home

Optional local node for privacy-first households and resilient home operations. Runs core Luma workflows on-premise — no cloud dependency for primary functions.

  • Local inference node
  • Offline-capable primary workflows
  • Privacy-first data handling
  • Obsidian Grid edge deployment
Tier 03
Scale
Luma Spark Hubs

Regional GPU hubs for high-concurrency care operations, richer models, and broader household coverage. NVIDIA GB-class hardware at the infrastructure layer.

Designed for high-density, continuous inference workloads across multiple households.

  • NVIDIA GB-class GPU infrastructure
  • High-concurrency multi-household serving
  • Richer contextual model capabilities
  • Regional redundancy and coverage
Hardware strategy is phased to maximize access, reliability, and affordability.
10 — Team

Who's building this

Tim Hogan
Founder & CEO

Systems architecture, local AI infrastructure, and automation orchestration. Builds the technical platform powering Luma CareOS through Obsidian Grid.

Shirley King
Co-Founder, Care Operations

Lifelong elder-care professional with a nursing assistant and phlebotomy background. Leads care workflow design for Luma CareOS, including daily check-ins, medication continuity, family coordination, and dignity-first support practices grounded in real-world caregiving.

Rootstar combines care-operations expertise and AI infrastructure engineering to deliver practical, human-centered systems.

Apply for Early Access

We are onboarding a limited number of early deployments and pilot users. Two paths are open now.

Pilot households start software-first; edge and Spark-enabled tiers follow deployment fit.