Chevin

Personal AI infrastructure that learns who you are and acts on your behalf.

Chevin is a daemon that runs on your machine (or your boat, or your server) and serves as your persistent AI assistant. It builds a model of your values, goals, and working style, then coordinates AI agents to help you achieve what matters.

"The daemon isn't a client. It's the protocol."

What Makes Chevin Different

Most AI assistants are stateless. You chat, they respond, the context disappears. Every conversation starts from zero.

Chevin accumulates understanding. Through an onboarding process and ongoing interactions, it builds a world model of you - your cognitive tendencies, your values, your goals. This model persists and informs every future interaction.

Most AI assistants are centralized. Your data lives on someone else's servers. Your access depends on their pricing, policies, and continued existence.

Chevin runs locally and will run decentralized. Today it connects to Claude, Gemini, or OpenAI. Tomorrow it connects to Autonet - a decentralized network where inference and training happen on nodes you control or trust.

The Autonet Ecosystem

Chevin is the application layer of a larger stack:

Layer        Component      What It Does
Application  Chevin         Personal AI assistant with persistent world model
Interface    Sidekick       Flutter web app for interacting with Chevin
Network      Autonet        Decentralized AI training, inference, and governance
Physical     Autonet Boats  Autonomous vessels, each running a Chevin node

Every autonomous boat on autonet.boats runs a Chevin daemon. The boat's AI isn't just navigation - it's a full agent that can manage bookings, communicate with passengers, coordinate with shore operations, and participate in the Autonet network.

Core Features

World Model

Through conversational onboarding, Chevin extracts your:

  • Standards - Core values and principles that guide your decisions
  • Goals - What you're working toward, across different timeframes
  • Projects - Concrete initiatives that move you toward your goals
  • Deviations - How you differ from baseline human assumptions (epistemic, motivational, axiological patterns)

This isn't a user profile. It's a psychological model that helps Chevin understand why you make the choices you do.
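
The sketch below shows one way these four categories could be represented in code. It is purely illustrative - the daemon's actual schema and persistence format are not documented in this README.

# Illustrative only: a hypothetical shape for the world model.
# Class and field names are assumptions, not Chevin's actual schema.
from dataclasses import dataclass, field

@dataclass
class Standard:
    name: str          # e.g. a core value such as "intellectual honesty"
    description: str   # why this principle guides your decisions

@dataclass
class Goal:
    description: str
    timeframe: str     # e.g. "this quarter", "next five years"

@dataclass
class Project:
    name: str
    goal: str          # which goal this initiative moves you toward

@dataclass
class Deviation:
    kind: str          # "epistemic", "motivational", or "axiological"
    description: str   # how you differ from baseline human assumptions

@dataclass
class WorldModel:
    standards: list[Standard] = field(default_factory=list)
    goals: list[Goal] = field(default_factory=list)
    projects: list[Project] = field(default_factory=list)
    deviations: list[Deviation] = field(default_factory=list)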

Multi-Agent Orchestration

Chevin coordinates specialized agents:

  • Project agents - Deep context on specific codebases
  • Task agents - Focused work on specific objectives
  • Sub-agents - Fractal delegation for complex tasks

Agents form hierarchies, share context, and can be interrupted or redirected in real time.
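
A minimal sketch of that delegation pattern follows. The class and method names are assumptions for illustration, not the daemon's real API.

# Illustrative only: agents forming a hierarchy and delegating sub-tasks.
# Names are assumptions, not Chevin's actual classes.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str                      # "project", "task", or "sub"
    objective: str
    context: dict = field(default_factory=dict)
    sub_agents: list["Agent"] = field(default_factory=list)

    def delegate(self, objective: str) -> "Agent":
        """Spawn a sub-agent that inherits this agent's context."""
        child = Agent(role="sub", objective=objective, context=dict(self.context))
        self.sub_agents.append(child)
        return child

# A project agent delegates a focused task, which delegates again (fractal).
project = Agent(role="project", objective="maintain the chevin codebase",
                context={"repo": "autonet-code/chevin"})
task = project.delegate("triage failing tests")
task.delegate("reproduce the failure locally")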

Multi-Provider Backends

Switch between LLM providers without changing your workflow:

  • Claude Agent SDK - Full Claude Code capabilities via subprocess
  • OpenCode - Multi-provider HTTP server (Gemini, OpenAI, Claude, local models)

Backend selection happens in the UI. No config files, no restarts.
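
The same switch is also exposed over the daemon's local API (see the API section below). A standard-library sketch follows; the JSON body shape is an assumption, not a documented contract.

# Switch the LLM backend via the local API.
# The endpoint and the "claude-sdk"/"opencode" values come from the API table
# below; the request body shape ({"backend": ...}) is an assumption.
import json
import urllib.request

payload = json.dumps({"backend": "opencode"}).encode()  # or "claude-sdk"
req = urllib.request.Request(
    "http://127.0.0.1:8420/api/config/backend",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())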

Service Integration

Connect external services that Chevin can use on your behalf:

  • Calendar, email, task managers
  • Code repositories, CI/CD
  • Custom APIs and automations

The more services you connect, the more Chevin can do autonomously.

Quick Start

Option 1: Download (Recommended)

  1. Download Chevin.exe from releases
  2. Double-click to run
  3. Open autonet.computer and connect

Option 2: Run from Source

git clone https://github.com/autonet-code/chevin.git
cd chevin
pip install -e .
chevin --daemon
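
Once the daemon is up, you can check that it is listening by hitting the documented health endpoint, for example with a few lines of standard-library Python:

# Quick liveness check against GET /api/health on the local API.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:8420/api/health") as resp:
    print(resp.status, resp.read().decode())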

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    Sidekick Web App                              │
│                    (autonet.computer)                            │
└──────────────────────────┬──────────────────────────────────────┘
                           │ WebSocket / HTTP
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Chevin Daemon                               │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  World Model    │  │  Agent Registry │  │  Task Board     │  │
│  │  (who you are)  │  │  (coordination) │  │  (what to do)   │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
│                           │                                      │
│  ┌─────────────────────────────────────────────────────────────┐│
│  │  Orchestrator                                                ││
│  │  • Claude SDK backend (subprocess, Claude Code CLI)          ││
│  │  • OpenCode backend (HTTP, multi-provider)                   ││
│  └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│                    LLM Providers (Today)                         │
│         Claude  │  Gemini  │  OpenAI  │  Local Models            │
└─────────────────────────────────────────────────────────────────┘
                           │
                           ▼ (Future)
┌─────────────────────────────────────────────────────────────────┐
│                    Autonet Network                               │
│  Decentralized training, inference, and constitutional governance│
└─────────────────────────────────────────────────────────────────┘

API

Local API at http://127.0.0.1:8420:

Endpoint                    Description
GET /api/health             Health check
GET /api/codebases          List registered codebases
GET /api/tasks              Hierarchical task board
GET /api/agents             Active agent hierarchy with PIDs
GET /api/config/backend     Current LLM backend configuration
POST /api/config/backend    Switch backends (claude-sdk / opencode)
WebSocket /ws               Real-time events and chat

The full OpenAPI spec is served at /docs while the daemon is running.
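
For real-time events, a minimal client sketch against the /ws endpoint is shown below. It uses the third-party websockets package; since the event schema is not documented here, messages are printed as raw text.

# Illustrative WebSocket client for the /ws endpoint (real-time events and chat).
# Requires: pip install websockets
import asyncio
import websockets

async def listen() -> None:
    async with websockets.connect("ws://127.0.0.1:8420/ws") as ws:
        async for message in ws:
            print(message)  # event schema undocumented; print raw frames

asyncio.run(listen())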

The Path to Decentralization

Today, Chevin is a local daemon that calls centralized APIs. This is the bootstrap phase - building something useful that people actually want to run.

The daemon is designed to evolve into an Autonet node:

  1. Your interactions improve models - Training data stays private but contributes to collective intelligence
  2. Your node can serve inference - Earn tokens by providing compute
  3. Your node participates in governance - Constitutional principles constrain all network decisions
  4. Your world model stays yours - Encrypted, portable, under your control

The goal isn't "decentralization for its own sake." It's AI infrastructure that can't be enshittified, that serves users rather than shareholders, that accumulates value for the people who use it.

Development

# Install with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run daemon in development mode
chevin --daemon --verbose

See CLAUDE.md for development guidelines and architecture details.

Related Projects

  • Sidekick - Flutter web app for interacting with Chevin (autonet.computer)
  • Autonet - Decentralized AI training, inference, and governance
  • Autonet Boats - Autonomous vessels running Chevin nodes (autonet.boats)

License

MIT
