```shell
npm i -g @codex-infinity/pi-infinity
```
Pi Infinity is a coding agent that can run forever.
Run locally or on bare metal GPU hardware.
Two flags turn Pi into a fully autonomous coding agent:
- `--auto-next-steps` -- After each response, automatically continues with the next logical steps (including testing)
- `--auto-next-idea` -- Generates and implements new improvement ideas for your codebase
```shell
# Autonomous coding -- completes tasks then moves to the next one
pinf --auto-next-steps "fix all lint errors and add tests"

# Fully autonomous -- dreams up and implements improvements forever
pinf --auto-next-steps --auto-next-idea
```

Install globally:

```shell
npm install -g @codex-infinity/pi-infinity
```

Then run `pinf` to get started.
Set your API key for any supported provider:
```shell
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...

pinf "your prompt"
```

- Autonomous operation -- `--auto-next-steps` keeps it working without intervention
- Idea generation -- `--auto-next-idea` brainstorms and implements improvements
- Any LLM -- OpenAI, Anthropic, Google, local models, bring your own provider
- Local execution -- runs entirely on your machine
- GPU cloud -- deploy on bare metal GPU hardware for long-running sessions
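Putting the features above together, a fully autonomous session on your own machine might look like the following sketch. Only the flags and environment variables documented above are used; the API key value and the prompt are placeholders:

```shell
# Pick any one supported provider key (placeholder value shown)
export ANTHROPIC_API_KEY=sk-ant-...

# Complete the task, continue with next steps, then keep
# brainstorming and implementing new improvements
pinf --auto-next-steps --auto-next-idea "improve test coverage in src/"
```

The process runs until you stop it, so for long sessions consider running it under a terminal multiplexer or on a GPU host as described above.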
If you use pi or other coding agents for open source work, please share your sessions.
Public OSS session data helps improve coding agents with real-world tasks, tool use, failures, and fixes instead of toy benchmarks.
For the full explanation, see this post on X.
To publish sessions, use badlogic/pi-share-hf. Read its README.md for setup instructions. All you need is a Hugging Face account, the Hugging Face CLI, and pi-share-hf.
You can also watch this video, where I show how I publish my pi-mono sessions.
I regularly publish my own pi-mono work sessions here:
| Package | Description |
|---|---|
| @codex-infinity/pi-infinity | Interactive coding agent CLI |
| @mariozechner/pi-ai | Unified multi-provider LLM API (OpenAI, Anthropic, Google, etc.) |
| @mariozechner/pi-agent-core | Agent runtime with tool calling and state management |
| @mariozechner/pi-mom | Slack bot that delegates messages to the pi coding agent |
| @mariozechner/pi-tui | Terminal UI library with differential rendering |
| @mariozechner/pi-web-ui | Web components for AI chat interfaces |
| @mariozechner/pi-pods | CLI for managing vLLM deployments on GPU pods |
npm install      # Install all dependencies
npm run build    # Build all packages
npm run check    # Lint, format, and type check
./test.sh        # Run tests (skips LLM-dependent tests without API keys)
./pi-test.sh     # Run pi from sources (must be run from repo root)

See CONTRIBUTING.md for contribution guidelines and AGENTS.md for project-specific rules.
MIT