Warning
This project is under active development. Resources provided (n8n workflows) should be used for experimentation only as no extensive testing has been performed yet.
DevLLMOps is a set of specifications and tools that introduces the AIgile methodology, enabling teams to develop and deploy software at AI speed while keeping humans in the loop where it matters.
Ship fast or die slow. A methodology for AI-native software development in teams.
The traditional SDLC (Requirements > Design > Code > Test > Review > Deploy > Monitor) assumed building was expensive. That constraint is gone. AI agents collapse these stages into a tight loop where intent, code, tests, and deployment converge simultaneously.
Based on OCPA specs for a testable project structure and rooted in DevOps principles for reliability and maintainability.
See devllmops-demo-quizapp for a reference implementation: a quiz app built end-to-end with DevLLMOps + OCPA specs, including CI workflows, AI review, intent-driven issues, and CLAUDE.md/TEAM.md/REVIEW.md configuration.
```mermaid
flowchart TD
    A["`**Human Intent**`"] --> B["`**AI Agent**`"]
    B --> C["`**Code + Tests**`"]
    C --> D["`**Automated CI**
    lint, test, security`"]
    D --> E["`**AI Agent Review**
    adversarial`"]
    E --> F{Pass?}
    F -- Yes --> G["`**Deploy**`"]
    G --> H["`**Observe**`"]
    H --> A
    F -- "Fail (routine)" --> I["`**Agent fixes**`"]
    I --> D
    F -- "Fail (novel)" --> J["`**Human Review**`"]
    J --> B
```
Stages don't get faster. They merge. The agent doesn't know what "phase" it's in. There's just intent, context, and iteration.
| Role | Evolved From | Responsibility |
|---|---|---|
| Product Architect | CTO / Tech Lead | Defines intent, sets architecture guardrails, handles exceptions agents can't resolve, makes release decisions |
| Context Engineer | Developer | Crafts agent context, steers AI work, reviews complex/novel changes |
| Quality Sentinel | QA + DevOps | Owns security review of AI output, manages observability and CI/CD, monitors the feedback loop |
| AI Ops Lead (large teams) | New role | Manages agent costs, model selection, prompt optimization, orchestration pipelines |
See Team Roles & Organization for full details, team structures, and transition from Agile.
| Tool | Purpose | Install |
|---|---|---|
| Claude Code | AI agent (default) | npm install -g @anthropic-ai/claude-code |
| Docker | Containerization | See docs |
| Make | Command standardization | apt install make / brew install make |
| n8n | Workflow automation | Host org-wide or self-host with Docker |
| Gitleaks | Secrets scanning | Runs in CI (GitHub Actions) |
See Tooling Setup for full installation and configuration.
- Enable branch protection on `main` and `release` (require status checks, no force-push)
- Set up GitHub Projects board: `Backlog` > `Ready` > `AI Ready` > `In Progress` > `Verification` > `Human Review` > `Done`
- Add secrets: `ANTHROPIC_API_KEY`, and optionally `KUBE_CONFIG`, `SCW_ACCESS_KEY`/`SCW_SECRET_KEY`
- Copy the provided GitHub Actions workflows and issue template
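The checklist above can be scripted with the GitHub CLI. A sketch, assuming `gh` is authenticated; `OWNER/REPO` and the `ci` status-check name are placeholders, and the branch-protection payload may need adjusting for your repository:

```shell
#!/bin/sh
# Sketch only: automates the setup checklist with the GitHub CLI.
# OWNER/REPO and the "ci" check name are placeholders.
REPO="OWNER/REPO"

# Branch protection: require status checks, forbid force-pushes.
for BRANCH in main release; do
  gh api -X PUT "repos/$REPO/branches/$BRANCH/protection" --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["ci"] },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null,
  "allow_force_pushes": false
}
EOF
done

# Secrets for the Actions workflows (gh prompts for each value).
gh secret set ANTHROPIC_API_KEY --repo "$REPO"
gh secret set KUBE_CONFIG --repo "$REPO"   # optional
```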
See GitHub Setup for step-by-step configuration.
```
.
├── .github/          # CI/CD pipelines (guardrails) and templates
├── app/              # Service(s) with Dockerfile
├── docs/             # All documentation (except README.md)
├── k8s/              # Helm chart (if using K8s)
├── scripts/          # POSIX deployment scripts
├── compose.base.yml  # Shared service config
├── compose.dev.yml   # Dev environment
├── compose.prod.yml  # Production environment
├── compose.test.yml  # Test environment
├── Makefile          # Standardized commands
├── CLAUDE.md         # Agent context (architecture, conventions)
├── TEAM.md           # Team roster for agent review routing
├── REVIEW.md         # Agent review guidelines
├── VERSION           # Semantic version
└── .env.example      # Environment variables template
```
See OCPA Specs for full conventions (versioning, Dockerfiles, Makefile commands, env validation).
Templates available: CLAUDE.md, TEAM.md, REVIEW.md.
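The four compose files in the tree above are meant to be layered. A sketch assuming standard Docker Compose file merging, where later `-f` files override earlier ones (the `app` service name is a placeholder):

```shell
# Development: base config plus dev overrides (bind mounts, debug ports, ...).
docker compose -f compose.base.yml -f compose.dev.yml up -d

# Test: base plus test overrides, run once and clean up.
# "app" is a placeholder for your service name.
docker compose -f compose.base.yml -f compose.test.yml run --rm app make test

# Production: base plus prod overrides.
docker compose -f compose.base.yml -f compose.prod.yml up -d
```

In practice these invocations live behind the standardized Makefile commands, so the team never types the `-f` chains by hand.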
Open a GitHub Issue using the intent template:
```
Intent: Add user authentication with JWT
Context: We need login/signup for the API. Using PostgreSQL for storage.
Acceptance: POST /auth/login returns a JWT. Protected routes return 401 without token.
Constraints: Tokens expire after 1h. Use bcrypt for password hashing.
Ready
```
Then let the agent work. Steer, iterate, ship.
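The same intent issue can be filed from the terminal with the GitHub CLI; the `ai-ready` label is an assumption, so match it to whatever label routes issues into your `AI Ready` column:

```shell
# Open the intent issue from the CLI. The "ai-ready" label is an
# assumption -- use whatever label feeds your AI Ready board column.
gh issue create \
  --title "Intent: Add user authentication with JWT" \
  --label "ai-ready" \
  --body 'Context: We need login/signup for the API. Using PostgreSQL for storage.
Acceptance: POST /auth/login returns a JWT. Protected routes return 401 without token.
Constraints: Tokens expire after 1h. Use bcrypt for password hashing.'
```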
AI agents consume tokens at scale. A single Context Engineer running Claude can use $30-150+/day in API costs. Multiply by team size.
Monthly cost for a team of 4: $3,000-15,000+ in AI tokens alone, on top of infrastructure and salaries.
Mitigations:
- Set hard budget limits on your AI provider dashboard
- Use cheaper models (Haiku) for routine tasks, expensive models (Opus) for complex reasoning
- Monitor usage daily -- there is no "unlimited plan"
- See Cost Management for detailed strategies
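The monthly estimate above is simple multiplication; a back-of-envelope sketch using this section's figures (4 engineers, $30-150/day each, ~25 working days a month):

```shell
# Back-of-envelope token budget from this section's assumptions:
# 4 engineers, $30-150 per engineer per day, ~25 working days/month.
engineers=4
days=25

low=$(( engineers * 30 * days ))    # lower bound
high=$(( engineers * 150 * days )) # upper bound

echo "Monthly token spend: \$${low} - \$${high}"
# Prints: Monthly token spend: $3000 - $15000
```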
This spec defaults to Claude via Claude Code CLI. To use an alternative:
| Tool | Type | Notes |
|---|---|---|
| aider | CLI agent | Works with any OpenAI-compatible API (OpenRouter, Ollama, local models) |
| Continue.dev | IDE extension | VS Code/JetBrains, any OpenAI-compatible endpoint |
| OpenHands | Agent platform | Open-source, self-hosted |
Any model accessible via an OpenAI-compatible API (GPT-4, Llama, Mistral, DeepSeek) can replace Claude. Adapt the CLAUDE.md to your agent's context mechanism (e.g., .aider.conf.yml for aider).
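As a concrete example of the swap, aider can target any OpenAI-compatible endpoint through environment variables; the endpoint URL, key, and model name below are placeholders for your provider's values:

```shell
# Point aider at an OpenAI-compatible endpoint instead of Anthropic.
# URL, key, and model name are placeholders -- substitute your provider's.
export OPENAI_API_BASE="https://openrouter.ai/api/v1"
export OPENAI_API_KEY="sk-..."

# aider reads project context from .aider.conf.yml and conventions
# files, taking the place of CLAUDE.md in this spec.
aider --model openai/gpt-4o --message "Implement the intent in the open issue"
```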
| Document | Description |
|---|---|
| Methodology | The DevLLMOps workflow in detail |
| Team Roles & Organization | Roles, team structures, transition from Agile |
| Tooling Setup | Installation and configuration for all tools |
| n8n Setup | n8n deployment and automation workflows |
| GitHub Setup | Repository, Actions, Projects configuration |
| Security | Securing AI-generated code |
| Cost Management | Token cost control strategies |
MIT -- See LICENSE

