Real-time AI perception system optimized for Raspberry Pi Zero W.
Detects, tracks, and describes objects and people through a live camera feed with a tactical HUD overlay.
Camera -> Night Vision -> Detection -> Tracking -> Pose Estimation -> HUD
| Stage | What happens |
|---|---|
| Camera | Captures frames via picamera2, OpenCV, or mock |
| Night Vision | CLAHE contrast enhancement + gamma correction |
| Detection | TFLite SSD MobileNet (primary) or Caffe DNN (fallback) |
| Tracking | Centroid tracker assigns persistent IDs across frames |
| Pose | MediaPipe skeleton overlay when a person is detected |
| Scene AI | Claude (Haiku) generates a 2-3 line tactical HUD description |
| HUD | Bounding boxes, track IDs, skeleton, FPS, and scene text drawn in real time |
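The tracking stage above can be sketched in a few lines: each detection box is reduced to its centre point, and new centres are greedily matched to the nearest existing track. This is an illustrative sketch, not the actual `tracking.py` API — class and parameter names are assumptions:

```python
import math
from itertools import count

class CentroidTracker:
    """Greedy nearest-centroid matcher that keeps IDs stable across frames."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance  # pixels; farther than this gets a new ID
        self.tracks = {}                  # track_id -> (x, y) last known centroid
        self._ids = count()

    def update(self, boxes):
        """boxes: list of (x1, y1, x2, y2). Returns {track_id: centroid}."""
        centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
        assigned = {}
        free = dict(self.tracks)  # tracks not yet claimed this frame
        for c in centroids:
            # Match to the closest unclaimed existing track, if near enough.
            best = min(free, key=lambda i: math.dist(free[i], c), default=None)
            if best is not None and math.dist(free[best], c) <= self.max_distance:
                assigned[best] = c
                free.pop(best)
            else:
                assigned[next(self._ids)] = c  # unmatched: issue a fresh ID
        self.tracks = assigned
        return assigned
```

A box that drifts a few pixels between frames keeps its ID; a box appearing far from every known track gets a new one.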
| | Minimum | Recommended |
|---|---|---|
| Board | Raspberry Pi Zero W | Raspberry Pi 3B+ / 4 |
| RAM | 512 MB | 1 GB+ |
| Camera | Pi Camera Module v1/v2 or USB webcam | Pi Camera Module v2 |
| Storage | 2 GB free | 4 GB free |
| OS | Raspberry Pi OS Lite (Bookworm) | Raspberry Pi OS Lite (Bookworm) |
Pi Zero W uses ARMv6. MediaPipe is unavailable on ARMv6 — pose estimation is automatically disabled.
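A platform gate of roughly this shape (illustrative — not the exact code in `pose.py`) is how the automatic disable can be done:

```python
import platform

def pose_available():
    """Pose estimation needs MediaPipe, which ships no ARMv6 wheels.
    Disable it on armv6l (Pi Zero W) instead of crashing on import."""
    if platform.machine().startswith("armv6"):
        return False
    try:
        import mediapipe  # noqa: F401 -- may also be missing on other boards
        return True
    except ImportError:
        return False
```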
git clone https://github.com/vaishcodescape/V.A.R.G.git
cd V.A.R.Gchmod +x scripts/*.sh
./scripts/install.shThis will:
- Install system packages via
apt(Pi only) - Enable camera, I2C, and SPI interfaces (Pi only)
- Create a Python virtual environment at
varg_env/ - Install Python dependencies from
requirements.txt - Download TFLite detection models
- Set up the Waveshare OLED library
- Install a
systemdservice for auto-start on boot (Pi only)
```bash
# Add your Anthropic API key for Claude scene intelligence
echo 'ANTHROPIC_API_KEY=sk-ant-...' >> .env
```

All other settings live in `varg/config/settings.yaml`.
```bash
./scripts/start.sh              # normal mode (shows cv2 window)
./scripts/start.sh --headless   # no display (SSH / headless Pi)
```

All tuneable parameters are in `varg/config/settings.yaml`:
```yaml
camera:
  width: 320
  height: 240
  fps: 10

detection:
  confidence_threshold: 0.50
  target_classes: [person, car, bicycle, motorbike, dog, cat]

llm:
  enabled: true
  model: "claude-haiku-4-5-20251001"
  update_interval: 10.0      # seconds between Claude calls

performance:
  frame_skip: 3              # process every 3rd captured frame
  detection_interval: 3      # run detector every 3 processed frames
  pose_interval: 2           # run pose every 2 processed frames

headless: false
```

```
V.A.R.G/
├── main.py                  # entry point
├── setup_models.py          # downloads TFLite + Caffe models
├── requirements.txt
├── varg/
│   ├── config/
│   │   └── settings.yaml    # all runtime configuration
│   ├── hardware/
│   │   ├── camera.py        # threaded capture (picamera2 / opencv / mock)
│   │   └── night_vision.py  # CLAHE + gamma enhancement
│   ├── vision/
│   │   ├── detection.py     # TFLite SSD / Caffe DNN / mock detector
│   │   ├── tracking.py      # centroid tracker with persistent IDs
│   │   ├── pose.py          # MediaPipe skeleton (ARMv7+ only)
│   │   └── overlay.py       # HUD drawing (boxes, labels, skeleton, FPS)
│   ├── ai/
│   │   └── llm.py           # two-tier scene intelligence (local + Claude)
│   └── core/
│       └── pipeline.py      # orchestrates all subsystems
├── models/                  # downloaded by setup_models.py
├── scripts/
│   ├── install.sh           # full installation
│   ├── start.sh             # activate venv + launch
│   ├── verify.sh            # check installation health
│   ├── monitor.sh           # live service status + logs
│   ├── troubleshoot.sh      # diagnose common problems
│   └── setup_oled.sh        # Waveshare OLED library setup
└── Raspberry/               # Waveshare vendor library
```
Models are downloaded automatically by `setup_models.py` or by `install.sh`.
| Model | Size | Speed on Pi Zero W | Notes |
|---|---|---|---|
| TFLite SSD MobileNet v1 (INT8) | ~3 MB | ~1-2 FPS | Primary; requires tflite-runtime |
| Caffe MobileNetSSD | ~23 MB | ~0.5 FPS | Automatic fallback |
| Mock | 0 MB | instant | Used when no model is found |
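The primary/fallback/mock chain in the table can be modelled as a simple capability probe. The file names below are illustrative assumptions, not the actual paths used by `detection.py`:

```python
import os

def pick_backend(models_dir="models"):
    """Choose a detector backend in priority order: TFLite -> Caffe -> mock."""
    tflite_model = os.path.join(models_dir, "detect.tflite")         # assumed name
    caffe_model = os.path.join(models_dir, "MobileNetSSD_deploy.caffemodel")
    try:
        import tflite_runtime.interpreter  # noqa: F401
        has_tflite = os.path.exists(tflite_model)
    except ImportError:
        has_tflite = False
    if has_tflite:
        return "tflite"
    if os.path.exists(caffe_model):
        return "caffe"   # loaded via cv2.dnn in the real pipeline
    return "mock"        # keeps the HUD running with no model files at all
```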
To download models manually:
```bash
source varg_env/bin/activate
python setup_models.py
```

| Script | Purpose |
|---|---|
| `./scripts/install.sh` | Full install (run once after cloning) |
| `./scripts/install.sh --skip-models` | Install without downloading models |
| `./scripts/install.sh --dry-run` | Preview what would be installed |
| `./scripts/start.sh` | Start V.A.R.G |
| `./scripts/start.sh --headless` | Start without display window |
| `./scripts/verify.sh` | Check that everything is installed correctly |
| `./scripts/monitor.sh` | Show service status, CPU, memory, temperature |
| `./scripts/troubleshoot.sh` | Diagnose and print fixes for common issues |
The installer registers a `varg.service` unit that starts automatically on boot.
```bash
sudo systemctl start varg.service     # start now
sudo systemctl stop varg.service      # stop
sudo systemctl restart varg.service   # restart
sudo journalctl -u varg.service -f    # live logs
```

Resource limits applied by the service: `MemoryMax=400M`, `CPUQuota=80%`.
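Those limits correspond to a unit file roughly of this shape — a sketch only; the paths and the exact unit written by the installer may differ:

```ini
[Unit]
Description=V.A.R.G perception pipeline
After=network.target

[Service]
WorkingDirectory=/home/pi/V.A.R.G
ExecStart=/home/pi/V.A.R.G/varg_env/bin/python main.py --headless
Restart=on-failure
MemoryMax=400M
CPUQuota=80%

[Install]
WantedBy=multi-user.target
```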
When `ANTHROPIC_API_KEY` is set, V.A.R.G asks Claude Haiku for a scene description every 10 seconds (configurable) and displays the response on the HUD as a 2-3 line tactical summary.
Without an API key, a local rule-based fallback runs instantly with no network dependency:
```
2 persons detected
Person #1 moving right
Person #2 stationary
```
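The fallback amounts to simple template rules over the tracker output. A minimal sketch, assuming tracks arrive as `(class_name, horizontal_motion)` pairs — the real `llm.py` representation may differ:

```python
def local_scene_summary(tracks):
    """tracks: list of (class_name, dx) where dx is horizontal motion in px/frame.
    Produces a short HUD-style summary with no network call."""
    persons = [t for t in tracks if t[0] == "person"]
    lines = [f"{len(persons)} person{'s' if len(persons) != 1 else ''} detected"]
    for i, (_, dx) in enumerate(persons, start=1):
        if dx > 2:
            motion = "moving right"
        elif dx < -2:
            motion = "moving left"
        else:
            motion = "stationary"
        lines.append(f"Person #{i} {motion}")
    return "\n".join(lines)
```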
V.A.R.G supports the Waveshare 1.51" OLED over SPI for status output on headless Pi deployments.
Setup:
- Place `OLED_Module_Code.7z` in the project root
- Run `./scripts/setup_oled.sh`
- Enable SPI: `sudo raspi-config nonint do_spi 0`
| Setting | Location | Effect |
|---|---|---|
| `frame_skip` | `settings.yaml` | Higher = fewer frames processed, lower CPU |
| `detection_interval` | `settings.yaml` | Higher = less frequent detection passes |
| `confidence_threshold` | `settings.yaml` | Higher = fewer (but more reliable) detections |
| `headless: true` | `settings.yaml` or `--headless` | Skips all cv2 drawing, saves ~30% CPU |
| `arm_freq=1000` | `/boot/config.txt` | Mild overclock; set automatically by installer on Pi Zero W |
```bash
./scripts/troubleshoot.sh   # automated diagnostics
./scripts/verify.sh         # installation health check
```

Common issues:
| Problem | Fix |
|---|---|
| `varg_env` not found | Run `./scripts/install.sh` |
| No models found | Run `python setup_models.py` |
| Camera `/dev/video0` not found | `sudo raspi-config nonint do_camera 0 && sudo reboot` |
| SPI not enabled | `sudo raspi-config nonint do_spi 0 && sudo reboot` |
| Scene AI not working | Set `ANTHROPIC_API_KEY` in `.env` |
| High memory usage | Set `headless: true` in `settings.yaml` |
See LICENSE.