18 commits
91c6c42
feat: migrate Coral TPU skill from pycoral to ai-edge-litert (LiteRT)
solderzzc Mar 25, 2026
0865181
docs: update README to reflect native LiteRT deployment for all detec…
solderzzc Mar 25, 2026
e16a15a
docs: fix README — remove 'all skills run natively' claim (OpenVINO s…
solderzzc Mar 25, 2026
d4c75ce
docs: add Face Recognition and License Plate Recognition to skill cat…
solderzzc Mar 25, 2026
91bd79d
feat: add Coral TPU detection skill to registry (skills.json)
solderzzc Mar 28, 2026
9c59788
fix: add model download step to Coral TPU deploy.sh
solderzzc Mar 28, 2026
b7df5e0
feat: add pre-compiled SSD MobileNet V2 Edge TPU model (6.7MB)
solderzzc Mar 28, 2026
57dc976
fix: add CPU-only fallback model for Coral TPU skill
solderzzc Mar 28, 2026
c5f0d40
refactor: switch Coral TPU deployment strategy back to Docker with US…
solderzzc Mar 28, 2026
52c0486
refactor(skills): coral tpu now uses js monitor for host entry tracking
solderzzc Mar 28, 2026
b8f4ebe
fix(yolo-coral-tpu): synchronize host frame map to absolute container…
solderzzc Mar 28, 2026
991c9e8
feat(yolo-coral-tpu): explicit usb requirement in manifest
solderzzc Mar 28, 2026
c60e761
refactor(yolo-coral-tpu): migrate from docker to native python execution
solderzzc Mar 28, 2026
51a107e
refactor(yolo-coral-tpu): remove monitor.js in favor of direct python…
solderzzc Mar 28, 2026
ec559cf
feat(yolo-coral-tpu): migrate completely to native inline OS GUI driv…
solderzzc Mar 28, 2026
cdd9651
feat(yolo-coral-tpu): implement multi-stage native macOS authorizatio…
solderzzc Mar 28, 2026
755d5e4
fix(coral-tpu): correct Apple Silicon driver path for libedgetpu.1.dylib
solderzzc Mar 28, 2026
829cc14
fix(yolo-coral-tpu): wrap ask_sudo cmd in bash -c to preserve privile…
solderzzc Mar 28, 2026
45 changes: 25 additions & 20 deletions README.md
@@ -58,8 +58,10 @@ Each skill is a self-contained module with its own model, parameters, and [commu
| Category | Skill | What It Does | Status |
|----------|-------|--------------|:------:|
| **Detection** | [`yolo-detection-2026`](skills/detection/yolo-detection-2026/) | Real-time 80+ class detection — auto-accelerated via TensorRT / CoreML / OpenVINO / ONNX | ✅|
| | [`yolo-detection-2026-coral-tpu`](skills/detection/yolo-detection-2026-coral-tpu/) | Google Coral Edge TPU — ~4ms inference via USB accelerator ([Docker-based](#detection--segmentation-skills)) | 🧪 |
| | [`yolo-detection-2026-openvino`](skills/detection/yolo-detection-2026-openvino/) | Intel NCS2 USB / Intel GPU / CPU — multi-device via OpenVINO ([Docker-based](#detection--segmentation-skills)) | 🧪 |
| | [`yolo-detection-2026-coral-tpu`](skills/detection/yolo-detection-2026-coral-tpu/) | Google Coral Edge TPU — ~4ms inference via USB accelerator ([LiteRT](#detection--segmentation-skills)) | 🧪 |
| | [`yolo-detection-2026-openvino`](skills/detection/yolo-detection-2026-openvino/) | Intel NCS2 USB / Intel GPU / CPU — multi-device via OpenVINO ([architecture](#detection--segmentation-skills)) | 🧪 |
| | `face-detection-recognition` | Face detection & recognition — identify known faces from camera feeds | 📐 |
| | `license-plate-recognition` | License plate detection & recognition — read plate numbers from camera feeds | 📐 |
| **Analysis** | [`home-security-benchmark`](skills/analysis/home-security-benchmark/) | [143-test evaluation suite](#-homesec-bench--how-secure-is-your-local-ai) for LLM & VLM security performance | ✅ |
| **Privacy** | [`depth-estimation`](skills/transformation/depth-estimation/) | [Real-time depth-map privacy transform](#-privacy--depth-map-anonymization) — anonymize camera feeds while preserving activity | ✅ |
| **Segmentation** | [`sam2-segmentation`](skills/segmentation/sam2-segmentation/) | Interactive click-to-segment with Segment Anything 2 — pixel-perfect masks, point/box prompts, video tracking | ✅ |
@@ -74,38 +76,41 @@ Each skill is a self-contained module with its own model, parameters, and [commu

### Detection & Segmentation Skills

Detection and segmentation skills process visual data from camera feeds — detecting objects, segmenting regions, or analyzing scenes. All skills use the same **JSONL stdin/stdout protocol**: Aegis writes a frame to a shared volume, sends a `frame` event on stdin, and reads `detections` from stdout. This means every detection skill — whether running natively or inside Docker — is interchangeable from Aegis's perspective.
Detection and segmentation skills process visual data from camera feeds — detecting objects, segmenting regions, or analyzing scenes. All skills use the same **JSONL stdin/stdout protocol**: Aegis writes a frame to a shared volume, sends a `frame` event on stdin, and reads `detections` from stdout. Every detection skill is interchangeable from Aegis's perspective.
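The exchange above can be sketched in a few lines of Python. This is a hedged sketch: the README only names the `frame` and `detections` events, so the `type`/`id`/`path` field names here are illustrative assumptions.

```python
import json

def make_frame_event(path, frame_id):
    # Host side: announce a frame already written to the shared volume.
    # Field names ("type", "id", "path") are assumed for illustration.
    return json.dumps({"type": "frame", "id": frame_id, "path": path})

def parse_detections(line):
    # Host side: parse one detections event from the skill's stdout.
    event = json.loads(line)
    if event.get("type") != "detections":
        raise ValueError("unexpected event type: %r" % event.get("type"))
    return event["detections"]

msg = make_frame_event("/tmp/aegis_detection/frame.jpg", 1)
reply = '{"type": "detections", "id": 1, "detections": [{"class": "person", "conf": 0.91}]}'
dets = parse_detections(reply)
```

Because every skill speaks this same line-oriented protocol, swapping backends is a matter of launching a different process behind the same pipe.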

```mermaid
graph TB
CAM["📷 Camera Feed"] --> GOV["Frame Governor (5 FPS)"]
GOV --> |"frame.jpg → shared volume"| PROTO["JSONL stdin/stdout Protocol"]

PROTO --> NATIVE["🖥️ Native: yolo-detection-2026"]
PROTO --> DOCKER["🐳 Docker: Coral TPU / OpenVINO"]
PROTO --> YOLO["yolo-detection-2026"]
PROTO --> CORAL["yolo-detection-2026-coral-tpu"]
PROTO --> OV["yolo-detection-2026-openvino"]

subgraph Native["Native Skill (runs on host)"]
NATIVE --> ENV["env_config.py auto-detect"]
subgraph Backends["Skill Backends"]
YOLO --> ENV["env_config.py auto-detect"]
ENV --> TRT["NVIDIA → TensorRT"]
ENV --> CML["Apple Silicon → CoreML"]
ENV --> OV["Intel → OpenVINO IR"]
ENV --> OVIR["Intel → OpenVINO IR"]
ENV --> ONNX["AMD / CPU → ONNX"]
end

subgraph Container["Docker Container"]
DOCKER --> CORAL["Coral TPU → pycoral"]
DOCKER --> OVIR["OpenVINO → Ultralytics OV"]
DOCKER --> CPU["CPU fallback"]
CORAL -.-> USB["USB/IP passthrough"]
OVIR -.-> DRI["/dev/dri · /dev/bus/usb"]
CORAL --> LITERT["ai-edge-litert + libedgetpu"]
LITERT --> TPU["Coral USB → Edge TPU delegate"]
LITERT --> CPU1["No TPU → CPU fallback"]

OV --> OVSDK["OpenVINO SDK"]
OVSDK --> NCS2["Intel NCS2 USB"]
OVSDK --> IGPU["Intel iGPU / Arc"]
OVSDK --> CPU2["CPU fallback"]
end

NATIVE --> |"stdout: detections"| AEGIS["Aegis IPC → Live Overlay + Alerts"]
DOCKER --> |"stdout: detections"| AEGIS
YOLO --> |"stdout: detections"| AEGIS["Aegis IPC → Live Overlay + Alerts"]
CORAL --> |"stdout: detections"| AEGIS
OV --> |"stdout: detections"| AEGIS
```

- **Native skills** run directly on the host — [`env_config.py`](skills/lib/env_config.py) auto-detects the GPU and converts models to the fastest format (TensorRT, CoreML, OpenVINO IR, ONNX)
- **Docker skills** wrap hardware-specific runtimes in a container — cross-platform USB/device access without native driver installation
- **Unified protocol** — each skill creates its own Python venv or Docker container, but Aegis sees the same JSONL interface regardless of backend
- **Coral TPU** uses [ai-edge-litert](https://pypi.org/project/ai-edge-litert/) (LiteRT) with the `libedgetpu` delegate — supports Python 3.9–3.13 on Linux, macOS, and Windows
- **Same output** — Aegis sees identical JSONL from all skills, so detection overlays, alerts, and forensic analysis work with any backend
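A minimal sketch of how a skill might open the Edge TPU through the LiteRT delegate, assuming the tflite-runtime-compatible `Interpreter`/`load_delegate` API that ai-edge-litert exposes; the per-OS library names follow the libedgetpu convention and should be treated as assumptions here:

```python
import platform

# libedgetpu shared-library names by OS (libedgetpu convention).
EDGETPU_LIBS = {
    "Linux": "libedgetpu.so.1",
    "Darwin": "libedgetpu.1.dylib",
    "Windows": "edgetpu.dll",
}

def delegate_name(system=None):
    """Pick the libedgetpu shared library for the current (or given) OS."""
    return EDGETPU_LIBS[system or platform.system()]

def make_interpreter(model_path):
    # Imported lazily so callers without ai-edge-litert installed can
    # still use delegate_name() on its own.
    from ai_edge_litert.interpreter import Interpreter, load_delegate
    try:
        delegate = load_delegate(delegate_name())
        return Interpreter(model_path=model_path,
                           experimental_delegates=[delegate])
    except (OSError, ValueError):
        # No Coral attached or driver missing: CPU fallback, as in the
        # diagram above.
        return Interpreter(model_path=model_path)
```

The try/except mirrors the "No TPU → CPU fallback" branch in the diagram: the same `_edgetpu.tflite`-compiled model can still run, just without acceleration.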

#### LLM-Assisted Skill Installation
Expand All @@ -114,7 +119,7 @@ Skills are installed by an **autonomous LLM deployment agent** — not by brittl

1. **Probe** — reads `SKILL.md`, `requirements.txt`, and `package.json` to understand what the skill needs
2. **Detect hardware** — checks for NVIDIA (CUDA), AMD (ROCm), Apple Silicon (MPS), Intel (OpenVINO), or CPU-only
3. **Install** — runs the right commands (`pip install`, `npm install`, `docker build`) with the correct backend-specific dependencies
3. **Install** — runs the right commands (`pip install`, `npm install`, system packages) with the correct backend-specific dependencies
4. **Verify** — runs a smoke test to confirm the skill loads before marking it complete
5. **Determine launch command** — figures out the exact `run_command` to start the skill and saves it to the registry
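Step 4 can be sketched as a minimal smoke test. The `smoke_test` helper, the `"ping"` event name, and the 10-second budget are all assumptions for illustration; the real agent's probe may differ.

```python
import json
import shlex
import subprocess
import sys

def smoke_test(run_command, timeout=10):
    # Feed one event to the skill's run_command and check that it
    # answers on stdout before exiting cleanly.
    proc = subprocess.run(
        shlex.split(run_command),
        input=json.dumps({"type": "ping"}) + "\n",
        capture_output=True, text=True, timeout=timeout,
    )
    # Healthy skill: clean exit and at least one line on stdout.
    return proc.returncode == 0 and bool(proc.stdout.strip())

# Stand-in "skill" that echoes a reply, used here instead of a real
# run_command from the registry.
fake_skill = shlex.quote(sys.executable) + \
    " -c \"import sys; sys.stdin.read(); print('ok')\""
ok = smoke_test(fake_skill)
```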

48 changes: 48 additions & 0 deletions skills.json
@@ -100,6 +100,54 @@
"large"
]
},
{
"id": "yolo-detection-2026-coral-tpu",
"name": "YOLO 2026 Coral TPU",
"description": "Google Coral Edge TPU — real-time object detection with LiteRT (INT8, ~4ms inference at 320×320).",
"version": "2.0.0",
"category": "detection",
"path": "skills/detection/yolo-detection-2026-coral-tpu",
"tags": [
"detection",
"yolo",
"coral",
"edge-tpu",
"litert",
"real-time",
"coco"
],
"platforms": [
"linux-x64",
"linux-arm64",
"darwin-arm64",
"darwin-x64",
"win-x64"
],
"requirements": {
"python": ">=3.9",
"system": "libedgetpu",
"hardware": "Google Coral USB Accelerator"
},
"capabilities": [
"live_detection",
"bbox_overlay"
],
"ui_unlocks": [
"detection_overlay",
"detection_results"
],
"fps_presets": [
0.2,
0.5,
1,
3,
5,
15
],
"model_sizes": [
"nano"
]
},
{
"id": "camera-claw",
"name": "Camera Claw",
53 changes: 0 additions & 53 deletions skills/detection/yolo-detection-2026-coral-tpu/Dockerfile

This file was deleted.

54 changes: 22 additions & 32 deletions skills/detection/yolo-detection-2026-coral-tpu/SKILL.md
@@ -1,16 +1,17 @@
---
name: yolo-detection-2026-coral-tpu
description: "Google Coral Edge TPU — real-time object detection via Docker"
description: "Google Coral Edge TPU — real-time object detection running natively in a local Python environment"
version: 1.0.0
icon: assets/icon.png
entry: scripts/detect.py
entry: scripts/monitor.js
deploy: deploy.sh
runtime: docker
runtime: node

requirements:
docker: ">=20.10"
platforms: ["linux", "macos", "windows"]



parameters:
- name: auto_start
label: "Auto Start"
@@ -78,13 +79,13 @@ mutex: detection

# Coral TPU Object Detection

Real-time object detection using Google Coral Edge TPU accelerator. Runs inside Docker for cross-platform support. Detects 80 COCO classes (person, car, dog, cat, etc.) with ~4ms inference on 320×320 input.
Real-time object detection running natively on the host with the Google Coral Edge TPU accelerator. Detects 80 COCO classes (person, car, dog, cat, etc.) with ~4ms inference on 320×320 input.

## Requirements

- **Google Coral USB Accelerator** (USB 3.0 port recommended)
- **Docker Desktop 4.35+** (all platforms — Linux, macOS, Windows)
- USB 3.0 cable (the included cable is recommended)
- **libusb** framework (installed automatically on Linux/macOS)
- Python 3.9–3.13 with the `ai-edge-litert` (LiteRT) runtime
- Adequate cooling for sustained inference

## How It Works
@@ -94,61 +95,50 @@ Real-time object detection using Google Coral Edge TPU accelerator. Runs inside
```
│ Host (Aegis-AI) │
│ frame.jpg → /tmp/aegis_detection/ │
│ stdin ──→ ┌──────────────────────────────┐ │
│ │ Docker Container │ │
│ │ Native Python Environment │ │
│ │ detect.py │ │
│ │ ├─ loads _edgetpu.tflite │ │
│ │ ├─ reads frame from volume │ │
│ │ ├─ reads frame from disk │ │
│ │ └─ runs inference on TPU │ │
│ stdout ←── │ → JSONL detections │ │
│ └──────────────────────────────┘ │
│ USB ──→ /dev/bus/usb (Linux) or USB/IP (Mac/Win)
│ USB ──→ Native System USB / edgetpu drivers
└─────────────────────────────────────────────────────┘
```

1. Aegis writes camera frame JPEG to shared `/tmp/aegis_detection/` volume
2. Sends `frame` event via stdin JSONL to Docker container
3. `detect.py` reads frame, runs inference on Edge TPU
1. Aegis writes camera frame JPEG to shared `/tmp/aegis_detection/` workspace
2. Sends `frame` event via stdin JSONL to the local Python instance
3. `detect.py` loads the model with LiteRT and runs inference natively on the USB Edge TPU
4. Returns `detections` event via stdout JSONL
5. Same protocol as `yolo-detection-2026` — Aegis sees no difference
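The steps above can be sketched from the skill side as follows. This is a hedged sketch: `run_inference` stands in for the actual LiteRT call, and field names beyond `frame`/`detections` are assumptions.

```python
import json
import sys

def handle_event(line, run_inference):
    # Skill side: turn one stdin line into one stdout line (or None).
    event = json.loads(line)
    if event.get("type") != "frame":
        return None  # ignore non-frame events in this sketch
    dets = run_inference(event["path"])
    return json.dumps({"type": "detections", "id": event.get("id"),
                       "detections": dets})

def main(stdin=sys.stdin, stdout=sys.stdout, run_inference=lambda p: []):
    # Event loop: one JSONL event in, one JSONL event out, flushed so
    # the host never blocks on buffering.
    for line in stdin:
        reply = handle_event(line, run_inference)
        if reply is not None:
            print(reply, file=stdout, flush=True)

# Exercise the handler with a fake inference function.
out = handle_event(
    '{"type": "frame", "id": 7, "path": "/tmp/aegis_detection/frame.jpg"}',
    lambda path: [{"class": "car", "conf": 0.8}],
)
```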

## Platform Setup

### Linux
```bash
# USB Coral should be auto-detected
# Docker uses --device /dev/bus/usb for direct access
# Installs the official google-coral apt packages
./deploy.sh
```

### macOS (Docker Desktop 4.35+)
### macOS
```bash
# Docker Desktop USB/IP handles USB passthrough
# ARM64 Docker image builds natively on Apple Silicon
# Downloads and installs the libedgetpu runtime for macOS
./deploy.sh
```

### Windows
```powershell
# Docker Desktop 4.35+ with USB/IP support
# Or WSL2 backend with usbipd-win
# Installs libedgetpu and the Python environment natively on Windows
.\deploy.bat
```

## Model

Ships with pre-compiled `yolo26n_edgetpu.tflite` (YOLO 2026 nano, INT8 quantized, 320×320). To compile custom models:

```bash
# Requires x86_64 Linux or Docker --platform linux/amd64
python scripts/compile_model.py --model yolo26s --size 320
```
> **Important Deployment Notice**: `deploy.sh` pauses and prompts for your `sudo` password so it can register the `libedgetpu` USB drivers system-wide. If you decline the prompt, it prints the exact terminal commands for configuring them manually.

## Performance

| Input Size | Inference | On-chip | Notes |
|-----------|-----------|---------|-------|
| 320×320 | ~4ms | 100% | Fully on TPU, best for real-time |
| 640×640 | ~20ms | Partial | Some layers on CPU (model segmented) |
| 320x320 | ~4ms | 100% | Fully on TPU, best for real-time |
| 640x640 | ~20ms | Partial | Some layers on CPU (model segmented) |

> **Cooling**: The USB Accelerator aluminum case acts as a heatsink. If too hot to touch during continuous inference, it will thermal-throttle. Consider active cooling or `clock_speed: standard`.

@@ -172,4 +162,4 @@ Same JSONL as `yolo-detection-2026`:
```bash
./deploy.sh
```

The deployer builds the Docker image locally, probes for TPU devices, and sets the runtime command. No packages pulled from external registries beyond Docker base images and Coral apt repo.
The deployer creates a local Python virtual environment, probes for TPU devices, and sets the runtime command. No Docker containers are used.