
GPU-SHARE Phase 3: train submit + cluster-status CLI #401

Open · noahgift wants to merge 5 commits into main from gpu-share-phase3-cli

Conversation


noahgift (Contributor) commented on Mar 4, 2026

Summary

GPU-SHARE Phase 3 CLI: Two new apr train subcommands for cluster management.

  • apr train submit: Places adapter jobs across cluster nodes using a greedy placement algorithm and generates local and SSH launch commands. Supports --dry-run and --json output. A sketch of the placement idea follows this list.
  • apr train cluster-status: Displays the cluster configuration, including nodes, GPUs, VRAM, and adapter capacity.
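A minimal sketch of the greedy placement idea in Rust, assuming hypothetical NodeCapacity and AdapterJob types; the real types live in the entrenar cluster module and will differ:

/// Hypothetical types for illustration only; the actual definitions live in
/// the entrenar cluster module.
struct NodeCapacity {
    name: String,
    free_vram_mb: u64,
}

struct AdapterJob {
    data: String,
    checkpoint: String,
}

/// Greedy placement: each adapter goes to the first node that still has
/// enough VRAM budget, and that node's free budget is reduced accordingly.
/// Returns, per job, the index of the chosen node (or None if nothing fits).
fn place_adapters(nodes: &mut [NodeCapacity], jobs: &[AdapterJob], budget_mb: u64) -> Vec<Option<usize>> {
    let mut placements = Vec::with_capacity(jobs.len());
    for _job in jobs {
        let slot = nodes.iter_mut().position(|node| node.free_vram_mb >= budget_mb);
        if let Some(idx) = slot {
            nodes[idx].free_vram_mb -= budget_mb;
        }
        placements.push(slot);
    }
    placements
}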

Files changed

  • train_commands.rs — Added Submit and ClusterStatus variants
  • dispatch_analysis.rs — Added match arms in dispatch_train_command()
  • commands/train.rs — run_submit() and run_cluster_status() implementations (an illustrative sketch of the wiring follows this list)
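Roughly how the new pieces fit together, as a self-contained Rust sketch; the field names, signatures, and the println! bodies are assumptions, not the actual clap definitions or the real run_* implementations:

use std::path::PathBuf;

// Illustrative shape of the new variants in train_commands.rs.
enum TrainCommand {
    Submit {
        cluster: PathBuf,
        model: PathBuf,
        adapters: Vec<String>, // DATA:CHECKPOINT pairs
        budget_mb: u64,
        dry_run: bool,
        json: bool,
    },
    ClusterStatus {
        cluster: PathBuf,
        json: bool,
    },
}

// Stand-in for the new match arms in dispatch_train_command(); the real arms
// call run_submit() and run_cluster_status() in commands/train.rs.
fn dispatch_train_command(cmd: TrainCommand) {
    match cmd {
        TrainCommand::Submit { cluster, dry_run, .. } => {
            println!("submit: cluster={}, dry_run={}", cluster.display(), dry_run);
        }
        TrainCommand::ClusterStatus { cluster, .. } => {
            println!("cluster-status: cluster={}", cluster.display());
        }
    }
}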

Usage

# Place adapters across the cluster (dry run)
apr train submit --cluster cluster.yaml --model model.apr \
  --adapter data/a.jsonl:ckpt/a --adapter data/b.jsonl:ckpt/b \
  --budget-mb 6000 --dry-run

# Show cluster capacity
apr train cluster-status --cluster cluster.yaml
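For context, a cluster.yaml along the lines used above might be deserialized with serde_yaml into a config struct roughly like the following; the struct and field names are guesses for illustration, not the actual entrenar schema, and the sketch assumes serde, serde_yaml, and anyhow as dependencies:

use serde::Deserialize;
use std::path::Path;

// Guessed schema for illustration only.
#[derive(Debug, Deserialize)]
struct ClusterConfig {
    nodes: Vec<NodeConfig>,
}

#[derive(Debug, Deserialize)]
struct NodeConfig {
    name: String,
    ssh_host: Option<String>,
    gpus: Vec<GpuConfig>,
}

#[derive(Debug, Deserialize)]
struct GpuConfig {
    uuid: String,
    vram_mb: u64,
}

fn load_cluster(path: &Path) -> anyhow::Result<ClusterConfig> {
    let text = std::fs::read_to_string(path)?;
    Ok(serde_yaml::from_str(&text)?)
}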

Test plan

  • No new compilation errors (the pre-existing serde_yaml_ng issue is unrelated)
  • PMAT complexity gates pass
  • Integration test with the entrenar cluster module

🤖 Generated with Claude Code

noahgift and others added 5 commits March 4, 2026 13:29
APR CPU was 23x slower than llama.cpp because it used the F32 AprTransformer
instead of the fused Q4K engine. Now routes through OwnedQuantizedModel
(same path as GGUF/SafeTensors), achieving parity with GGUF CPU (~18 tok/s).
Wire --trace flag through to AppState.inference_trace for all serve paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add `apr gpu` command: displays GPU UUID, VRAM capacity, active
  reservations, and available budget from the entrenar VRAM ledger
- Add `apr gpu --json` for machine-readable output
- Add `--wait-gpu <SECS>` flag to `apr finetune`: polls VRAM ledger
  until sufficient budget is available (GPU-SHARE-003)
- Wire wait_gpu parameter through dispatch → finetune::run()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
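A minimal sketch of the polling behavior that --wait-gpu describes, with a placeholder standing in for the entrenar VRAM ledger query (the real ledger API is not shown here):

use std::time::{Duration, Instant};

// Placeholder for querying free budget from the entrenar VRAM ledger; the
// real API differs and is not shown here.
fn available_vram_mb() -> u64 {
    0
}

/// Poll until `needed_mb` of VRAM budget is free, or give up after
/// `wait_secs` seconds (mirroring `--wait-gpu <SECS>`).
fn wait_for_gpu(needed_mb: u64, wait_secs: u64) -> bool {
    let deadline = Instant::now() + Duration::from_secs(wait_secs);
    loop {
        if available_vram_mb() >= needed_mb {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(Duration::from_secs(1));
    }
}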
… (Refs #206)

Wire --adapters DATA:CHECKPOINT pairs through finetune command to
MultiAdapterPipeline. Parses adapter specs, loads independent corpora,
creates round-robin adapter slots on shared frozen base model.

Also fixes serde_yaml_ng → serde_yaml migration in distill and serve_plan.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
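A small sketch of the DATA:CHECKPOINT spec parsing described above; the error type and the single-colon splitting rule are assumptions, not the actual parser:

/// Parse an `--adapter DATA:CHECKPOINT` spec into its two parts.
fn parse_adapter_spec(spec: &str) -> Result<(String, String), String> {
    let (data, checkpoint) = spec
        .split_once(':')
        .ok_or_else(|| format!("expected DATA:CHECKPOINT, got '{spec}'"))?;
    if data.is_empty() || checkpoint.is_empty() {
        return Err(format!("empty data or checkpoint in '{spec}'"));
    }
    Ok((data.to_string(), checkpoint.to_string()))
}

For example, data/a.jsonl:ckpt/a from the usage section would split into the corpus path and its checkpoint directory.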
… (Refs #206)

Call save_adapter_checkpoint() at the end of each epoch for every adapter
slot. Each adapter saves metadata.json + model.safetensors to its own
checkpoint_dir/epoch-N/ independently.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
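A sketch of the per-adapter, per-epoch layout this commit describes; the helper names here are hypothetical, and the real save_adapter_checkpoint() writes metadata.json and model.safetensors rather than printing:

use std::path::{Path, PathBuf};

/// Hypothetical helper: compute checkpoint_dir/epoch-N/ for one adapter slot.
fn epoch_checkpoint_dir(checkpoint_dir: &Path, epoch: usize) -> PathBuf {
    checkpoint_dir.join(format!("epoch-{epoch}"))
}

/// At the end of an epoch, every adapter slot gets its own checkpoint dir.
fn checkpoint_all_adapters(adapter_checkpoint_dirs: &[PathBuf], epoch: usize) {
    for dir in adapter_checkpoint_dirs {
        let out = epoch_checkpoint_dir(dir, epoch);
        // The real code calls save_adapter_checkpoint() to write
        // metadata.json + model.safetensors into `out`.
        println!("checkpoint for this adapter -> {}", out.display());
    }
}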
Implements two new train subcommands for GPU-SHARE Phase 3:
- `apr train submit --cluster cluster.yaml --model model.apr --adapter ...`
  Places adapter jobs across cluster nodes using greedy placement,
  shows launch commands (local + SSH). Supports --dry-run and --json.
- `apr train cluster-status --cluster cluster.yaml`
  Displays cluster node info, GPUs, VRAM, and adapter capacity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>