GPU-SHARE Phase 3: train submit + cluster-status CLI#401
Open
Conversation
APR CPU inference was 23x slower than llama.cpp because it used the F32 AprTransformer instead of the fused Q4K engine. It now routes through OwnedQuantizedModel (the same path as GGUF/SafeTensors), achieving parity with GGUF CPU (~18 tok/s). Also wires the `--trace` flag through to AppState.inference_trace for all serve paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add `apr gpu` command: displays GPU UUID, VRAM capacity, active reservations, and available budget from the entrenar VRAM ledger
- Add `apr gpu --json` for machine-readable output
- Add `--wait-gpu <SECS>` flag to `apr finetune`: polls the VRAM ledger until sufficient budget is available (GPU-SHARE-003)
- Wire the wait_gpu parameter through dispatch → finetune::run()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
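The `--wait-gpu` behavior above can be sketched as a simple poll-until-deadline loop. This is an illustrative sketch, not the actual apr implementation: `wait_for_vram` and `budget_fn` are hypothetical names standing in for the real VRAM-ledger query.

```rust
use std::time::{Duration, Instant};

/// Poll until `budget_fn()` reports at least `needed_mb` of free VRAM,
/// or `wait_secs` elapses. Returns true if the budget became available.
fn wait_for_vram<F>(mut budget_fn: F, needed_mb: u64, wait_secs: u64) -> bool
where
    F: FnMut() -> u64,
{
    let deadline = Instant::now() + Duration::from_secs(wait_secs);
    loop {
        if budget_fn() >= needed_mb {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        // Sleep between ledger polls to avoid a busy loop.
        std::thread::sleep(Duration::from_millis(200));
    }
}

fn main() {
    // Simulated ledger: 2 GB free on the first poll, 12 GB afterwards.
    let mut polls = 0u32;
    let ok = wait_for_vram(
        || {
            polls += 1;
            if polls == 1 { 2048 } else { 12288 }
        },
        8192, // need 8 GB
        5,    // wait up to 5 seconds
    );
    println!("budget available: {}", ok);
}
```

Checking the budget before checking the deadline means a `--wait-gpu 0` call still succeeds immediately if the budget is already free.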
(Refs #206) Wire --adapters DATA:CHECKPOINT pairs through the finetune command to MultiAdapterPipeline. Parses adapter specs, loads independent corpora, and creates round-robin adapter slots on a shared frozen base model. Also fixes the serde_yaml_ng → serde_yaml migration in distill and serve_plan.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… (Refs #206) Call save_adapter_checkpoint() at the end of each epoch for every adapter slot. Each adapter saves metadata.json + model.safetensors to its own checkpoint_dir/epoch-N/ independently.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
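The per-epoch checkpoint layout described above can be sketched as follows. This is a hypothetical sketch: the helper name mirrors `save_adapter_checkpoint()` from the commit, but the body (a minimal metadata.json only, no model.safetensors) and paths are illustrative, not the real pipeline code.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Create checkpoint_dir/epoch-N/ and write a minimal metadata.json into it.
/// (The real pipeline also writes model.safetensors alongside it.)
fn save_adapter_checkpoint(checkpoint_dir: &Path, epoch: u32) -> std::io::Result<PathBuf> {
    let dir = checkpoint_dir.join(format!("epoch-{}", epoch));
    fs::create_dir_all(&dir)?;
    let metadata = format!("{{\"epoch\": {}}}", epoch);
    fs::write(dir.join("metadata.json"), metadata)?;
    Ok(dir)
}

fn main() -> std::io::Result<()> {
    // Each adapter slot gets its own checkpoint_dir; here we demo one slot
    // saving three epochs under a temp directory.
    let base = std::env::temp_dir().join("adapter-ckpt-demo");
    for epoch in 0..3 {
        let dir = save_adapter_checkpoint(&base, epoch)?;
        println!("saved {}", dir.display());
    }
    Ok(())
}
```

Writing each epoch into its own `epoch-N/` directory keeps checkpoints independent, so any epoch can be resumed or served without touching the others.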
Implements two new train subcommands for GPU-SHARE Phase 3:

- `apr train submit --cluster cluster.yaml --model model.apr --adapter ...`: places adapter jobs across cluster nodes using greedy placement and shows launch commands (local + SSH). Supports --dry-run and --json.
- `apr train cluster-status --cluster cluster.yaml`: displays cluster node info, GPUs, VRAM, and adapter capacity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
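The greedy placement used by `apr train submit` can be sketched as a first-fit pass over the cluster's free VRAM. This is an illustrative sketch, assuming a per-node VRAM budget; `Node` and `place_jobs` are hypothetical names, not the actual apr types.

```rust
#[derive(Debug)]
struct Node {
    name: String,
    free_vram_mb: u64,
}

/// Greedily place each adapter job (VRAM cost in MB) on the first node with
/// enough remaining budget, decrementing that node's budget as jobs land.
/// Returns (job index, node name) pairs; jobs that fit nowhere are skipped.
fn place_jobs(nodes: &mut [Node], jobs: &[u64]) -> Vec<(usize, String)> {
    let mut placements = Vec::new();
    for (i, &cost) in jobs.iter().enumerate() {
        if let Some(node) = nodes.iter_mut().find(|n| n.free_vram_mb >= cost) {
            node.free_vram_mb -= cost;
            placements.push((i, node.name.clone()));
        }
    }
    placements
}

fn main() {
    let mut nodes = vec![
        Node { name: "node-a".into(), free_vram_mb: 8000 },
        Node { name: "node-b".into(), free_vram_mb: 16000 },
    ];
    // Three adapter jobs of 6 GB, 6 GB, and 10 GB.
    for (job, node) in place_jobs(&mut nodes, &[6000, 6000, 10000]) {
        println!("job {} -> {}", job, node);
    }
}
```

First-fit greedy placement is simple and fast but not optimal; a real scheduler might sort jobs by size or balance load across nodes instead.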
Summary
GPU-SHARE Phase 3 CLI: two new `apr train` subcommands for cluster management.

- `apr train submit`: places adapter jobs across cluster nodes using a greedy placement algorithm and generates local and SSH launch commands. Supports `--dry-run` and `--json` output.
- `apr train cluster-status`: displays cluster configuration including nodes, GPUs, VRAM, and adapter capacity.

Files changed
- `train_commands.rs` — added `Submit` and `ClusterStatus` variants
- `dispatch_analysis.rs` — added match arms in `dispatch_train_command()`
- `commands/train.rs` — `run_submit()` and `run_cluster_status()` implementations

Usage
Test plan
🤖 Generated with Claude Code