Draft: Add new providers via adapters #60
andrejvysny wants to merge 19 commits into huggingface:main
Conversation
Resolve conflicts by taking upstream for llm_params.py (will be rewritten as thin adapter dispatcher) and main.py (model_switcher extraction supersedes our changes). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Split NativeAdapter into AnthropicAdapter (thinking config + output_config.effort) and OpenAIAdapter (reasoning_effort top-level). Each adapter owns its accepted effort set and raises UnsupportedEffortError in strict mode, preserving the effort_probe cascade with zero changes to effort_probe.py or agent_loop.py. llm_params.py becomes a thin dispatcher delegating to resolve_adapter().build_params() while keeping the litellm effort-validation patch and re-exporting UnsupportedEffortError. model_switcher.py reads suggested models from the adapter registry instead of maintaining a separate SUGGESTED_MODELS list. backend/routes/agent.py replaces AVAILABLE_MODELS with build_model_catalog(). OpenCodeGoAdapter deferred to PR huggingface#60. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
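A minimal sketch of the dispatch pattern this commit describes. Only the names `resolve_adapter`, `build_params`, and `UnsupportedEffortError` come from the description above; the registry dict, the provider-prefix routing, and the fallback behavior are illustrative assumptions, not the PR's actual implementation.

```python
# Hypothetical sketch: per-provider adapters that each own their accepted
# effort set and raise UnsupportedEffortError in strict mode.

class UnsupportedEffortError(ValueError):
    """Raised in strict mode when a model does not accept the requested effort."""

class AnthropicAdapter:
    accepted_efforts = {"low", "medium", "high"}

    def build_params(self, effort: str, strict: bool = False) -> dict:
        if effort not in self.accepted_efforts:
            if strict:
                raise UnsupportedEffortError(effort)
            effort = "medium"  # illustrative non-strict fallback
        # Anthropic-style: thinking config plus output_config.effort
        return {"thinking": {"type": "enabled"}, "output_config": {"effort": effort}}

class OpenAIAdapter:
    accepted_efforts = {"low", "medium", "high"}

    def build_params(self, effort: str, strict: bool = False) -> dict:
        if effort not in self.accepted_efforts:
            if strict:
                raise UnsupportedEffortError(effort)
            effort = "medium"
        # OpenAI-style: top-level reasoning_effort
        return {"reasoning_effort": effort}

# Hypothetical registry keyed by the provider prefix of a litellm model id.
ADAPTERS = {"anthropic": AnthropicAdapter(), "openai": OpenAIAdapter()}

def resolve_adapter(model: str):
    provider = model.split("/", 1)[0]
    return ADAPTERS[provider]

params = resolve_adapter("openai/gpt-4o").build_params("high")
```

Under this shape, llm_params.py stays a thin pass-through: it resolves the adapter and delegates, rather than branching on provider names itself.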
Force-pushed from fb54391 to 5d357ba
Restore the existing web model behavior so PR huggingface#55 stays a behavior-preserving refactor while keeping shared runtime and CLI validation logic.
Classify auth, credits, and missing-model failures once so the CLI, model switcher, and health checks show clean user-facing errors instead of raw provider traces.
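One way to classify failures once for all three consumers is a single function mapping provider exceptions to a small enum. This is a sketch under assumptions: the error categories (auth, credits, missing model) come from the commit message, but the function name, enum, and string-matching heuristics are hypothetical.

```python
# Hypothetical sketch of one-place provider-error classification so the
# CLI, model switcher, and health checks can render clean messages
# instead of raw provider traces.
from enum import Enum

class ProviderErrorKind(Enum):
    AUTH = "auth"
    CREDITS = "credits"
    MISSING_MODEL = "missing_model"
    UNKNOWN = "unknown"

def classify_provider_error(exc: Exception) -> ProviderErrorKind:
    msg = str(exc).lower()
    if "unauthorized" in msg or "api key" in msg or "401" in msg:
        return ProviderErrorKind.AUTH
    if "quota" in msg or "insufficient credits" in msg or "402" in msg:
        return ProviderErrorKind.CREDITS
    if "model not found" in msg or "does not exist" in msg or "404" in msg:
        return ProviderErrorKind.MISSING_MODEL
    return ProviderErrorKind.UNKNOWN
```

Each consumer then switches on the enum to pick its own user-facing wording, while the raw exception can still be logged for debugging.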
Heads up: this PR builds on #55 and expands the adapter-based provider layer with additional providers / routing support. Overlap notes:
If the final version of this PR includes Ollama/local model support directly, it may also supersede the earlier overlapping attempts in #44 and #47.
Verdict: DRAFT-WAIT · Risk: MED · State: Draft, depends on #55. Explicitly Draft and stacked on #55, which is still open. Don't merge until #55 lands and this is rebased onto the result.
# Conflicts:
#	agent/core/llm_params.py
Incorporate the Bedrock adapter, error handling for missing adapters, the upstream main merge, and cleanup from PR #55. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Native LiteLLM adapter with reasoning_effort support (low/medium/high), minimal->low normalization, and suggested models (2.5 Pro, Flash). Supersedes the PR #95 approach with a proper adapter pattern. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
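The minimal->low normalization mentioned above could be as simple as an alias table applied before validation. The mapping dict and function name here are hypothetical, shown only to illustrate the behavior the commit message describes.

```python
# Illustrative sketch of "minimal -> low" effort normalization:
# unknown aliases are mapped first, then the value is validated.
EFFORT_ALIASES = {"minimal": "low"}
ACCEPTED_EFFORTS = {"low", "medium", "high"}

def normalize_effort(effort: str) -> str:
    effort = EFFORT_ALIASES.get(effort, effort)
    if effort not in ACCEPTED_EFFORTS:
        raise ValueError(f"unsupported effort: {effort}")
    return effort
```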
@claude please review this PR. Focus on correctness, security, LiteLLM/Bedrock routing impact, and whether it's safe to merge against current main. Keep it concise and prefer inline comments where it matters.
Claude finished @akseljoonas's task in 3m 5s. PR Review: Add new providers via adapters

Summary
Verification
python3 -m py_compile agent/core/provider_adapters.py agent/core/llm_params.py
uv run --with pytest pytest tests/test_provider_adapters.py