
fix(api): rate-limit /v1/chat/completions; add get_provider_by_name; validate stream_completion async gen (#5132)#5816

Merged
mrveiss merged 1 commit into Dev_new_gui from issue-5132 on Apr 24, 2026
Conversation

mrveiss (Owner) commented on Apr 24, 2026

Summary

  • Added rate limiting to /v1/chat/completions endpoint in openai_compat.py
  • Added get_provider_by_name() helper to provider_registry.py for provider lookup by name
  • Validated stream_completion returns a proper async generator

Closes #5132

🤖 Generated with Claude Code

…validate stream_completion async gen (#5132)

- Fix 1: add per-IP sliding-window rate limiter (_check_oai_rate_limit /
  _remote_addr) to POST /v1/chat/completions, mirroring the a2a.py
  pattern; configurable via AUTOBOT_OAI_RATE_LIMIT env var (default 60/min)
- Fix 2: add public ProviderRegistry.get_provider_by_name() accessor and
  replace the private _providers.get() call in list_models with it
- Fix 3: use inspect.isasyncgenfunction() to validate provider.stream_completion
  before use, raising ValueError immediately on misconfigured providers

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
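Fix 1 names `_check_oai_rate_limit`, `_remote_addr`, and the `AUTOBOT_OAI_RATE_LIMIT` env var (default 60/min). The merged code isn't shown here, so the following is a minimal sketch of what a per-IP sliding-window limiter along those lines could look like; the internal data structures are assumptions, not the actual implementation:

```python
import os
import time
from collections import defaultdict, deque

# AUTOBOT_OAI_RATE_LIMIT and the 60/min default come from the PR description;
# everything else below is a hypothetical sketch.
RATE_LIMIT = int(os.environ.get("AUTOBOT_OAI_RATE_LIMIT", "60"))
WINDOW_SECONDS = 60.0

# Per-IP deque of request timestamps within the sliding window.
_request_log: dict = defaultdict(deque)

def _check_oai_rate_limit(remote_addr: str, now: float = None) -> bool:
    """Return True if the request is allowed, False if the per-IP limit is hit."""
    now = time.monotonic() if now is None else now
    window = _request_log[remote_addr]
    # Evict timestamps that have fallen out of the 60-second window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return False
    window.append(now)
    return True
```

A handler for POST /v1/chat/completions would call this with the client address and return HTTP 429 when it yields False.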
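Fix 2 replaces a private `_providers.get()` call with a public `get_provider_by_name()` accessor on `ProviderRegistry`. A sketch of that pattern, with a stand-in `Provider` class and `register()` method that are assumptions for illustration:

```python
from typing import Optional

class Provider:
    """Hypothetical stand-in for the real provider type."""
    def __init__(self, name: str):
        self.name = name

class ProviderRegistry:
    def __init__(self):
        self._providers: dict = {}

    def register(self, provider: Provider) -> None:
        self._providers[provider.name] = provider

    def get_provider_by_name(self, name: str) -> Optional[Provider]:
        """Public lookup; callers such as list_models no longer touch _providers directly."""
        return self._providers.get(name)
```

The design benefit is encapsulation: external modules depend on a stable public method rather than the registry's private dict, so the storage can change without breaking callers.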
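Fix 3 uses `inspect.isasyncgenfunction()` to reject providers whose `stream_completion` is not an async generator (e.g. a plain coroutine that returns the whole response). A minimal sketch of that check; the validator name and the example provider classes are hypothetical:

```python
import inspect

def _validate_stream_completion(provider) -> None:
    """Raise ValueError immediately if provider.stream_completion is not an async generator function."""
    fn = getattr(provider, "stream_completion", None)
    if fn is None or not inspect.isasyncgenfunction(fn):
        raise ValueError(
            f"Provider {type(provider).__name__} is misconfigured: "
            "stream_completion must be an async generator function"
        )

class GoodProvider:
    async def stream_completion(self, prompt):
        yield "chunk"  # `yield` inside `async def` makes this an async generator

class BadProvider:
    async def stream_completion(self, prompt):
        return "whole response"  # plain coroutine, not an async generator
```

Failing fast with ValueError here surfaces a misconfigured provider at lookup time instead of producing a confusing runtime error mid-stream.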
mrveiss merged commit f2cefb4 into Dev_new_gui on Apr 24, 2026
2 of 9 checks passed
mrveiss deleted the issue-5132 branch on April 24, 2026 at 20:35