fix: rewrite Copilot driver with OAuth device flow authentication #1017
Merged
jaberjaber23 merged 9 commits into RightNow-AI:main on Apr 10, 2026
Conversation
The Copilot LLM driver was broken: it expected users to provide a GITHUB_TOKEN env var, but no standard token type (PAT, gh CLI token) works with the Copilot token exchange endpoint.

Changes:
- Full rewrite of copilot.rs with OAuth device flow using Copilot's client ID (Iv1.b507a08c87ecfe98)
- Three-layer token chain: ghu_ (8h) -> Copilot API token (30min), with automatic caching and refresh
- Dynamic model fetching from the Copilot API on daemon startup and on model_not_supported errors
- Init wizard: TUI auth screen with device code display, live model picker after authentication
- set-key command: interactive device flow for the github-copilot provider
- Doctor: detects Copilot auth via the persisted token file
- Removed static Copilot model entries (now fetched dynamically)
- Simplified driver instantiation (no env vars needed)

Tested end-to-end with Copilot Enterprise: auth, token exchange, 43 models fetched, completions working with claude-opus-4.6-1m.

Closes RightNow-AI#1014

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
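The device-flow polling behind this commit can be sketched as a small decision function. The error codes (`authorization_pending`, `slow_down`) come from the OAuth 2.0 device grant (RFC 8628), which GitHub's token endpoint follows; the type and function names here are hypothetical, not OpenFang's actual API, and the HTTP calls themselves are left out.

```rust
/// Outcome of one poll of GitHub's token endpoint during the OAuth
/// device flow (RFC 8628). Control flow only; the HTTP request that
/// produces `access_token`/`error` is omitted.
pub enum PollOutcome {
    /// Authorized: we received a `ghu_` access token.
    Token(String),
    /// Not yet authorized: poll again after `interval` seconds.
    Retry { interval: u64 },
    /// The flow failed and must be restarted.
    Fail(String),
}

/// Interpret one token-endpoint response. GitHub signals a pending
/// authorization through the `error` field, not the HTTP status.
pub fn interpret_poll(
    access_token: Option<&str>,
    error: Option<&str>,
    current_interval: u64,
) -> PollOutcome {
    match (access_token, error) {
        (Some(tok), _) => PollOutcome::Token(tok.to_string()),
        // The user has not finished authorizing in the browser yet.
        (None, Some("authorization_pending")) => PollOutcome::Retry {
            interval: current_interval,
        },
        // We polled too fast; RFC 8628 says to add 5 seconds.
        (None, Some("slow_down")) => PollOutcome::Retry {
            interval: current_interval + 5,
        },
        // expired_token, access_denied, and so on.
        (None, Some(e)) => PollOutcome::Fail(e.to_string()),
        (None, None) => PollOutcome::Fail("empty response".into()),
    }
}
```

The driver's polling loop would call this after each request and either sleep for `interval` seconds, return the `ghu_` token for the next exchange step, or surface the failure to the user.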
…pilot proxy

The Copilot API proxy rejects conversations ending with an empty assistant message as unsupported 'assistant message prefill' when proxying Claude and Gemini models; GPT models are unaffected. Strips trailing empty assistant messages (no content, no tool calls) before sending the request, in both the complete() and stream() paths.

Also reverts the unused fixup_request method from copilot.rs, since the fix belongs in the OpenAI driver layer.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…stant Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The Copilot API proxy can sometimes deliver streaming tool call chunks without a function name, resulting in empty-name tool calls stored in conversation history. When replayed to the API, these cause 'tool call must have a tool call ID and function name' errors. Skip malformed tool calls (empty ID or name) during streaming response finalization and log a warning. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
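The filtering this commit describes can be sketched as follows. The `ToolCall` struct is a hypothetical stand-in; the real streamed tool-call type in openai.rs will differ.

```rust
/// Hypothetical stand-in for the driver's streamed tool-call type.
#[derive(Debug)]
pub struct ToolCall {
    pub id: String,
    pub name: String,
    pub arguments: String,
}

/// During streaming finalization, keep only well-formed tool calls.
/// A call missing its ID or function name would be rejected on replay
/// with "tool call must have a tool call ID and function name".
pub fn finalize_tool_calls(calls: Vec<ToolCall>) -> Vec<ToolCall> {
    calls
        .into_iter()
        .filter(|c| {
            let ok = !c.id.is_empty() && !c.name.is_empty();
            if !ok {
                // Drop the malformed call but leave a trace for debugging.
                eprintln!("warning: skipping malformed tool call: {c:?}");
            }
            ok
        })
        .collect()
}
```

Filtering at finalization time means malformed calls never reach conversation history, so later replays to the API stay valid.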
Strengthens the strip to remove any trailing assistant message (not just empty ones) when it has no tool_calls. The Copilot proxy for Claude rejects conversations ending with any assistant message as unsupported prefill. This fixes the Telegram bot channel where the agent loop appends an assistant message with content. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The aggressive strip (all trailing assistant messages) caused infinite agent loops by removing non-empty responses the agent loop needs. Reverted to only strip truly empty assistant messages (no content, no tool_calls). The Telegram prefill issue needs a different fix. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The Copilot proxy for Claude enforces Anthropic's rule that conversations must end with a user message. For Claude models, strip any trailing assistant message without tool_calls (including non-empty ones). For other models, only strip truly empty assistant messages. This fixes the 'assistant message prefill not supported' error seen in Telegram and other channel adapters when using Claude via Copilot, without causing infinite agent loops for other models. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The model-aware assistant strip caused infinite agent loops for Claude. Reverted to the empty-only strip, which is safe for all models. The Telegram prefill issue needs to be fixed in the agent loop, not the driver.

Remaining openai.rs changes:
- strip_trailing_empty_assistant: strips truly empty trailing messages
- Skip tool calls with an empty ID or name from streaming responses

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
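The empty-only strip the commits above settled on can be sketched like this. The `Message` shape is assumed for illustration; the real openai.rs types will differ.

```rust
/// Hypothetical message shape for the sketch.
#[derive(Debug)]
pub struct Message {
    pub role: String,
    pub content: String,
    pub tool_calls: Vec<String>, // tool-call IDs, simplified
}

/// Strip only *truly empty* trailing assistant messages: no content
/// and no tool calls. The more aggressive variants tried earlier in
/// this PR removed responses the agent loop needed and caused
/// infinite loops, so anything non-empty must stay.
pub fn strip_trailing_empty_assistant(messages: &mut Vec<Message>) {
    while messages.last().map_or(false, |m| {
        m.role == "assistant" && m.content.trim().is_empty() && m.tool_calls.is_empty()
    }) {
        messages.pop();
    }
}
```

Running this in both the complete() and stream() paths keeps the Copilot proxy's prefill check satisfied without touching real assistant turns.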
Summary
Complete rewrite of the GitHub Copilot LLM driver to fix broken authentication and add proper OAuth device flow support.
Before: the driver expected a `GITHUB_TOKEN` env var, but no standard token type (PAT, fine-grained PAT, `gh` CLI token) works with the Copilot token exchange endpoint. Users couldn't use Copilot at all.

After: zero-config authentication via OAuth device flow. The user runs `openfang config set-key github-copilot` or selects Copilot during `openfang init`, authorizes in their browser, and it just works.

What changed
Driver (`copilot.rs`) — full rewrite

- OAuth device flow using the VS Code Copilot client ID (`Iv1.b507a08c87ecfe98`)
- Token chain: `ghu_` access token → Copilot API token (`tid=...`), both cached automatically
- Tokens persisted to `~/.openfang/.copilot-tokens.json` (0600 permissions)
- Sends `Editor-Version` / `Editor-Plugin-Version` / `Copilot-Integration-Id` headers
- API base taken from the token's `endpoints.api` — works with Individual, Business, and Enterprise plans

Dynamic model catalog
- Fetches `{endpoints.api}/models` on daemon startup
- Re-fetches on a `model_not_supported` error (retry once with the updated list)
- Removed the `copilot/gpt-4o` and `copilot/gpt-4` static entries
- Added `merge_discovered_models()` to populate the catalog at runtime

Init wizard (`init_wizard.rs`)

- New `CopilotAuth` TUI step between provider selection and model picker
- Shows the device code; press `[Enter]` to open the browser (doesn't auto-open)

CLI integration
- `openfang config set-key github-copilot` — runs the interactive device flow
- `openfang doctor` — checks for the Copilot auth token file, shows ✔ if authenticated
- Simplified driver instantiation in `mod.rs` — no env vars needed

Model catalog cleanup

- Removed the remaining static entries (the `copilot` → `gpt-4o` alias)

Testing
Tested end-to-end on Windows (x64 cross-compiled) with GitHub Copilot Enterprise:
- Device flow auth and token exchange (`copilot_internal/v2/token`)
- `openfang init` full wizard flow
- `openfang config set-key github-copilot`
- `openfang doctor`
- `openfang start` (daemon boot with model fetch)
- `/model` switcher in the chat TUI

Why not use the Copilot SDK?
The official `@github/copilot-sdk` is TypeScript/Node and spawns the Copilot CLI as a subprocess. OpenFang is Rust and only needs Copilot as a completion endpoint. Direct API integration is simpler and carries no Node.js dependency.

Why this client ID?
`Iv1.b507a08c87ecfe98` is the VS Code Copilot extension's OAuth App client ID. The `copilot_internal/v2/token` endpoint only accepts `ghu_` tokens from this specific app — custom OAuth Apps produce `gho_` tokens, which are rejected with a 404. This is the same approach used by shell-ask, aider, continue.dev, and every other non-SDK Copilot integration.

Closes #1014
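As a closing sketch of the caching behind the token exchange: the lifetimes come from the PR description (`ghu_` ≈ 8 h, Copilot API token ≈ 30 min), but the types, the 60-second safety margin, and the function names here are assumptions, not OpenFang's actual API. The exchange call itself is stubbed out as a closure.

```rust
/// A cached token plus its expiry, for either layer of the chain.
pub struct CachedToken {
    pub value: String,
    /// Unix timestamp after which the token must be refreshed.
    pub expires_at: u64,
}

impl CachedToken {
    /// Treat tokens as stale slightly early so an in-flight request
    /// never races the real expiry (60 s margin, an assumption).
    pub fn is_fresh(&self, now: u64) -> bool {
        now + 60 < self.expires_at
    }
}

/// Return a usable Copilot API token, calling `exchange`
/// (ghu_ -> copilot_internal/v2/token) only when the cache is stale.
pub fn get_api_token(
    cache: &mut Option<CachedToken>,
    now: u64,
    exchange: impl FnOnce() -> CachedToken,
) -> String {
    if !cache.as_ref().map_or(false, |t| t.is_fresh(now)) {
        *cache = Some(exchange());
    }
    cache.as_ref().unwrap().value.clone()
}
```

The same pattern applies one level up for the `ghu_` token, with the device flow as the refresh step instead of the exchange endpoint.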