
fix(ai-sdk): preserve reasoning parts in message conversion #11196

Open
hannesrudolph wants to merge 3 commits into main from fix/ai-sdk-reasoning

Conversation

hannesrudolph (Collaborator) commented Feb 4, 2026

Problem

When using AI-SDK-backed providers (notably DeepSeek deepseek-reasoner) across tool-call turns, the provider may require the assistant's reasoning to be round-tripped (e.g. via reasoning_content). We were dropping that reasoning during our Anthropic → AI SDK message conversion, which can cause DeepSeek to reject follow-up requests after a tool call.

Context: DeepSeek “thinking mode” requires returning reasoning_content in subsequent requests within the same turn when tool calls are involved.

Closes #11199

What changed

1) Preserve reasoning through the AI SDK conversion layer

  • convertToAiSdkMessages() now converts:
    • our stored { type: "reasoning", text: string } content blocks → AI SDK { type: "reasoning", text } parts
    • Anthropic extended thinking blocks { type: "thinking", thinking, signature } → AI SDK { type: "reasoning", text } parts
    • message-level reasoning_content (when present) → a canonical AI SDK reasoning part (avoids duplicating with content blocks)

This keeps the AI SDK ModelMessage stream complete, so the provider package (e.g. @ai-sdk/deepseek) can decide what to send to the native API (including mapping reasoning parts back into reasoning_content).
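The mapping described above can be sketched roughly as follows. This is a hedged illustration, not the actual implementation: the block and part shapes are simplified stand-ins for the real Anthropic and AI SDK types, and `convertAssistantBlocks` is a hypothetical helper name.

```typescript
// Simplified stand-ins for stored content blocks and AI SDK message parts.
type StoredBlock =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "thinking"; thinking: string; signature?: string };

type AiSdkPart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

// Hypothetical sketch of the per-block conversion inside the
// Anthropic → AI SDK message-conversion layer.
function convertAssistantBlocks(blocks: StoredBlock[]): AiSdkPart[] {
  const parts: AiSdkPart[] = [];
  for (const block of blocks) {
    switch (block.type) {
      case "reasoning":
        // Stored { type: "reasoning", text } blocks map 1:1 to
        // AI SDK reasoning parts.
        parts.push({ type: "reasoning", text: block.text });
        break;
      case "thinking":
        // Anthropic extended-thinking blocks also become reasoning parts;
        // only the thinking text survives in this simplified shape.
        parts.push({ type: "reasoning", text: block.thinking });
        break;
      case "text":
        parts.push({ type: "text", text: block.text });
        break;
    }
  }
  return parts;
}
```

With a complete stream of reasoning parts, a provider package such as `@ai-sdk/deepseek` can then decide how to map them back onto its native request format.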

2) Store reasoning as structured content in task history (instead of <think> tags)

  • Task.addToApiConversationHistory() now persists reasoning as a dedicated first assistant content block (type: "reasoning"), rather than embedding <think>...</think> into the text.

This enables consistent round-tripping and avoids mixing reasoning into user-visible text.
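A minimal sketch of this storage change, assuming a simplified history-message shape (`buildAssistantHistoryMessage` is an illustrative helper, not the actual `Task` API):

```typescript
// Simplified shapes for persisted conversation-history messages.
type HistoryBlock = { type: "reasoning" | "text"; text: string };
type HistoryMessage = { role: "assistant"; content: HistoryBlock[] };

// Hypothetical sketch: persist reasoning as a dedicated first content
// block rather than embedding <think>...</think> into the text.
function buildAssistantHistoryMessage(
  reasoning: string | undefined,
  text: string,
): HistoryMessage {
  const content: HistoryBlock[] = [];
  if (reasoning) {
    // Reasoning comes first as its own structured block, so the
    // conversion layer can round-trip it without parsing it back
    // out of the user-visible text.
    content.push({ type: "reasoning", text: reasoning });
  }
  content.push({ type: "text", text });
  return { role: "assistant", content };
}
```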

3) Cleanup

  • Removed invalid "openai-compatible" from the API-provider reasoning allowlist (it is not a ProviderName; it’s used for code-index embedding).

Tests

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug Something isn't working labels Feb 4, 2026
roomote bot commented Feb 4, 2026


Review status: 1 issue remaining after latest commit.

  • Remove/rename the "openai-compatible" entry in the AI-SDK provider allowlist since it is not a valid ProviderName and is currently dead code.
  • Preserve reasoning_content when Anthropic.Messages.MessageParam uses string content (currently dropped in convertToAiSdkMessages).
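The second remaining issue can be illustrated with a hedged sketch: when the assistant message's `content` is a plain string rather than an array of blocks, message-level `reasoning_content` should still become a reasoning part. The shapes and the `convertStringContent` helper below are illustrative, not the actual code.

```typescript
type Part = { type: "reasoning" | "text"; text: string };

// Hypothetical handling of the string-content branch: emit the
// message-level reasoning_content (when present) as a reasoning part
// instead of dropping it.
function convertStringContent(
  content: string,
  reasoningContent?: string,
): Part[] {
  const parts: Part[] = [];
  if (reasoningContent) {
    parts.push({ type: "reasoning", text: reasoningContent });
  }
  parts.push({ type: "text", text: content });
  return parts;
}
```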


// Preserve plain-text reasoning blocks for:
// - models explicitly opting in via preserveReasoning
// - AI SDK providers (provider packages decide what to include in the native request)
const aiSdkProviders = new Set([
A collaborator commented on this snippet:
Is there a way to figure this out dynamically? I’m worried about forgetting to update this when we add them



Development

Successfully merging this pull request may close these issues.

AI SDK message conversion loses assistant reasoning/thinking content
