
[Optimization] Use triton qk_norm both in Prefill and Decode.#7213

Merged
zhoutianzi666 merged 1 commit into PaddlePaddle:develop from K11OntheBoat:fused_rms_triton
Apr 10, 2026

Conversation

@K11OntheBoat
Collaborator

Motivation

Use the fused QKRMSNorm operator in the Prefill stage as well. For some models the affected kernels speed up 2–7x, and models with large Prefill bubbles see roughly a 2x speedup per forward pass.

Modifications

Replace the scattered paddle ops with the fused QKRMSNorm operator.
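For context, QK RMSNorm normalizes the Q and K slices of the packed QKV activations per attention head while leaving V untouched; a fused kernel does this in one pass instead of the split/reshape/normalize sequence of separate paddle ops. The following is a minimal NumPy sketch of the reference semantics only — the function names, shapes, and arguments are illustrative and are not FastDeploy's actual API:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: scale by the reciprocal root-mean-square, no mean subtraction.
    x32 = x.astype(np.float32)
    rms = np.sqrt(np.mean(x32 ** 2, axis=-1, keepdims=True) + eps)
    return x32 / rms * weight

def qk_rms_norm_reference(qkv, q_weight, k_weight, num_q_heads, num_kv_heads, head_dim):
    # Split packed [tokens, (Hq + Hkv + Hkv) * D] activations, normalize Q and K
    # per head, and leave V untouched -- the behavior a fused kernel reproduces.
    q_size, kv_size = num_q_heads * head_dim, num_kv_heads * head_dim
    q, k, v = np.split(qkv, [q_size, q_size + kv_size], axis=-1)
    q = rms_norm(q.reshape(-1, num_q_heads, head_dim), q_weight).reshape(q.shape)
    k = rms_norm(k.reshape(-1, num_kv_heads, head_dim), k_weight).reshape(k.shape)
    return np.concatenate([q, k, v], axis=-1)
```

A fused kernel avoids materializing the intermediate split/reshaped tensors, which is where the per-kernel speedup comes from.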

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@K11OntheBoat K11OntheBoat marked this pull request as ready for review April 7, 2026 08:55
@paddle-bot

paddle-bot bot commented Apr 7, 2026

Thanks for your contribution!

@paddle-bot paddle-bot bot added the contributor External developers label Apr 7, 2026
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


“liuruian” seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@codecov-commenter

codecov-commenter commented Apr 7, 2026

Codecov Report

❌ Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@043f2a1). Learn more about missing BASE report.

Files with missing lines Patch % Lines
fastdeploy/model_executor/layers/normalization.py 0.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7213   +/-   ##
==========================================
  Coverage           ?   73.55%           
==========================================
  Files              ?      383           
  Lines              ?    53513           
  Branches           ?     8378           
==========================================
  Hits               ?    39359           
  Misses             ?    11408           
  Partials           ?     2746           
Flag Coverage Δ
GPU 73.55% <0.00%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.



@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-08

📋 Review Summary

PR overview: remove the step_use_cudagraph condition in QKRMSNorm so that the Triton qk_rmsnorm_fused kernel is used in both the Prefill and Decode stages, improving performance.

Scope of change: fastdeploy/model_executor/layers/normalization.py

Impact tags: [Optimization] [OP]

📝 PR Convention Check

Title: ✅ compliant; includes the [Optimization] tag

Motivation: ✅ states the optimization goal and expected performance gains

Modifications: ✅ briefly states that the fused QKRMSNorm replaces the scattered paddle ops

Checklist issues

  • ❌ "Provide accuracy results" is unchecked; no accuracy test results were provided
  • ❌ "Add unit tests" is unchecked

Issues

Level File Summary
🟡 Suggestion normalization.py:344 Consider adding large-batch accuracy verification for the Prefill stage

Overall assessment

The code change is logically clear, and qk_rmsnorm_fused is already covered by unit tests. However, since Prefill batch sizes are typically far larger than the tested range (up to 8192 in the tests, possibly larger in practice), we recommend adding a large-batch accuracy comparison to confirm the Triton kernel's numerical correctness in the Prefill scenario.

      proxy_rmsnorm=None,
  ) -> paddle.Tensor:
-     if proxy_rmsnorm is None and self.qk_norm_fused and forward_meta.step_use_cudagraph:
+     if proxy_rmsnorm is None and self.qk_norm_fused:

🟡 Suggestion: please add accuracy verification results for the Prefill stage.

Although qk_rmsnorm_fused is already covered by a unit test, that test mainly targets small batch sizes (e.g. 128). Prefill batch sizes are usually much larger (possibly 4096+), so we recommend adding a large-batch accuracy comparison to confirm the Triton kernel's numerical correctness in the Prefill scenario.
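One way to run the suggested check is a large-batch numerical parity test against a float32 reference. The sketch below is hypothetical: `fused_out` stands in for the Triton kernel's half-precision output (simulated here by rounding the reference to float16), and the shapes are illustrative, not taken from FastDeploy's test suite:

```python
import numpy as np

def rms_norm_ref(x, weight, eps=1e-6):
    # Float32 reference implementation used as ground truth.
    x32 = x.astype(np.float32)
    rms = np.sqrt(np.mean(x32 ** 2, axis=-1, keepdims=True) + eps)
    return x32 / rms * weight.astype(np.float32)

# Prefill-like shape: many tokens in one pass, far beyond a decode batch.
tokens, head_dim = 16384, 128
rng = np.random.default_rng(0)
x = rng.standard_normal((tokens, head_dim)).astype(np.float16)
w = rng.standard_normal(head_dim).astype(np.float16)

ref = rms_norm_ref(x, w)
# Hypothetical stand-in for the fused kernel's half-precision output.
fused_out = ref.astype(np.float16)

max_abs_err = np.max(np.abs(ref - fused_out.astype(np.float32)))
print(f"max abs error over {tokens} tokens: {max_abs_err:.2e}")
```

In a real parity test, `fused_out` would come from invoking the fused kernel on the same inputs, and the error bound would be chosen for the kernel's accumulation precision.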


@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-08 12:09 CST

📋 Review Summary

PR overview: remove the forward_meta.step_use_cudagraph condition from QKRMSNorm.forward() so that the prefill stage can also use the qk_rmsnorm_fused triton kernel

Scope of change: model_executor/layers/normalization.py, tests/e2e/test_Qwen3VL_serving.py

Impact tags: [Optimization] [OP]

📝 PR Convention Check

Title: ✅ compliant; includes the valid tag [Optimization]

Description: ⚠️ partial issues

  • Motivation and Modifications are filled in completely
  • Checklist items are unchecked (notably "Add unit tests" and "Provide accuracy results")
  • No concrete performance benchmark data (e.g. the benchmark results from tests/operators/test_qk_rmsnorm_fused.py)

Suggested title: no change needed

Suggested description additions:

## Accuracy Tests

- [x] Verified numerical consistency between the triton kernel and the paddle implementation with `tests/operators/test_qk_rmsnorm_fused.py`
- [ ] Provide end-to-end accuracy comparison data for the prefill stage before and after this change

## Benchmark Results

Reference results from `tests/operators/test_qk_rmsnorm_fused.py::TestQKNorm.test_qk_norm_result`:

Issues

Level File Summary
🟡 Suggestion tests/e2e/test_Qwen3VL_serving.py:176 Test-file text change should be separated from the core modification
🟡 Suggestion fastdeploy/model_executor/layers/normalization.py:344 Missing accuracy test results
❓ Question fastdeploy/model_executor/layers/normalization.py:344 Reason for the historical restriction is not explained

Overall assessment

The change is logically clear: the qk_rmsnorm_fused triton kernel supports arbitrary token counts and is suitable for both the prefill and decode stages. However, we recommend adding accuracy test data and background on the modification.


  # base result
- content2 = "视频中手机支架的颜色是黑色的。"
+ content2 = "视频中手机支架的颜色是黑色。"

🟡 Suggestion: the test-file change should be split into a separate PR.

Changing the expected result in the test file from "黑色的" to "黑色" is an independent text adjustment (dropping the particle "的") that is unrelated to this PR's core optimization (removing the step_use_cudagraph condition).

We recommend moving this text adjustment to a separate PR to keep the PR single-purpose.

      proxy_rmsnorm=None,
  ) -> paddle.Tensor:
-     if proxy_rmsnorm is None and self.qk_norm_fused and forward_meta.step_use_cudagraph:
+     if proxy_rmsnorm is None and self.qk_norm_fused:

🟡 Suggestion: missing accuracy test results.

The PR claims performance gains ("2–7x kernel speedup for some models"), but the Checklist provides no concrete accuracy test results.

Please clarify:

  1. Whether the triton kernel in the prefill stage is numerically consistent with the original paddle implementation
  2. Whether there is an end-to-end test comparing model output accuracy before and after the change

We recommend providing concrete accuracy data, for example a comparison on a standard dataset (such as MMLU).

      proxy_rmsnorm=None,
  ) -> paddle.Tensor:
-     if proxy_rmsnorm is None and self.qk_norm_fused and forward_meta.step_use_cudagraph:
+     if proxy_rmsnorm is None and self.qk_norm_fused:

❓ Question: the reason for the historical restriction is not explained.

According to the git history, commit #6080 (fix opt qknorm) added the forward_meta.step_use_cudagraph condition, restricting qk_rmsnorm_fused to the decode stage.

The PR description does not explain:

  1. Why [BugFix] Fix qk_norm optimization #6080 restricted it to the decode stage (was there a known bug or performance problem?)
  2. Whether removing the restriction is now safe, and whether the issue encountered in [BugFix] Fix qk_norm optimization #6080 has been resolved

We recommend adding this background to the PR description.


@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-08 16:43 CST

📋 Review Summary

PR overview: remove the step_use_cudagraph restriction from the fused QKRMSNorm operator so the Triton-optimized kernel can be used in both the Prefill and Decode stages

Scope of change: model_executor/layers/normalization.py, tests/e2e/test_Qwen3VL_serving.py

Impact tags: [OP] [Optimization]

📝 PR Convention Check

The PR title contains the valid [Optimization] tag, and Motivation and Modifications are filled in; the PR is compliant.

Issues

Level File Summary
🟡 Suggestion tests/e2e/test_Qwen3VL_serving.py:176 Reason for the changed test expectation is not explained

Overall assessment

The core change removes the forward_meta.step_use_cudagraph check from QKRMSNorm.forward() so that the Triton fused operator is also used in the Prefill stage. From the code logic:

  1. Correctness: qk_rmsnorm_fused is a standalone Triton kernel that does not depend on CUDA Graph, so removing the condition is safe
  2. Compatibility: the caller qwen3.py passes forward_meta while glm4_moe.py does not; both continue to work
  3. Performance: per the PR description, the change yields a 2–7x kernel speedup, with a significant effect in the Prefill stage

The main suggestion is to explain the reason for the changed test expectation.


  # base result
- content2 = "视频中手机支架的颜色是黑色的。"
+ content2 = "视频中手机支架的颜色是黑色。"

🟡 Suggestion: please explain why the test expectation changed.

The expected string was changed from "黑色的。" to "黑色。". Is this because the model output changed after enabling the fused operator, or is it merely a text adjustment? If the model output did change, please confirm this is expected and consider adding a corresponding accuracy verification test.

@zhoutianzi666 zhoutianzi666 merged commit 870dbac into PaddlePaddle:develop Apr 10, 2026
34 of 38 checks passed
EmmonsCurse pushed a commit to EmmonsCurse/FastDeploy that referenced this pull request Apr 10, 2026
Co-authored-by: “liuruian” <liuruian@baidu.com>
@EmmonsCurse
Collaborator

✅ Cherry-pick successful! Created PR: #7305

EmmonsCurse pushed a commit to EmmonsCurse/FastDeploy that referenced this pull request Apr 10, 2026
Co-authored-by: “liuruian” <liuruian@baidu.com>
@EmmonsCurse
Collaborator

✅ Cherry-pick successful! Created PR: #7306
