
Fix(rng): use fixed seed for deterministic LoRA init#141

Merged
kilinchange merged 1 commit into master from lora_loss
Apr 10, 2026
Conversation

@chen2021673
Contributor

Replace std::random_device with 42 + omp_get_thread_num() to ensure reproducible LoRA initialization across runs.

nn::lora::PrintLoRASummary(model, rank.GlobalRank());
}

model->To(device);
Collaborator


Is this move because normal only has a CPU implementation, so it has to be initialized on the CPU first? But why wasn't that needed before?

Contributor Author


The CPU RNG and GPU RNG are two separate states, and the sampling device differs as well, so the two paths produce different LoRA initial values. Since it is the CPU RNG that we fix, LoRA initialization has to be done on the CPU.

Collaborator


But our current normal implementation already generates on the CPU first and then copies to the GPU (when the input is a GPU tensor), so in theory this move shouldn't make any difference right now, should it?

Contributor Author


True, I tested it and it makes no difference. But I think the change is worth keeping: semantically, doing all initialization on the CPU is also reasonable.

Collaborator


In theory we should fill in a CUDA normal implementation here, so either CPU or CUDA initialization would be fine. I suggest this PR avoid introducing unnecessary changes; once we abstract out a Generator later, we can revisit where this belongs.

Contributor Author


ok

Replace std::random_device with 42 + omp_get_thread_num() to ensure reproducible LoRA initialization across runs.
@kilinchange kilinchange merged commit c784bf1 into master Apr 10, 2026
2 checks passed
@kilinchange kilinchange deleted the lora_loss branch April 10, 2026 02:40