
hir-ty: add incremental benchmarks#19934

Closed
davidbarsky wants to merge 1 commit into rust-lang:master from davidbarsky:davidbarsky/add-some-incremental-benchmarks

Conversation

@davidbarsky (Contributor)

We didn't have these before, but we probably should. The only benchmarks here are for trait solving, but I can extend this to infer as well.

(Builds atop #19914.)

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Jun 5, 2025
@davidbarsky davidbarsky requested a review from Veykril June 5, 2025 18:34
@ChayimFriedman2 (Contributor) left a comment

I'm quite confused. What is the point of those benchmarks? If we want to test incrementality, we should check salsa's logs. But it looks like you want to measure raw speed, why?

signatures::VariantFields,
};

pub use crate::db::DefDatabase;
Contributor:

The db module is public, no need for this.

Contributor (Author):
whoops, my bad.


#[salsa_macros::db]
#[derive(Clone)]
pub(crate) struct BenchmarkDB {
Contributor:
Maybe we need to do something about all the boilerplate involved in declaring a db.

Member:
Can't we reuse the TestDB? Why do we need a new type?

fn benchmark_incremental_struct_addition(c: &mut Criterion) {
c.bench_function("incremental_struct_addition", |b| {
b.iter_batched(
|| {
Contributor:
I'm afraid the cost of the setup will dwarf the cost of the benchmarked computation, preventing criterion from accurately measuring the time.

Contributor (Author):
The first closure, thankfully, is just the setup for the second closure, which is the actually benchmarked computation.

Contributor:
I know that; what I meant is that the setup is too costly relative to the actual computation, and it runs on every iteration, so it can make the benchmarking inaccurate.

Contributor (Author):
Oh, I’m sorry: I misunderstood. I have noticed this is somewhat noisy, but I chalked it up to my work-managed computer being bad.
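For readers unfamiliar with criterion's batched iteration, the setup/measurement split being discussed can be sketched with a std-only toy. This is not criterion's real API (that lives in the `criterion` crate); `iter_batched_toy` is an invented name, and real criterion additionally batches inputs and performs statistical analysis:

```rust
use std::time::{Duration, Instant};

/// Toy version of the `iter_batched` pattern: `setup` runs before the
/// clock starts, so only `routine` is timed. The concern above is that
/// even untimed, an expensive per-iteration setup adds noise and limits
/// how many samples fit in the measurement budget.
fn iter_batched_toy<S, R, I, O>(iters: u32, mut setup: S, mut routine: R) -> Duration
where
    S: FnMut() -> I,
    R: FnMut(I) -> O,
{
    let mut total = Duration::ZERO;
    for _ in 0..iters {
        let input = setup(); // untimed: runs on every iteration
        let start = Instant::now();
        let out = routine(input); // timed: the actual computation
        total += start.elapsed();
        std::hint::black_box(out); // keep the result alive past the timer
    }
    total
}

fn main() {
    // Setup builds a vector (relatively expensive); the routine only sums it.
    let timed = iter_batched_toy(
        100,
        || (0u64..10_000).collect::<Vec<_>>(),
        |v| v.iter().sum::<u64>(),
    );
    println!("timed only the routine: {timed:?}");
}
```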

@davidbarsky (Contributor, Author)

> I'm quite confused. What is the point of those benchmarks? If we want to test incrementality, we should check salsa's logs. But it looks like you want to measure raw speed, why?

On Salsa, we've had success with http://codspeed.io/, and I wanted to set up proper, useful benchmarks that measure incremental performance. The examples I have here are nowhere near sufficient for reasonable performance measurements, but this contains some useful bones.
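To make the "check salsa's logs" point concrete: incrementality is usually verified by counting which derived queries re-execute after an edit, not by wall-clock time. The following is a toy model, not salsa's actual API; `Db`, `len_query`, and the manual per-key cache are invented for illustration (real salsa tracks dependencies and revisions automatically):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

/// Toy query database: inputs are set explicitly, derived values are
/// memoized, and every query execution is appended to a log.
struct Db {
    inputs: HashMap<&'static str, String>,
    cache: RefCell<HashMap<&'static str, usize>>,
    executions: RefCell<Vec<&'static str>>, // the "log"
}

impl Db {
    fn set_input(&mut self, key: &'static str, text: String) {
        self.inputs.insert(key, text);
        self.cache.borrow_mut().remove(key); // invalidate the derived value
    }

    /// Derived query: length of the file, memoized per input key.
    fn len_query(&self, key: &'static str) -> usize {
        if let Some(&v) = self.cache.borrow().get(key) {
            return v; // cache hit: no log entry
        }
        self.executions.borrow_mut().push(key);
        let v = self.inputs[key].len();
        self.cache.borrow_mut().insert(key, v);
        v
    }
}

fn main() {
    let mut db = Db {
        inputs: HashMap::new(),
        cache: RefCell::new(HashMap::new()),
        executions: RefCell::new(Vec::new()),
    };
    db.set_input("a.rs", "struct A;".into());
    db.set_input("b.rs", "struct B;".into());
    db.len_query("a.rs");
    db.len_query("b.rs");
    db.set_input("a.rs", "struct A2;".into()); // edit one file
    db.len_query("a.rs");
    db.len_query("b.rs"); // cache hit: "b.rs" is not recomputed
    // Incrementality check: "a.rs" ran twice, "b.rs" once.
    println!("{:?}", db.executions.borrow()); // ["a.rs", "b.rs", "a.rs"]
}
```

A benchmark, by contrast, measures how long the re-executed queries take, which is the complementary question this PR is after.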

@davidbarsky davidbarsky force-pushed the davidbarsky/add-some-incremental-benchmarks branch from d8b8ad1 to b926d5d on June 5, 2025 19:51
@Veykril (Member) left a comment

I think the source snippets here are likely way too small/simple to give us any interesting numbers.

@davidbarsky (Contributor, Author)

> I think the source snippets here are likely way too small/simple to give us any interesting numbers

Didn't see this, sorry! At what size do you think the snippet would be sufficiently large? I imagine we'd want to keep it to one file, which would involve inlining dependencies if we're to use anything existing...

@Veykril (Member) commented Jul 4, 2025

Not sure; it doesn't need to be massive, I think, but it likely wants to at least exercise some general type-system features like coercion, autoderef, method lookups with multiple candidates, etc. Plain arithmetic on primitives doesn't do much, especially given that those traits on primitives have short-circuiting logic, IIRC.
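As a rough illustration of the kind of benchmark input being suggested here, a snippet might exercise deref coercion, autoderef, and method lookup with competing inherent/trait candidates. This exact snippet is not from the PR; it is a minimal invented example of those three features:

```rust
// Trait with a default method, so `Wrapper` has two `describe` candidates.
trait Describe {
    fn describe(&self) -> String {
        "trait impl".to_string()
    }
}

struct Wrapper(Vec<String>);

impl Wrapper {
    // Inherent method: wins over the trait-provided one during lookup.
    fn describe(&self) -> String {
        format!("{} items", self.0.len())
    }
}

impl Describe for Wrapper {}

fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned = String::from("hello");
    // Deref coercion: &String coerces to &str at the call site.
    let n = takes_str(&owned);

    let boxed: Box<Wrapper> = Box::new(Wrapper(vec![owned]));
    // Autoderef: lookup walks Box<Wrapper> -> Wrapper and picks the
    // inherent `describe` over the trait default.
    let d = boxed.describe();
    println!("{n} {d}"); // prints: 5 1 items
}
```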

@Veykril (Member) commented Jul 4, 2025

In fact, it would probably make more sense to just have these live in the main r-a crate as benchmarks that properly load up a small project?

@rustbot (Collaborator) commented Mar 12, 2026

☔ The latest upstream changes (possibly #21804) made this pull request unmergeable. Please resolve the merge conflicts.

@Veykril Veykril closed this Mar 12, 2026
@rustbot rustbot removed the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Mar 12, 2026

4 participants