Fix benchmarking CI job #154
base: master
Conversation
Force-pushed from fe55f78 to 92fdcec
I accidentally added a commit to this PR that I intended to add to my other PR, so I removed it with a force push.
Codecov Report

✅ All modified and coverable lines are covered by tests.

```diff
@@            Coverage Diff             @@
##           master     #154      +/-   ##
==========================================
- Coverage   94.91%   90.86%    -4.05%
==========================================
  Files          78       77        -1
  Lines        2477     2376      -101
==========================================
- Hits         2351     2159      -192
- Misses        126      217       +91
==========================================
```

☔ View full report in Codecov by Sentry.
I reviewed the logs of the 3 failed actions, and here are my findings:
I also opened this PR on my fork, and all actions completed successfully. Here is a sample of the output of AirspeedVelocity.jl in this comment from the PR on my fork: hakkelt#3 (comment). Anyway, @lostella, what do you think about this package/solution? I could not find out what the output of the previous solution looked like, because the very old GitHub Actions runs were deleted and all the more recent runs failed.
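For readers following along, here is a minimal sketch of how such a comparison can be reproduced locally, assuming AirspeedVelocity's documented `benchpkg` CLI; the branch name `fix-benchmarks` is a placeholder, not necessarily this PR's actual branch:

```julia
# Hedged sketch: install AirspeedVelocity and compare two revisions locally.
# `benchpkg` and its `--rev` flag come from AirspeedVelocity's documentation;
# "fix-benchmarks" is a placeholder branch name.
using Pkg
Pkg.add("AirspeedVelocity")
Pkg.build("AirspeedVelocity")  # builds the `benchpkg` executable (into ~/.julia/bin by default)

# Benchmark both revisions and print the comparison — the same data the
# GitHub Action posts as a PR comment. Assumes ~/.julia/bin is on PATH.
run(`benchpkg ProximalOperators --rev=master,fix-benchmarks`)
```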
@hakkelt AirspeedVelocity is interesting, after a quick look. I will check out this branch locally and try it, to get a feeling for how it works. But assuming that it's fine, I would be ok with the change. What's important is that benchmarks run correctly in GH workflows in some way, and that regressions are spotted. I restarted the 1.9 macOS tests, which appear to have succeeded now. I guess one thing we should consider (maybe in a separate PR) is bumping the Julia versions in the test workflow to 1 + LTS? I also believe we could remove the codecov stuff (likewise in a separate PR).
The benchmark report comment looks great, btw.
Yes, I agree that replacing Julia 1.9 in the tests with the LTS is a good idea. It's pretty unlikely that there are any Julia 1.9 users who want to use the latest ProximalOperators version, but who knows? 😄 On the other hand, there is also a slight chance that the tests pass on Julia 1.9 but fail on 1.10... It would be possible to test on 1.9, the LTS, and the latest, but I don't think it's worth wasting electricity on that; it's probably better to focus on the LTS than to support outdated Julia versions.

I'm not sure what you mean by removing the codecov stuff. Btw, I checked the report on the Codecov site and realized that only a single coverage report was uploaded for this PR, whereas the last merged PR had 6 reports uploaded, corresponding to the 6 test jobs. I didn't find any warnings or notices explaining why the reports of the otherwise successful tests weren't uploaded. So I think it is just a spurious regression in the code-coverage statistics that will hopefully be fixed automatically for the next PR.
When trying to fix the failing benchmarking CI job, I found AirspeedVelocity.jl. I tried running the benchmarks with it, and I found it more intuitive, both when running benchmarks locally from the CLI and when writing the configuration for the GitHub action. Additionally, it prints the benchmarking results into a comment. This package also removes the need for the custom benchmark-running script runbenchmarks.jl.
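For context: AirspeedVelocity runs the standard BenchmarkTools `SUITE` defined in `benchmark/benchmarks.jl`, which is why no custom runner script is needed. Below is a hedged sketch of what such a file can look like; the entry shown (the proximal mapping of an L1 norm) is illustrative, not the repository's actual suite:

```julia
# benchmark/benchmarks.jl — illustrative sketch, not the repo's actual suite.
# AirspeedVelocity evaluates the BenchmarkTools `SUITE` defined here on each
# revision passed via --rev and posts the comparison as a PR comment.
using BenchmarkTools
using ProximalOperators

const SUITE = BenchmarkGroup()

# Hypothetical entry: proximal mapping of the L1 norm on a random vector.
let x = randn(1000), f = NormL1(1.0)
    SUITE["NormL1"] = @benchmarkable prox($f, $x, 1.0)
end
```

Since this follows the plain BenchmarkTools convention, the suite also stays runnable directly with `BenchmarkTools.run(SUITE)`, independent of AirspeedVelocity.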