Fix experiment-dataset linking when running evals with a dataset #103
Merged: delner merged 1 commit into braintrustdata:main, Feb 17, 2026
Conversation
delner (Collaborator) approved these changes on Feb 17, 2026 and left a comment:
Looks good from what I can see! The version lookup is a bit quirky, but I think we'll need to rework how datasets are constructed soon anyway (outside the scope of this PR).
I'll enable the workflow, and if everything is good in CI, we'll merge.
de43aec to 4254b82
This was released in v0.1.4.
The issue

When running evals with a dataset via `Braintrust::Eval.run`, the resulting experiment is not linked to the dataset in the Braintrust UI. The UI shows "Rows not attached to a dataset" because `dataset_id` and `dataset_version` are never included in the experiment creation request:

- `Eval.resolve_dataset` resolved a dataset to an array of cases but discarded the `Dataset` object, so `dataset_obj.id` was never captured.
- `Experiments#create` did not accept or send `dataset_id` and `dataset_version` in the `POST /v1/experiment` payload, even though the API supports both fields.
- Additionally, `Dataset#version` returns `nil` when the dataset is not explicitly pinned to a version. The Python SDK handles this by computing the version as `max(_xact_id)` across all records in the fetched dataset, but the Ruby SDK does not.

Fix
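As a concrete illustration of the `max(_xact_id)` fallback, here is a minimal Ruby sketch; the record shape and field names are assumptions for illustration, not the SDK's actual internals.

```ruby
# Hypothetical record shape; real Braintrust dataset records carry _xact_id
# transaction metadata, but the other fields here are illustrative.
records = [
  { input: "a", expected: "A", _xact_id: "1000192" },
  { input: "b", expected: "B", _xact_id: "1000205" },
]

pinned_version = nil # Dataset#version returns nil when no version is pinned

# Fall back to the highest transaction id across the fetched records,
# mirroring the Python SDK's max(_xact_id) computation.
dataset_version = pinned_version || records.map { |r| r[:_xact_id] }.compact.max

puts dataset_version # => "1000205"
```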
- `Eval.resolve_dataset` now returns a hash with `:cases`, `:dataset_id`, and `:dataset_version` instead of a plain array. When no pinned version is available, it computes `dataset_version` from `max(_xact_id)` across fetched records (matching the Python SDK behavior).
- `Eval.run` retrieves `dataset_id` and `dataset_version` from the resolved dataset and forwards them to the experiment-creation process.
- `Experiments#create` accepts optional `dataset_id` and `dataset_version` keyword arguments and includes them in the API payload when present.

After the fix is applied, the experiment is linked to its dataset in the UI.
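A hedged sketch of the payload side of the change: `experiment_create_body` is a hypothetical helper, and the real `Experiments#create` signature may differ, but it shows how the fields can be included only when present.

```ruby
# Minimal sketch (assumed keyword-argument shape, not the SDK's real method).
def experiment_create_body(project_id:, name:, dataset_id: nil, dataset_version: nil)
  body = { project_id: project_id, name: name }
  # Only include the linking fields when a dataset was actually resolved,
  # so evals without a dataset keep sending the same payload as before.
  body[:dataset_id] = dataset_id unless dataset_id.nil?
  body[:dataset_version] = dataset_version unless dataset_version.nil?
  body
end

with_dataset = experiment_create_body(
  project_id: "proj_1", name: "eval-run",
  dataset_id: "ds_123", dataset_version: "42"
)
without_dataset = experiment_create_body(project_id: "proj_1", name: "eval-run")
# with_dataset carries :dataset_id and :dataset_version; without_dataset omits both.
```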
Tests

- Added an assertion to the existing dataset eval test verifying that `dataset_id` and `dataset_version` are sent in the `POST /v1/experiment` request body.
- Added `test_eval_run_without_dataset_does_not_send_dataset_fields` to verify that `dataset_id` and `dataset_version` are `nil` when no dataset is provided.
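A sketch of the shape of these tests: the real suite stubs the SDK's HTTP layer, while `build_experiment_body` here is a hypothetical stand-in for the payload logic under test.

```ruby
require "minitest/autorun"

# Hypothetical helper mirroring the experiment-creation payload logic.
def build_experiment_body(dataset_id: nil, dataset_version: nil)
  body = { name: "eval-run" }
  body[:dataset_id] = dataset_id unless dataset_id.nil?
  body[:dataset_version] = dataset_version unless dataset_version.nil?
  body
end

class EvalDatasetLinkingSketch < Minitest::Test
  def test_eval_run_with_dataset_sends_dataset_fields
    body = build_experiment_body(dataset_id: "ds_123", dataset_version: "42")
    assert_equal "ds_123", body[:dataset_id]
    assert_equal "42", body[:dataset_version]
  end

  def test_eval_run_without_dataset_does_not_send_dataset_fields
    body = build_experiment_body
    assert_nil body[:dataset_id]
    assert_nil body[:dataset_version]
  end
end
```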