[WIP/ENH] Finally, batched bundle recognition#187
Open
36000 wants to merge 1 commit into tractometry:main
Conversation
Contributor
Pull request overview
This PR refactors pyAFQ’s bundle recognition pipeline to support batched (chunked) recognition—intended to bound peak memory while segmenting very large tractograms (e.g., TRX) by running “chunk-local” criteria per chunk and then merging survivors for a “global” phase.
Changes:
- Introduces a chunked recognition flow (`recognize_bundles`) with per-chunk filtering and a merged global phase.
- Replaces the prior `immlib`-based preprocessing plan with a lightweight `PreprocPlan` using cached properties.
- Adjusts TRX handling to sometimes pass a TRX path through segmentation/recognition, plus a few error/logging message updates.
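The chunk-local / global split described above can be sketched in a few lines. This is a hedged illustration of the pattern, not pyAFQ's actual API: `recognize_in_chunks`, `chunk_local_filter`, and `global_filter` are hypothetical names.

```python
import numpy as np

def recognize_in_chunks(streamlines, chunk_size, chunk_local_filter, global_filter):
    """Run a cheap per-chunk filter, then a global filter on the survivors.

    Only one chunk's worth of streamlines needs to be held (and filtered)
    at a time, which is what bounds peak memory.
    """
    survivors = []
    for start in range(0, len(streamlines), chunk_size):
        chunk = streamlines[start:start + chunk_size]
        # Chunk-local phase: criteria that can be evaluated per streamline.
        survivors.extend(sl for sl in chunk if chunk_local_filter(sl))
    # Global phase: criteria that must see all surviving streamlines at once.
    return [sl for sl in survivors if global_filter(sl, survivors)]

# Toy usage: keep streamlines with more than 2 points, then keep only those
# at or above the median length of the survivors.
sls = [np.zeros((n, 3)) for n in (1, 3, 5, 7)]
kept = recognize_in_chunks(
    sls, chunk_size=2,
    chunk_local_filter=lambda sl: len(sl) > 2,
    global_filter=lambda sl, all_sl: len(sl) >= np.median([len(s) for s in all_sl]),
)
```

The key design point is that only criteria that are independent per streamline can run in the chunk-local phase; anything relying on the full population (e.g., relative thresholds) has to wait for the merged global phase.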
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| AFQ/tasks/tractography.py | Updates the "no streamlines generated" error message text. |
| AFQ/tasks/segmentation.py | Adjusts TRX loading behavior to optionally pass a TRX path through to recognition. |
| AFQ/recognition/utils.py | Updates recognition utilities (logging → tqdm.write) and adds chunk-survivor export/merge helpers. |
| AFQ/recognition/recognize.py | Routes recognition through the new batched recognition entrypoint and adds chunk_size. |
| AFQ/recognition/preprocess.py | Replaces the previous preprocessing-plan machinery with PreprocPlan. |
| AFQ/recognition/criteria.py | Major refactor: splits criteria into chunk-local/global phases and implements recognize_bundles chunking/merge logic. |
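The logging → `tqdm.write` change in `AFQ/recognition/utils.py` is presumably motivated by progress bars: plain `print`/logging output interleaves badly with an active `tqdm` bar, while `tqdm.write` clears and redraws the bar around the message. A minimal illustration (the loop and message are invented for demonstration):

```python
from tqdm import tqdm

messages = []
# The bar is disabled here so the snippet runs cleanly anywhere;
# with a live bar, tqdm.write keeps the bar from being corrupted.
for i in tqdm(range(3), disable=True):
    if i == 1:
        tqdm.write(f"survivor chunk {i} exported")
    messages.append(i)
```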
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
```python
new_copy.roi_closest = self.roi_closest.copy()
if hasattr(self, "roi_dists"):
    new_copy.roi_dists = self.roi_dists.copy()
    self.roi_dists = self.roi_dists.copy()
```
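The guarded-copy pattern quoted above can be exercised in isolation. This is a hedged sketch: the class and setup below are hypothetical, and only the attribute names come from the diff. The diff's trailing `self.roi_dists = self.roi_dists.copy()` (copying self's own attribute back onto itself) appears redundant inside a copy method and is omitted here.

```python
import numpy as np

class PreprocState:
    """Illustrative stand-in for the reviewed object, not pyAFQ's PreprocPlan."""

    def copy(self):
        new_copy = PreprocState()
        new_copy.roi_closest = self.roi_closest.copy()
        # roi_dists is optional, so guard with hasattr before copying.
        if hasattr(self, "roi_dists"):
            new_copy.roi_dists = self.roi_dists.copy()
        return new_copy

state = PreprocState()
state.roi_closest = np.array([0, 1])
copied = state.copy()
copied.roi_closest[0] = 99  # mutating the copy leaves the original intact
```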
```python
else:
    tg_path = None

n_streamlines = len(tg)
```
Comment on lines +709 to +712
```python
if tg_path is not None:
    tg = load_trx(tg_path, img)

for bundle_name in bundle_dict.bundle_names:
```
```python
)
f"Bundle {potential_criterion} is being used as a criterion in "
f"the definition of bundle {bundle_name}, however this bundle "
"was not found. This could be because of insufficient streamlines"
```
Comment on lines +211 to +213
```python
tg = load_trx(tg, img)

n_streamlines = len(tg)
```
Comment on lines 64 to 70
```python
elif streamlines.endswith(".trx"):
    is_trx = True
    tg = load_trx(streamlines, data_imap["dwi"])
    if segmentation_params["nb_streamlines"] or segmentation_params["nb_points"]:
        tg = load_trx(streamlines, data_imap["dwi"])
    else:
        tg = streamlines
elif streamlines.endswith(".tck.gz"):
```
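The dispatch above loads the TRX eagerly only when segmentation parameters (`nb_streamlines`/`nb_points`) require resampling the arrays; otherwise the path is handed through so recognition can load it in chunks. A hedged sketch of that decision, with `resolve_tractogram` and the `loader` argument invented for illustration (pyAFQ's actual `load_trx` signature is not reproduced here):

```python
import os

def resolve_tractogram(path, loader, needs_resampling):
    """Load a .trx eagerly only when its arrays must be resampled up front;
    otherwise defer by returning the path for later chunked loading."""
    if path.endswith(".trx"):
        if needs_resampling:
            return loader(path)  # eager load: resampling needs the arrays
        return path              # defer: recognition loads it chunk by chunk
    raise ValueError(f"unsupported extension: {os.path.splitext(path)[1]}")

deferred = resolve_tractogram("subj.trx", loader=lambda p: f"loaded:{p}",
                              needs_resampling=False)
```

Deferring the load is what makes the chunked recognition flow worthwhile: the full tractogram never has to be materialized in memory unless an operation genuinely needs all of it.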
Expect to segment ~100M streamlines in ~1 hour on your PC
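A rough back-of-envelope for why chunking bounds memory at that scale. All numbers below are assumptions for illustration, not measurements from this PR:

```python
# Assumed: float32 xyz coordinates, ~100 points per streamline on average.
bytes_per_point = 3 * 4          # x, y, z as float32
points_per_streamline = 100      # assumed average length
chunk_size = 1_000_000           # hypothetical chunk_size setting

# Peak coordinate memory for one chunk, vs. the full 100M tractogram.
chunk_gb = chunk_size * points_per_streamline * bytes_per_point / 1e9
full_gb = 100 * chunk_gb         # 100M streamlines loaded all at once
```

Under these assumptions a 1M-streamline chunk needs about 1.2 GB of coordinate data, while the full 100M-streamline tractogram would need about 120 GB — far beyond a typical workstation, which is the case the chunk-local/global split is built for.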