## Description
When using `SparkRenderer` with paged LOD splats in an on-demand rendering architecture (render only when explicitly requested, no continuous render loop), the `onDirty` callback chain has gaps that cause splats to not fully load.
## Reproduction
- Create a `SparkRenderer` with `preUpdate: true` and an `onDirty` callback that schedules a new render frame
- Add a `SplatMesh` with `{ paged: new PagedSplats({ rootUrl }), lod: true }`
- Trigger a single render (the camera is static after this)
- Observe: the splat sometimes loads fully, sometimes stops partway through
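For context, the on-demand scheduling in our `onDirty` callback looks roughly like the following sketch. The `SparkRenderer` option names (`preUpdate`, `onDirty`) are from the issue; `makeOnDemandScheduler` and `renderOnce` are illustrative names, not Spark API:

```javascript
// Coalesces any number of onDirty notifications into a single scheduled frame.
// `renderOnce` renders one frame; `requestFrame` is e.g. requestAnimationFrame.
function makeOnDemandScheduler(renderOnce, requestFrame) {
  let pending = false;
  return function scheduleFrame() {
    if (pending) return; // a frame is already queued; coalesce
    pending = true;
    requestFrame(() => {
      pending = false;
      renderOnce(); // render exactly one frame, then go idle
    });
  };
}

// Wiring (hypothetical variable names):
// const scheduleFrame = makeOnDemandScheduler(() => renderer.render(scene, camera), requestAnimationFrame);
// const spark = new SparkRenderer({ renderer, preUpdate: true, onDirty: scheduleFrame });
```

The key property is that rendering only happens when something calls the scheduler, which is why a missing `setDirty()` stalls loading.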
## Workaround
We currently call `invalidateScene()` every 200 ms while `pager.fetchers`, `pager.fetched`, `pager.newUploads`, or `pager.lodTreeUpdates` are non-empty.
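A minimal sketch of that polling workaround, assuming the pager queue shapes we observed (the field names are from the issue; Spark's actual internal types may differ):

```javascript
// Returns true while any of the pager's work queues still hold items.
// Handles both Map-like (.size) and array-like (.length) queues, since we
// are not certain of the internal types.
function pagerIsBusy(pager) {
  const size = (q) =>
    q == null ? 0 : typeof q.size === "number" ? q.size : q.length || 0;
  return (
    size(pager.fetchers) > 0 ||
    size(pager.fetched) > 0 ||
    size(pager.newUploads) > 0 ||
    size(pager.lodTreeUpdates) > 0
  );
}

// Poll every 200 ms and invalidate the scene while work is still in flight.
function startPagerPolling(pager, invalidateScene, intervalMs = 200) {
  const id = setInterval(() => {
    if (pagerIsBusy(pager)) invalidateScene();
  }, intervalMs);
  return () => clearInterval(id); // call the returned function to stop polling
}
```

This is crude (it burns a timer even when nothing is loading) but it reliably closes the notification gap described below.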
## Potential Root Cause Analysis (AI-assisted, via Claude)
We used Claude to trace through the compiled SparkRenderer source and it identified the following potential explanation. This is AI-assisted analysis — it may not be fully accurate, but we're sharing it in case it's helpful.
`setDirty()` appears to be called in only 3 places inside `SparkRenderer`:
1. After `generate()` in `updateInternal` — but only when `doUpdate = true`. When the camera is static and the version hasn't changed, `doUpdate = false` and this is skipped.
2. After sort completes in `driveSort` — but `driveSort` returns immediately when `sortDirty = false`, which happens when `doUpdate = false` (since `sortDirty = true` is only set inside the `doUpdate` branch).
3. After `updateLodInstances` in `driveLod` — but only when `tryExclusive` succeeds (LOD worker is free) AND `lodDirty = true`.
Claude's hypothesis is that the chain breaks when all three conditions fail simultaneously:
- Camera is static → `doUpdate = false` → no `setDirty` from (1) or (2)
- LOD worker is busy processing a previous batch → `tryExclusive` returns `null` → no `setDirty` from (3)

Meanwhile, chunks are still being downloaded by the pager's fetchers in the background. When they complete, nothing calls `setDirty()`, so no new render is scheduled to process them.
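The failure condition can be summarized as a predicate over the three gates identified above (a deliberate simplification for illustration, not the actual `SparkRenderer` control flow):

```javascript
// True if at least one of the three setDirty() call sites would fire this frame.
function setDirtyWouldFire({ doUpdate, lodWorkerFree, lodDirty }) {
  const afterGenerate = doUpdate;             // (1) only runs in the doUpdate branch
  const afterSort = doUpdate;                 // (2) sortDirty is only set when doUpdate is true
  const afterLod = lodWorkerFree && lodDirty; // (3) tryExclusive must succeed AND lodDirty be set
  return afterGenerate || afterSort || afterLod;
}

// Static camera + busy LOD worker: no site fires, even with fetches in flight.
// setDirtyWouldFire({ doUpdate: false, lodWorkerFree: false, lodDirty: true }) → false
```

Note that the in-flight fetches do not appear in the predicate at all, which is exactly the gap: completed downloads have no path to `setDirty()`.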
The chain eventually resumes when the LOD worker finishes its current batch (calling `setDirty` at the end of `tryExclusive`), but the gap can be significant — especially during initial load when `initLodTree` involves network I/O for the `.rad` metadata.
Additionally, `consumeLodTreeUpdates()` may return empty between LOD traversals even though chunks are still in flight. If `lodDirty` is false at that point, `updateLodInstances` is skipped and `setDirty` is never called.
## Suggested Fix (AI-assisted, via Claude)
Consider calling `setDirty()` when paged chunk fetches complete (e.g., in `processFetched` or after `driveFetchers` starts new downloads), so that on-demand renderers are notified that new data is available for processing.
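Until something like that lands upstream, an application-side approximation is to wrap the fetch-completion path so it also notifies the renderer. A hedged sketch — we don't know whether `processFetched` is actually reachable or safe to wrap from the outside, so this is only the shape of the fix, not a patch:

```javascript
// Wraps a completion handler so that every time it runs, setDirty() is also
// invoked, waking up an on-demand renderer.
function withDirtyNotify(handler, setDirty) {
  return function (...args) {
    const result = handler.apply(this, args);
    setDirty(); // new data arrived: schedule a render to process it
    return result;
  };
}

// Hypothetical usage, if the pager's handler were reachable:
// pager.processFetched = withDirtyNotify(pager.processFetched.bind(pager), () => spark.setDirty());
```

The same wrapper could cover `driveFetchers` starting new downloads; either way the invariant is "any state change that produces work for the next frame must end in `setDirty()`".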