feat: LEP-5 availability commitment support (Merkle proof challenge) #274
mateeullahmalik wants to merge 1 commit into master from
Conversation
Force-pushed 613a4e6 to a79249d
Re-reviewed after efeba07. Two previous issues (HashLeaf domain separation and unconditional BuildCommitmentFromFile) are fixed. Two remain, and one new issue was found:
pkg/cascadekit/commitment.go (outdated)
```go
	buf[len(root)+2] = byte(counter >> 8)
	buf[len(root)+3] = byte(counter)

	h := merkle.HashLeaf(counter, buf) // reuse BLAKE3 — domain doesn't matter here
```
merkle.HashLeaf(counter, buf) is a leaf-hashing function that includes domain separation (typically a 0x00 prefix byte and the leaf index). The comment says "domain doesn't matter here" but it does: if the chain-side derives challenge indices using raw BLAKE3 (without the leaf domain prefix), the SDK and chain will compute different indices, causing proof verification to fail. The counter is also effectively included twice -- once as the HashLeaf index parameter and once in the big-endian bytes already appended to buf. Consider using a plain BLAKE3 hash of buf instead, or confirm the chain uses the identical HashLeaf call for index derivation.
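A minimal sketch of the reviewer's suggested alternative: hash the `root || counter` buffer with a plain, non-domain-separated hash so the SDK and chain derive identical indices. This is a hypothetical illustration, not the PR's code; SHA-256 stands in for BLAKE3 here to keep the example self-contained, and `deriveIndex` is an invented name.

```go
package main

import (
	"crypto/sha256" // stand-in for BLAKE3, purely to keep this sketch dependency-free
	"encoding/binary"
	"fmt"
)

// deriveIndex hashes root||uint32be(counter) with a plain hash (no leaf
// prefix, no index parameter) and reduces the first 8 bytes mod numChunks.
func deriveIndex(root []byte, counter uint32, numChunks uint32) uint32 {
	buf := make([]byte, len(root)+4)
	copy(buf, root)
	binary.BigEndian.PutUint32(buf[len(root):], counter)
	h := sha256.Sum256(buf) // plain hash of buf; counter appears only once, inside buf
	val := binary.BigEndian.Uint64(h[:8])
	return uint32(val % uint64(numChunks))
}

func main() {
	root := make([]byte, 32)
	fmt.Println(deriveIndex(root, 0, 8) < 8)
}
```

Whatever hash is chosen, the essential property is that both sides compute the byte-identical function; the review's point is that `HashLeaf`'s domain separation silently breaks that if the chain uses the raw hash.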
sdk/action/client.go (outdated)
```go
	commitment, _, err := cascadekit.BuildCommitmentFromFile(filePath, challengeCount, minChunks)
	if err != nil {
		return actiontypes.CascadeMetadata{}, "", "", fmt.Errorf("build availability commitment: %w", err)
	}
```
BuildCommitmentFromFile is called unconditionally here, but it returns an error for files smaller than MinTotalSize (4 bytes) at commitment.go:89. This means BuildCascadeMetadataFromFile will now fail for any file under 4 bytes, which is a regression from the pre-LEP-5 behavior. The PR description states backward compatibility ("Pre-LEP-5 actions follow the existing flow"), but the supernode side gates on AvailabilityCommitment != nil while the SDK side never produces a nil commitment. Consider catching the "file too small" error and falling back to commitment = nil so small files can still be registered without LEP-5.
```go
	for uint32(len(indices)) < m {
		// BLAKE3(root || uint32be(counter))
		buf := make([]byte, len(root)+4)
		copy(buf, root)
		buf[len(root)] = byte(counter >> 24)
		buf[len(root)+1] = byte(counter >> 16)
		buf[len(root)+2] = byte(counter >> 8)
		buf[len(root)+3] = byte(counter)

		h := merkle.HashLeaf(counter, buf) // reuse BLAKE3 — domain doesn't matter here
		// Use first 8 bytes as uint64 mod numChunks
		val := uint64(h[0])<<56 | uint64(h[1])<<48 | uint64(h[2])<<40 | uint64(h[3])<<32 |
			uint64(h[4])<<24 | uint64(h[5])<<16 | uint64(h[6])<<8 | uint64(h[7])
		idx := uint32(val % uint64(numChunks))

		if _, exists := used[idx]; !exists {
			used[idx] = struct{}{}
			indices = append(indices, idx)
		}
		counter++
	}
```
When m == numChunks (which happens when challengeCount >= numChunks, e.g. 8 challenges on a file that produces 4-8 chunks), this loop must discover every index in [0, numChunks) through random sampling. With small numChunks, the last few indices become increasingly unlikely to hit, and counter has no upper bound. While it will terminate probabilistically, adding a safety cap (e.g. counter > m * 100) with an error return would prevent a potential hang in degenerate cases.
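The capped loop the review suggests could look like this. A self-contained sketch, not the PR's code: `deriveChallengeIndices` is an invented name, SHA-256 stands in for BLAKE3, and the `m * 100` cap is the review's example value.

```go
package main

import (
	"crypto/sha256" // stand-in for BLAKE3 to keep the sketch dependency-free
	"encoding/binary"
	"fmt"
)

// deriveChallengeIndices samples distinct chunk indices from hash output,
// bailing out with an error after m*100 attempts instead of spinning
// unboundedly in degenerate cases (e.g. m == numChunks with tiny numChunks).
func deriveChallengeIndices(root []byte, m, numChunks uint32) ([]uint32, error) {
	used := make(map[uint32]struct{})
	indices := make([]uint32, 0, m)
	buf := make([]byte, len(root)+4)
	copy(buf, root)
	for counter := uint32(0); uint32(len(indices)) < m; counter++ {
		if counter > m*100 {
			return nil, fmt.Errorf("index derivation did not converge after %d attempts", counter)
		}
		binary.BigEndian.PutUint32(buf[len(root):], counter)
		h := sha256.Sum256(buf)
		idx := uint32(binary.BigEndian.Uint64(h[:8]) % uint64(numChunks))
		if _, exists := used[idx]; !exists {
			used[idx] = struct{}{}
			indices = append(indices, idx)
		}
	}
	return indices, nil
}

func main() {
	idxs, err := deriveChallengeIndices(make([]byte, 32), 4, 4)
	fmt.Println(err == nil, len(idxs))
}
```

With m == numChunks == 4 the loop is a coupon-collector walk over 4 values, so the cap is astronomically unlikely to fire on honest inputs; it only guards the degenerate tail.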
```go
	for uint32(len(indices)) < m {
		// BLAKE3(root || uint32be(counter))
		buf := make([]byte, len(root)+4)
		copy(buf, root)
		buf[len(root)] = byte(counter >> 24)
		buf[len(root)+1] = byte(counter >> 16)
		buf[len(root)+2] = byte(counter >> 8)
		buf[len(root)+3] = byte(counter)
```
Minor: buf is re-allocated every iteration but its length (len(root)+4) is constant. Moving the allocation before the loop and just overwriting the counter bytes each iteration avoids unnecessary GC pressure.
Suggested change (allocation hoisted out of the loop):

```go
	buf := make([]byte, len(root)+4)
	copy(buf, root)
	for uint32(len(indices)) < m {
		// BLAKE3(root || uint32be(counter))
		buf[len(root)] = byte(counter >> 24)
		buf[len(root)+1] = byte(counter >> 16)
		buf[len(root)+2] = byte(counter >> 8)
		buf[len(root)+3] = byte(counter)
```
Force-pushed 9ff4779 to efeba07
SDK Client (BuildCascadeMetadataFromFile):
- Build Merkle tree from file chunks using chain SVC params
- Generate challenge indices and AvailabilityCommitment
- Attach commitment to CascadeMetadata at registration

Supernode Server (Register):
- After data hash verification, verify Merkle root against on-chain commitment
- Generate chunk proofs for challenged indices
- Pass proofs through SimulateFinalizeAction and FinalizeAction

Lumera Client Layer:
- Thread ChunkProofs through FinalizeCascadeAction and SimulateFinalizeCascadeAction
- Include chunk_proofs in finalize metadata JSON
- Update interface, impl, helpers, mocks, test fakes

New: pkg/cascadekit/commitment.go
- BuildCommitmentFromFile: chunk file, build tree, derive indices
- VerifyCommitmentRoot: rebuild tree and verify against on-chain root
- GenerateChunkProofs: produce Merkle proofs for challenge indices
- SelectChunkSize: adaptive chunk sizing per LEP-5 spec

go.mod: enable local lumera replace for PR-103 compatibility
```go
		return nil, fmt.Errorf("build merkle tree for verification: %w", err)
	}

	if tree.Root != [merkle.HashSize]byte(commitment.Root) {
```
The slice-to-array conversion [merkle.HashSize]byte(commitment.Root) panics if commitment.Root is not exactly 32 bytes. While the chain validates root length at registration, the supernode deserializes this from a protobuf message received over the network -- a corrupted or truncated payload could produce a root of unexpected length. Adding a length guard before the conversion would prevent a runtime panic.
Suggested change:

```go
	if len(commitment.Root) != merkle.HashSize {
		return nil, fmt.Errorf("invalid root length: got %d, expected %d", len(commitment.Root), merkle.HashSize)
	}
	if tree.Root != [merkle.HashSize]byte(commitment.Root) {
```
```go
		return nil, fmt.Errorf("build merkle tree for verification: %w", err)
	}

	if tree.Root != [merkle.HashSize]byte(commitment.Root) {
```
[merkle.HashSize]byte(commitment.Root) will panic at runtime if commitment.Root is not exactly 32 bytes. Since commitment.Root comes from on-chain protobuf deserialization, a malformed or truncated value would crash the supernode. The chain-side equivalent (bytesToMerkleHash in svc.go) validates the length before converting. Consider adding a length check here, or extracting a helper similar to the chain's approach.
Suggested change:

```go
	var expectedRoot [merkle.HashSize]byte
	if len(commitment.Root) != merkle.HashSize {
		return nil, fmt.Errorf("invalid commitment root length: got %d, expected %d", len(commitment.Root), merkle.HashSize)
	}
	copy(expectedRoot[:], commitment.Root)
	if tree.Root != expectedRoot {
```
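The panic both comments warn about, and the guard that prevents it, can be demonstrated in isolation. A stand-alone sketch: `hashSize` and `checkRoot` are invented names standing in for `merkle.HashSize` and the verification site.

```go
package main

import "fmt"

const hashSize = 32 // stands in for merkle.HashSize

// checkRoot validates the slice length before the slice-to-array conversion;
// in Go, converting a slice to an array type panics if lengths differ.
func checkRoot(treeRoot [hashSize]byte, commitmentRoot []byte) error {
	if len(commitmentRoot) != hashSize {
		return fmt.Errorf("invalid commitment root length: got %d, expected %d", len(commitmentRoot), hashSize)
	}
	if treeRoot != [hashSize]byte(commitmentRoot) {
		return fmt.Errorf("commitment root mismatch")
	}
	return nil
}

func main() {
	var root [hashSize]byte
	fmt.Println(checkRoot(root, make([]byte, 16))) // truncated root: error, no panic
	fmt.Println(checkRoot(root, make([]byte, 32))) // matching zero root: <nil>
}
```

Without the length check, `[hashSize]byte(commitmentRoot)` on a 16-byte slice would crash the process, which is exactly the failure mode a truncated protobuf payload could trigger.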
What
Adds LEP-5 availability commitment support. When a client registers a cascade action, the SDK now builds a Merkle tree over the file's chunks and submits the root + challenge indices as part of the on-chain metadata. When the supernode receives the file for processing, it independently verifies the Merkle root, generates inclusion proofs for the challenged chunks, and includes those proofs in the finalize transaction.
Why
LEP-5 introduces Storage Verification Challenges (SVC). The chain needs to verify that supernodes actually store the data they claim to store. The availability commitment (Merkle root) is submitted at registration time, and chunk proofs are submitted at finalization — the chain then verifies the proofs against the stored root. This is the foundation for on-chain storage accountability.
How it works
SDK side (client registration):
- Read `svc_challenge_count` and `svc_min_chunks_for_challenge` from chain params
- Attach `AvailabilityCommitment` to cascade metadata

Supernode side (file processing):
- Verify the Merkle root against the on-chain commitment
- Generate inclusion proofs for the challenged chunks and include them in the finalize transaction
Backward compatibility:
- Pre-LEP-5 actions follow the existing flow; the supernode gates the LEP-5 path on `AvailabilityCommitment != nil`
- The chain handles `nil` proofs gracefully

Chain dependency
Depends on lumera#103, which adds the SVC params, commitment validation, and proof verification to the chain. Currently pinned to a pseudo-version (`v1.11.1-0.20260308102614-4d4f1ce3f65e`); will update to a tagged release once lumera#103 is merged.

Testing