Conversation


@alpe alpe commented Jan 6, 2026

Replaces #2836

alpe added 28 commits November 12, 2025 15:16
* main:
  fix: remove duplicate error logging in light node shutdown (#2841)
  chore: fix incorrect function name in comment (#2840)
  chore: remove sequencer go.mod (#2837)
* main:
  build(deps): Bump the go_modules group across 2 directories with 3 updates (#2846)
  build(deps): Bump github.com/dvsekhvalnov/jose2go from 1.7.0 to 1.8.0 in /test/e2e (#2851)
  build(deps): Bump github.com/consensys/gnark-crypto from 0.18.0 to 0.18.1 in /test/e2e (#2844)
  build(deps): Bump github.com/cometbft/cometbft from 0.38.17 to 0.38.19 in /test/e2e (#2843)
  build(deps): Bump github.com/dvsekhvalnov/jose2go from 1.6.0 to 1.7.0 in /test/e2e (#2845)
(cherry picked from commit c44cd77e665f6d5d463295c6ed61c59a56d88db3)
* main:
  chore: reduce log noise (#2864)
  fix: sync service for non zero height starts with empty store (#2834)
  build(deps): Bump golang.org/x/crypto from 0.43.0 to 0.45.0 in /execution/evm (#2861)
  chore: minor improvement for docs (#2862)
* main:
  chore: bump da (#2866)
  chore: bump  core (#2865)
* main:
  chore: fix some comments (#2874)
  chore: bump node in evm-single (#2875)
  refactor(syncer,cache): use compare and swap loop and add comments (#2873)
  refactor: use state da height as well (#2872)
  refactor: retrieve highest da height in cache (#2870)
  chore: change from event count to start and end height (#2871)
* main:
  chore: remove extra github action yml file (#2882)
  fix(execution/evm): verify payload status (#2863)
  feat: fetch included da height from store (#2880)
  chore: better output on errors (#2879)
  refactor!: create da client and split cache interface (#2878)
  chore!: rename `evm-single` and `grpc-single` (#2839)
  build(deps): Bump golang.org/x/crypto from 0.42.0 to 0.45.0 in /tools/da-debug in the go_modules group across 1 directory (#2876)
  chore: parallel cache de/serialization (#2868)
  chore: bump blob size (#2877)
* main:
  build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  refactor(block): centralize timeout in client (#2903)
  build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  chore: bump default timeout (#2902)
  fix: revert default db (#2897)
  refactor: remove obsolete // +build tag (#2899)
  fix:da visualiser namespace  (#2895)
  refactor: omit unnecessary reassignment (#2892)
  build(deps): Bump the all-go group across 5 directories with 6 updates (#2881)
  chore: fix inconsistent method name in retryWithBackoffOnPayloadStatus comment (#2889)
  fix: ensure consistent network ID usage in P2P subscriber (#2884)
  build(deps): Bump golangci/golangci-lint-action from 9.0.0 to 9.1.0 (#2885)
  build(deps): Bump actions/checkout from 5 to 6 (#2886)
* main: (34 commits)
  feat: make reaper poll duration configurable (#2951)
  chore!: move sequencers to pkg (#2931)
  feat: Ensure Header integrity on DA (#2948)
  feat(testda): add header support with GetHeaderByHeight method (#2946)
  chore: improve code comments clarity (#2947)
  chore(sequencers): optimize store check (#2945)
  fix: make evm_execution more robust (#2942)
  fix(sequencers/single): deterministic queue (#2938)
  fix(block): fix init logic sequencer for da epoch fetching (#2926)
  feat: use DA timestamp (#2939)
  chore: improve code comments clarity (#2943)
  build(deps): bump libp2p (#2937)
  build(deps): Bump actions/cache from 4 to 5 (#2934)
  build(deps): Bump actions/download-artifact from 6 to 7 (#2933)
  build(deps): Bump actions/upload-artifact from 5 to 6 (#2932)
  feat: DA Client remove interface part 3, replace types with new code (#2910)
  DA Client remove interface: Part 2.5, create e2e test to validate that a blob is posted in DA layer. (#2920)
  fix(syncing): skip forced txs checks for p2p blocks (#2922)
  build(deps): Bump the all-go group across 5 directories with 5 updates (#2919)
  chore: loosen syncer state check (#2927)
  ...

github-actions bot commented Jan 6, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Jan 12, 2026, 1:03 PM |


claude bot commented Jan 6, 2026

Claude finished @alpe's task in 5m 20s


Code Review: High Availability via RAFT ✅

I've completed a comprehensive review of this RAFT high availability implementation. This is a substantial and well-architected change that introduces dynamic leader election and state replication. Below are my findings organized by severity and category.


Critical Issues

1. Bootstrap Mode Restriction Blocks Non-Bootstrap Nodes

Location: pkg/raft/node.go:112-115

The current logic prevents any node from starting if Bootstrap is false:

if !n.config.Bootstrap {
    return fmt.Errorf("raft cluster requires bootstrap mode")
}

Issue: This blocks legitimate follower nodes from joining an existing cluster. Only the first node should bootstrap; subsequent nodes should join the existing cluster without bootstrapping.

Impact: Prevents horizontal scaling - you can only run a single-node cluster.

Recommendation:

  • Allow non-bootstrap nodes to start without attempting bootstrap
  • Only execute bootstrap logic when n.config.Bootstrap == true
  • Document the bootstrap vs join workflow clearly

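A minimal sketch of the suggested gating, assuming a hashicorp/raft-backed node that carries the Bootstrap, NodeID, and RaftAddr settings named above; the struct layout and method name here are illustrative, not the PR's actual code:

import (
	"errors"
	"fmt"

	"github.com/hashicorp/raft"
)

// raftHAConfig and raftHANode are illustrative stand-ins for the PR's types.
type raftHAConfig struct {
	Bootstrap bool
	NodeID    string
	RaftAddr  string
}

type raftHANode struct {
	raft   *raft.Raft
	config raftHAConfig
}

// maybeBootstrap bootstraps a brand-new cluster only on the designated node;
// every other node starts as a follower and is added to the existing cluster
// (e.g. via AddVoter on the current leader).
func (n *raftHANode) maybeBootstrap() error {
	if !n.config.Bootstrap {
		return nil // joiner: do not bootstrap, wait to be added to the cluster
	}
	cfg := raft.Configuration{Servers: []raft.Server{{
		ID:      raft.ServerID(n.config.NodeID),
		Address: raft.ServerAddress(n.config.RaftAddr),
	}}}
	// BootstrapCluster returns ErrCantBootstrap if raft state already exists,
	// which is expected on restart and safe to ignore.
	if f := n.raft.BootstrapCluster(cfg); f.Error() != nil && !errors.Is(f.Error(), raft.ErrCantBootstrap) {
		return fmt.Errorf("bootstrap raft cluster: %w", f.Error())
	}
	return nil
}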

2. Naked Returns Swallow Errors

Locations: node/full.go:113-115 and node/full.go:117-119

if leaderElection, err = newSingleRoleElector(leaderFactory); err != nil {
    return  // naked return here - loses error!
}

Issue: The naked return statements will return (nil, nil) instead of propagating the error, masking initialization failures.

Impact: Silent failures during node startup could lead to runtime panics or undefined behavior.

Recommendation: Return the error explicitly:

if leaderElection, err = newSingleRoleElector(leaderFactory); err != nil {
    return nil, err
}



High Priority Issues

3. Race Condition in Leadership Transition

Location: pkg/raft/election.go:98-126

During the transition from follower to leader, there's a critical window where:

  1. waitForMsgsLanded() is called to sync Raft state
  2. Leadership is checked again (d.node.leaderID() != d.node.NodeID())
  3. If still leader, proceed with recovery/transition

Issue: Between the leadership check and actually starting leader operations, the node could lose leadership again, potentially leading to:

  • Double-signing if two nodes think they're leader
  • State divergence if partial operations complete

Mitigation Already Present: The code does check leadership again after waiting, which is good. However, consider:

  • Adding a mechanism to hold a "leadership lease" during transition
  • Logging more granular state transitions for debugging
  • Consider using a state machine pattern for clearer transition semantics

Current State: The implementation has reasonable safeguards, but this is an inherently tricky area. Document the race conditions and mitigation strategy clearly.
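To make the invariant concrete, here is a hedged sketch of the double-check described above. The interface and method names are stand-ins for the identifiers quoted in this review (waitForMsgsLanded, leaderID, NodeID), not the PR's actual API:

import (
	"context"
	"fmt"
)

// raftView is an illustrative stand-in for the node methods referenced above.
type raftView interface {
	WaitForMsgsLanded(ctx context.Context) error
	LeaderID() string
	NodeID() string
}

// becomeLeader syncs the local Raft log first, then re-verifies leadership
// before any leader-only work (recovery, block production) is started.
func becomeLeader(ctx context.Context, n raftView, startLeaderWork func(context.Context) error) error {
	if err := n.WaitForMsgsLanded(ctx); err != nil {
		return fmt.Errorf("wait for raft entries to land: %w", err)
	}
	if n.LeaderID() != n.NodeID() {
		// Leadership moved while we were catching up; remain a follower.
		return nil
	}
	return startLeaderWork(ctx)
}

Even with this pattern, leadership can still be lost after the second check, so the leader-side work itself must be prepared to abort when a Raft broadcast is rejected.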

4. Context Cancellation Handling in Raft Apply

Location: pkg/raft/node.go:281-303

The FSM.Apply() method is called by the Raft library and must not fail or block. The current implementation:

  • Sends to applyCh with a non-blocking send (good)
  • Logs a warning if channel is full (good)

Concern: If the apply channel consumer is slow or blocked, state updates will be dropped with only a warning logged. This could cause:

  • Followers missing state updates
  • Nodes becoming out of sync with the Raft cluster

Recommendation:

  • Monitor the "apply channel full" warning metric in production
  • Consider increasing the channel buffer size (currently 1 in raft_retriever.go:62)
  • Document that followers can always sync from DA or other peers if they miss Raft updates
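For reference, a minimal sketch of the non-blocking hand-off pattern this item describes, assuming a zerolog logger and a placeholder message type (the PR's real FSM also implements Snapshot/Restore, omitted here):

import (
	"github.com/hashicorp/raft"
	"github.com/rs/zerolog"
)

// raftApplyMsg is a placeholder for the replicated state payload.
type raftApplyMsg struct {
	Index uint64
	Data  []byte
}

type fsm struct {
	applyCh chan raftApplyMsg // buffered; its size is the knob discussed above
	logger  zerolog.Logger
}

// Apply must never block the raft library's FSM loop, so a full consumer
// channel drops the notification; followers can still catch up via DA or P2P.
func (f *fsm) Apply(l *raft.Log) interface{} {
	select {
	case f.applyCh <- raftApplyMsg{Index: l.Index, Data: l.Data}:
	default:
		f.logger.Warn().Uint64("index", l.Index).Msg("apply channel full, dropping raft notification")
	}
	return nil
}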

5. Missing Error Context in Block Broadcast

Location: block/internal/executing/executor.go:474-476

if err := e.raftNode.Broadcast(e.ctx, raftState); err != nil {
    return fmt.Errorf("failed to propose block to raft: %w", err)
}

Issue: If Raft broadcast fails, the block has already been committed to local storage (line 453) but hasn't been replicated to the cluster.

Impact:

  • Node becomes out of sync with Raft cluster
  • Could lead to state divergence
  • Next leadership transition will require rollback

Current Mitigation: The code has recovery logic (RecoverFromRaft) that handles this. However:

  • Consider whether the broadcast should happen BEFORE committing to local storage
  • Or document clearly that local state may be ahead of Raft state and will be rolled back during recovery
  • Add more context to the error (include height, hash)
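On the last point, a small illustrative tweak mirroring the snippet above, assuming the block header is in scope (the Height/Hash accessors are placeholders):

if err := e.raftNode.Broadcast(e.ctx, raftState); err != nil {
	// Include identifying context so operators can correlate the failure with a block.
	return fmt.Errorf("failed to propose block to raft (height=%d, hash=%X): %w",
		header.Height(), header.Hash(), err)
}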

Medium Priority Issues

6. Inconsistent Naming: NodeId vs NodeID

Location: pkg/rpc/server/http.go:143

Go convention uses NodeID (all caps for initialisms), but the struct has NodeId:

NodeId string `json:"node_id"`  // Should be NodeID

Recommendation: Rename to NodeID for consistency with Go conventions.

7. Debug Log at Info Level

Location: block/internal/syncing/syncer.go:184

s.logger.Debug().Uint64("header_height", state.LastSubmittedDAHeaderHeight)...

The log message previously had a +++ prefix, suggesting it was debug code:

.Msg("received raft block state")  // Previous versions had "+++" prefix

Recommendation: Ensure this is truly at the appropriate log level for production.

8. Method Name Clarity

Location: block/internal/cache/pending_headers.go:69-71

The method GetLastSubmittedDataHeight() is defined on the PendingHeaders struct, which is confusing:

func (ph *PendingHeaders) GetLastSubmittedDataHeight() uint64

Recommendation: Either rename the method or document why headers track data height.

9. Redundant Type Conversion

Location: block/internal/executing/executor.go:623-625

for i, tx := range batchData.Transactions {
    data.Txs[i] = tx  // types.Tx is already []byte
}

Recommendation: Replace with more efficient slice append:

data.Txs = append(data.Txs, batchData.Transactions...)

Architectural Observations

Positive Aspects

  1. Clean Separation of Concerns: The Runnable and Recoverable interfaces provide a clean abstraction for leader/follower modes.

  2. Comprehensive Testing: The election tests (pkg/raft/election_test.go) cover critical edge cases including:

    • Leadership loss during sync wait
    • Recovery flow when not synced
    • Worker errors surfacing correctly
  3. State Validation: The IsSyncedWithRaft() methods in both executor and syncer include hash verification, not just height checks - this is excellent for catching state divergence.

  4. Graceful Degradation: The non-blocking send to applyCh prevents the Raft cluster from being blocked by slow consumers.

  5. Recovery Mechanism: The RecoverFromRaft() implementation properly handles rollback when local state diverges from Raft consensus.

Design Considerations

  1. Bootstrap Semantics: The bootstrap workflow needs clearer documentation:

    • What happens when a node restarts?
    • How do additional nodes join?
    • What's the relationship between Bootstrap flag and Peers configuration?
  2. Leadership Transfer Logic: The leadershipTransfer() call at pkg/raft/election.go:119 is a good safety mechanism, but consider:

    • What if no suitable leader exists?
    • Should there be a delay/retry mechanism?
    • Document the tradeoff: "better to have no leader than double-sign"
  3. State Synchronization Model: There are multiple sync paths:

    • Raft consensus (primary for leaders)
    • DA layer (backup for followers)
    • P2P gossip (supplementary)

    Document the priority and fallback strategy clearly.


Concurrency & Resource Management

Strengths

  1. Proper Use of atomic.Pointer: Both FSM state and executor state use atomic.Pointer[T] for lock-free reads (a small illustration follows this list).

  2. Context Propagation: Contexts are properly propagated and cancellation is handled throughout.

  3. WaitGroup Usage: Proper use of sync.WaitGroup for goroutine lifecycle management in election.go and raft_retriever.go.

  4. Graceful Shutdown: The failover state cleanup uses timeout contexts (3s) to prevent indefinite hangs.
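A small self-contained illustration of the atomic.Pointer pattern from point 1 above; the State type is a placeholder, not the PR's actual state struct:

import "sync/atomic"

// State is a placeholder for the replicated block state.
type State struct {
	Height uint64
	Hash   []byte
}

type stateHolder struct {
	cur atomic.Pointer[State]
}

// Load is a lock-free read of the current snapshot.
func (h *stateHolder) Load() *State { return h.cur.Load() }

// Store publishes a fresh copy; readers never observe a partially written State.
func (h *stateHolder) Store(s State) { h.cur.Store(&s) }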

Areas for Improvement

  1. Raft Node Lifecycle: In node/full.go:272-276, the Raft node is started before the election begins. Consider:

    • What if Raft Start fails partway through?
    • Is the cleanup properly handled in all error paths?
  2. Channel Buffer Sizing: The applyCh buffer size is hardcoded to 1 in raft_retriever.go:62. Consider:

    • Making this configurable
    • Monitoring channel fullness metrics
  3. Resource Cleanup: In node/failover.go:206, p2pClient.Close() has // nolint: errcheck. While this is in a defer, consider logging errors for debugging.


Error Handling & Resilience

Strengths

  1. Error Wrapping: Consistent use of fmt.Errorf("context: %w", err) throughout.

  2. Multi-Error Accumulation: node/full.go properly uses errors.Join() to accumulate shutdown errors (a minimal example follows this list).

  3. Critical Error Channel: The executor has a dedicated errorCh for reporting unrecoverable errors.
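A minimal, generic example of the accumulation pattern from point 2 above; the component map is illustrative:

import (
	"errors"
	"fmt"
	"io"
)

// stopAll closes every component and reports all failures rather than only the first.
func stopAll(components map[string]io.Closer) error {
	var errs error
	for name, c := range components {
		if err := c.Close(); err != nil {
			errs = errors.Join(errs, fmt.Errorf("close %s: %w", name, err))
		}
	}
	return errs // nil when every Close succeeded
}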

Concerns

  1. Error Recovery Strategy: When Broadcast() fails after local commit (executor.go:474), the strategy isn't clear:

    • Should block production continue?
    • Should the node step down as leader?
    • Document the expected behavior
  2. Bootstrap Failure Handling: If bootstrap fails in node.go:143, the node returns an error but the Raft instance may be partially initialized. Consider cleanup.


Testing & Code Coverage

Test Coverage Analysis

From the Codecov report, patch coverage is 38.81% with 607 missing lines. Key gaps:

  1. pkg/raft/node.go: Only 12.12% coverage with 174 missing lines

    • Critical paths like Broadcast(), AddPeer(), RemovePeer() need tests
    • Snapshot/restore logic untested
  2. pkg/raft/node_mock.go: 45.40% coverage

    • Generated mock, but consider integration tests
  3. block/internal/syncing/raft_retriever.go: 0% coverage (63 missing lines)

    • Critical component with no tests
    • Need tests for consumeRaftBlock() and error scenarios
  4. node/failover.go: 72.34% coverage (26 missing, 13 partials)

    • Better than most, but missing edge cases
    • Test resource cleanup on various error paths

Test Quality Observations

The existing tests in pkg/raft/election_test.go are well-structured:

  • Use table-driven tests
  • Test error paths and edge cases
  • Use mocks appropriately

Recommendations:

  1. Add integration tests for multi-node scenarios
  2. Test Raft snapshot/restore paths
  3. Add tests for raft_retriever.go
  4. Test bootstrap vs non-bootstrap node behavior
  5. Test network partition scenarios

Security Considerations

Positive Aspects

  1. No Hardcoded Secrets: No credentials or secrets found in the code.

  2. Input Validation: The splitPeerAddr() function validates peer address format and has good test coverage.

  3. Signature Verification: The based sequencer maintains signature verification in the sync path.

Areas to Review

  1. Raft Network Security: The Raft transport uses plain TCP (raft.NewTCPTransport). Consider:

    • Is TLS required for production?
    • Are Raft peers authenticated?
    • Document the security model for the Raft cluster (a hedged TLS-transport sketch follows this list)
  2. Proposer Validation: assertExpectedProposer() checks proposer address, but ensure this can't be bypassed via Raft consensus.

  3. State Rollback Security: The RecoverFromRaft() rollback logic could potentially be exploited. Ensure:

    • Only valid Raft consensus can trigger rollback
    • Rollback depth is bounded
    • State transitions are logged for audit
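On the first point, a hedged sketch of what a TLS-wrapped Raft transport could look like using hashicorp/raft's StreamLayer interface in place of raft.NewTCPTransport; certificate loading and the pool/timeout values are assumptions:

import (
	"crypto/tls"
	"net"
	"os"
	"time"

	"github.com/hashicorp/raft"
)

// tlsStreamLayer wraps Raft peer connections in TLS so traffic is encrypted
// and (with client certificates) mutually authenticated.
type tlsStreamLayer struct {
	net.Listener
	cfg *tls.Config
}

func (t *tlsStreamLayer) Dial(addr raft.ServerAddress, timeout time.Duration) (net.Conn, error) {
	d := &net.Dialer{Timeout: timeout}
	return tls.DialWithDialer(d, "tcp", string(addr), t.cfg)
}

// newTLSTransport is a TLS-secured alternative to raft.NewTCPTransport.
func newTLSTransport(bindAddr string, cfg *tls.Config) (*raft.NetworkTransport, error) {
	ln, err := tls.Listen("tcp", bindAddr, cfg)
	if err != nil {
		return nil, err
	}
	return raft.NewNetworkTransport(&tlsStreamLayer{Listener: ln, cfg: cfg}, 3, 10*time.Second, os.Stderr), nil
}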

Configuration & Documentation

Configuration Completeness

New Raft-specific flags (from config.go:143-162):

  • raft.enable - Enable Raft consensus
  • raft.node_id - Node identifier
  • raft.raft_addr - Raft communication address
  • raft.raft_dir - Data directory
  • raft.bootstrap - Bootstrap flag
  • raft.peers - Peer list
  • raft.snap_count, timeouts, etc.

Good: Comprehensive configuration coverage.

Needs Documentation:

  1. How to configure a 3-node cluster step by step
  2. Relationship between bootstrap and peers flags
  3. What happens during node restart
  4. How to add/remove nodes safely
  5. Recovery procedures when nodes diverge

Code Comments

Strengths:

  • Interfaces are documented
  • Key methods have godoc comments
  • Complex logic (like election.go) has inline comments

Gaps:

  • pkg/raft/node.go needs more comprehensive package documentation
  • Bootstrap workflow needs explicit documentation
  • Leadership transition invariants should be documented

Performance Considerations

Potential Bottlenecks

  1. Raft Apply Serialization: Every block goes through protobuf marshal/unmarshal twice:

    • Once for Raft log entry
    • Once for applying to FSM
    • Consider whether this is acceptable for target throughput
  2. Blocking Broadcast: The Broadcast() call in executor.go:474 blocks until Raft replication completes (with timeout). This directly impacts block production latency.

  3. Snapshot Frequency: SnapCount configuration controls snapshot frequency. Too frequent = I/O overhead, too rare = slow recovery. Needs tuning guidance.

Optimization Opportunities

  1. Batching: Consider batching multiple small state updates into one Raft proposal.

  2. Async Broadcast: Consider making the Raft broadcast async with a confirmation channel, allowing block production to be pipelined (see the sketch after this list).

  3. State Caching: The GetState() method clones the entire state. For high read frequency, consider read-through caching.
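A hedged sketch of the async-broadcast idea from point 2 above; the RaftBroadcaster interface and payload type are stand-ins, not the PR's API:

import "context"

// RaftBroadcaster is an illustrative stand-in for the node's broadcast method.
type RaftBroadcaster interface {
	Broadcast(ctx context.Context, payload []byte) error
}

// broadcastAsync starts replication in the background and returns a channel the
// producer can wait on later, so the next block can be built in the meantime.
func broadcastAsync(ctx context.Context, n RaftBroadcaster, payload []byte) <-chan error {
	done := make(chan error, 1)
	go func() { done <- n.Broadcast(ctx, payload) }()
	return done
}

The trade-off is a more complex failure path: a replication that fails after the fact has to be reconciled, which is exactly the recovery scenario discussed in issue 5.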


API & Interface Design

Strengths

  1. Clean Interfaces: Runnable, Recoverable, RaftNode interfaces are minimal and focused.

  2. Type Safety: Good use of Go generics in pendingBase[T].

  3. Nil Safety: Many methods check if n == nil to handle optional Raft node gracefully.

Suggestions

  1. RaftNode Interface: Consider exposing more observability:

    type RaftNode interface {
        // ... existing methods
        Stats() NodeStats  // cluster size, log index, etc.
        Health() error     // health check
    }
  2. Callback Interface: The SetApplyCallback() method uses a raw channel. Consider:

    type ApplyHandler interface {
        OnApply(msg RaftApplyMsg) error
    }

Recommendations Summary

Must Fix Before Merge

  1. ✅ Fix bootstrap mode restriction (node.go:112-115)
  2. ✅ Fix naked returns in full.go (lines 113-115, 117-119)
  3. 📝 Add integration tests for raft_retriever.go (0% coverage)
  4. 📝 Document bootstrap vs join workflow clearly

Should Fix Soon

  1. Increase test coverage for pkg/raft/node.go (currently 12%)
  2. Consider Raft broadcast failure handling strategy
  3. Add observability metrics for Raft state
  4. Document leadership transition invariants
  5. Review Raft network security model for production

Consider for Follow-up

  1. Make applyCh buffer size configurable
  2. Add Raft cluster management CLI commands (add/remove node)
  3. Implement Raft snapshot compression
  4. Add distributed tracing through Raft consensus path
  5. Performance testing with realistic throughput

Conclusion

This is a well-structured and thoughtful implementation of Raft-based HA. The architecture cleanly separates concerns, provides good recovery mechanisms, and includes reasonable test coverage of the election logic.

The critical issues (bootstrap restriction and naked returns) should be addressed before merge, as they prevent the feature from working correctly in multi-node deployments.

The low test coverage in key areas (pkg/raft/node.go, raft_retriever.go) is concerning for production readiness. While the election tests are good, the lack of integration tests for the core Raft node functionality increases risk.

Overall, this is production-quality code with some rough edges that need smoothing. The team clearly understands distributed systems challenges and has built appropriate safeguards. With the critical fixes and improved test coverage, this will be a solid HA implementation.

Estimated Effort for Critical Fixes: 4-8 hours
Recommended Testing Before Production: Multi-node chaos testing, network partition simulation, leadership churn scenarios


@gemini-code-assist

Summary of Changes

Hello @alpe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Rollkit network's resilience and fault tolerance by integrating the Raft consensus algorithm. The changes enable a robust failover mechanism, allowing nodes to seamlessly transition between leader and follower roles. This ensures continuous operation and consistent state across the network, even if individual nodes experience failures. The update touches various core components, from dependency management and block processing to RPC services and node lifecycle management, all aimed at achieving high availability.

Highlights

  • Raft Consensus Integration: Introduced Raft consensus for high availability, enabling dynamic leader election and state replication. This involves new dependencies on github.com/hashicorp/raft and github.com/hashicorp/raft-boltdb.
  • Node Architecture Refactoring: The node's core logic has been refactored to support a failover mechanism, allowing nodes to dynamically switch between aggregator (leader) and sync (follower) modes based on Raft's leadership status. This includes changes to how P2P clients are initialized and passed to node startup commands.
  • Block Processing with Raft: The block executor now checks Raft leadership before producing new blocks and proposes the block state to the Raft cluster after execution, ensuring state consistency across the cluster.
  • Cache and State Management Enhancements: Added methods to retrieve and persist the last submitted DA header and data heights from the cache. Logic was introduced to skip in-flight (temporary) headers and data during cache retrieval to prevent processing incomplete blocks.
  • New RPC Endpoints and Configuration: Added new RPC endpoints for querying Raft node status (e.g., /raft/node) and introduced new configuration flags for Raft-specific settings like node_id, raft_addr, bootstrap, and peers.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces high availability via RAFT consensus, which is a significant architectural change. The implementation is extensive, touching many parts of the system from node startup to block processing and syncing. Overall, the changes are well-structured, introducing clear interfaces for RAFT integration and refactoring the node components to support dynamic role switching between leader and follower. I've identified a few critical issues related to error handling and startup logic that need to be addressed, along with some medium-severity issues for improving code clarity and maintainability. Great work on this complex feature.

I am having trouble creating individual review comments, so my feedback is collected below.

node/full.go (113-115)

high

The naked return here will not propagate the error from newSingleRoleElector. This will cause the function to return (nil, nil), masking the underlying error. The error should be returned to the caller.

		if leaderElection, err = newSingleRoleElector(leaderFactory); err != nil {
			return nil, err
		}

node/full.go (117-119)

high

Similar to the previous case, the naked return here will swallow the error from newSingleRoleElector. The error should be propagated up to the caller.

		if leaderElection, err = newSingleRoleElector(followerFactory); err != nil {
			return nil, err
		}

pkg/raft/node.go (111-113)

high

This check prevents a node from starting if Bootstrap is false, which is problematic for nodes joining an existing cluster. A new node attempting to join will fail to start. The bootstrap logic should only execute if n.config.Bootstrap is true, and the function should return nil otherwise, allowing non-bootstrap nodes to start and join a cluster.

block/internal/cache/pending_headers.go (69-71)

medium

The method name GetLastSubmittedDataHeight is misleading as it's part of the PendingHeaders struct. For clarity and consistency, it should be renamed to GetLastSubmittedHeaderHeight.

This change will also require updating the call site in block/internal/cache/manager.go.

func (ph *PendingHeaders) GetLastSubmittedHeaderHeight() uint64 {
	return ph.base.getLastSubmittedHeight()
}

block/internal/executing/executor.go (570-572)

medium

The explicit type conversion types.Tx(tx) is redundant since types.Tx is an alias for []byte, and tx is already of type []byte. The change to a direct assignment is good, but it seems this loop could be replaced with a single, more efficient append call.

	data.Txs = append(data.Txs, batchData.Transactions...)

block/internal/syncing/syncer.go (184)

medium

This log message seems to be for debugging purposes, indicated by the +++ prefix. It should be logged at the Debug level instead of Info to avoid cluttering the logs in a production environment.

			s.logger.Debug().Uint64("header_height", state.LastSubmittedDAHeaderHeight).Uint64("data_height", state.LastSubmittedDADataHeight).Msg("received raft block state")

pkg/rpc/server/http.go (143-146)

medium

To adhere to Go's naming conventions for initialisms, the struct field NodeId should be renamed to NodeID.

				NodeID   string `json:"node_id"`
			}{
				IsLeader: raftNode.IsLeader(),
				NodeID:   raftNode.NodeID(),


codecov bot commented Jan 7, 2026

Codecov Report

❌ Patch coverage is 37.70813% with 636 lines in your changes missing coverage. Please review.
✅ Project coverage is 55.67%. Comparing base (453a8a4) to head (490b286).
⚠️ Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| pkg/raft/node.go | 12.12% | 174 Missing ⚠️ |
| pkg/raft/node_mock.go | 45.40% | 74 Missing and 21 partials ⚠️ |
| block/internal/syncing/syncer.go | 13.08% | 91 Missing and 2 partials ⚠️ |
| block/internal/syncing/raft_retriever.go | 0.00% | 63 Missing ⚠️ |
| node/full.go | 32.30% | 37 Missing and 7 partials ⚠️ |
| block/internal/executing/executor.go | 4.54% | 38 Missing and 4 partials ⚠️ |
| node/failover.go | 72.34% | 26 Missing and 13 partials ⚠️ |
| pkg/raft/election.go | 78.72% | 14 Missing and 6 partials ⚠️ |
| pkg/rpc/server/http.go | 6.66% | 13 Missing and 1 partial ⚠️ |
| block/components.go | 27.27% | 7 Missing and 1 partial ⚠️ |

... and 14 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2954      +/-   ##
==========================================
- Coverage   58.74%   55.67%   -3.08%     
==========================================
  Files          93      104      +11     
  Lines        8863    10149    +1286     
==========================================
+ Hits         5207     5650     +443     
- Misses       3067     3857     +790     
- Partials      589      642      +53     
| Flag | Coverage Δ |
| --- | --- |
| combined | 55.67% <37.70%> (-3.08%) ⬇️ |

Flags with carried forward coverage won't be shown.

