
perf(http2): avoid response header reserialization#5085

Open
trivikr wants to merge 1 commit into nodejs:main from trivikr:avoid-response-header-reserialization

Conversation


@trivikr trivikr commented Apr 21, 2026

This relates to...

N/A

Rationale

HTTP/2 responses currently pass header objects through parts of the dispatcher stack, while the responseHeaders: 'raw' path still expects name/value arrays. That mismatch causes unnecessary header reserialization and makes the raw-header path inconsistent across H1 and H2 callers.
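For illustration only (shapes are simplified; names are not undici internals): the HTTP/1 raw path carries headers as a flat array of alternating name/value entries, while node:http2 delivers response headers as a plain object, so code written against one shape cannot consume the other without conversion.

```javascript
// Simplified illustration of the two header shapes; not undici internals.

// HTTP/1 raw path: a flat [name, value, name, value, ...] array of Buffers.
const h1Raw = [
  Buffer.from('content-type'), Buffer.from('text/plain'),
  Buffer.from('content-length'), Buffer.from('2')
]

// HTTP/2: node:http2 emits response headers as a plain object.
const h2Headers = {
  ':status': 200,
  'content-type': 'text/plain',
  'content-length': '2'
}
```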

Changes

  • Stop reserializing HTTP/2 response and upgrade headers into buffer pairs before handing them to request handlers.
  • Extend parseRawHeaders() to accept nullish values and plain header objects, and normalize them into the existing flat raw-header array format.
  • Reuse that normalization in the request, stream, pipeline, connect, and upgrade APIs when responseHeaders: 'raw' is requested.
  • Update fetch response header list construction so it works with either raw header arrays or object-form response headers.
  • Expand types so DispatchController.rawHeaders / rawTrailers can be IncomingHttpHeaders.
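A minimal sketch of the normalization described above, assuming the target format is the flat [name, value, ...] raw-header array; normalizeRawHeaders is a hypothetical stand-in, not undici's actual parseRawHeaders:

```javascript
// Hypothetical sketch, not undici's parseRawHeaders: accept nullish input,
// an already-flat raw array, or a plain header object, and always return
// the flat [name, value, name, value, ...] raw-header array.
function normalizeRawHeaders (headers) {
  if (headers == null) return []
  if (Array.isArray(headers)) return headers // already in raw form
  const raw = []
  for (const [name, value] of Object.entries(headers)) {
    if (Array.isArray(value)) {
      // Repeated headers (e.g. set-cookie) expand to one pair per value.
      for (const v of value) raw.push(name, v)
    } else {
      raw.push(name, String(value))
    }
  }
  return raw
}
```

With something like this, the responseHeaders: 'raw' path could consume one shape regardless of whether the dispatcher produced H1-style pairs or an H2 header object.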

Features

N/A

Bug Fixes

  • Fix responseHeaders: 'raw' for HTTP/2 responses so callers receive normalized raw headers without relying on H1-style buffer pairs.
  • Avoid redundant header conversion work on the HTTP/2 response path.
  • Preserve fetch/WebSocket header list construction when raw headers arrive in object form.

Breaking Changes and Deprecations

N/A

Status


codecov-commenter commented Apr 21, 2026

Codecov Report

❌ Patch coverage is 95.16129% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.14%. Comparing base (5878f54) to head (1272384).

Files with missing lines | Patch % | Lines
lib/web/fetch/index.js   | 90.90%  | 3 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5085   +/-   ##
=======================================
  Coverage   93.13%   93.14%           
=======================================
  Files         110      110           
  Lines       36104    36111    +7     
=======================================
+ Hits        33624    33634   +10     
+ Misses       2480     2477    -3     


Assisted-by: openai:gpt-5.4
Signed-off-by: Kamat, Trivikram <16024985+trivikr@users.noreply.github.com>
@trivikr trivikr force-pushed the avoid-response-header-reserialization branch from 516ad84 to 1272384 on April 26, 2026 16:12
@trivikr trivikr requested review from mcollina and metcoder95 April 26, 2026 16:15

trivikr commented Apr 26, 2026

I ran the existing benchmarks in bench:h2, but couldn't see a significant difference.

Main

$ benchmarks> npm run bench:h2
...
[bench:run:h2] 17429794.56
[bench:run:h2] 14705542.4
[bench:run:h2] 10973610.666666666
[bench:run:h2] 8226152.96
[bench:run:h2] 7671800
[bench:run:h2] ┌─────────┬─────────────────────┬─────────┬────────────────────┬────────────┬─────────────────────────┬─────────────────────────┐
[bench:run:h2] │ (index) │ Tests               │ Samples │ Result             │ Tolerance  │ Difference with Slowest │ Difference with slowest │
[bench:run:h2] ├─────────┼─────────────────────┼─────────┼────────────────────┼────────────┼─────────────────────────┼─────────────────────────┤
[bench:run:h2] │ 0       │ 'undici - dispatch' │ 0       │ 'Errored'          │ 'N/A'      │ 'N/A'                   │                         │
[bench:run:h2] │ 1       │ 'native - http2'    │ 50      │ '5737.30 req/sec'  │ '± 2.93 %' │                         │ '-'                     │
[bench:run:h2] │ 2       │ 'undici - fetch'    │ 20      │ '6800.16 req/sec'  │ '± 2.96 %' │                         │ '+ 18.53 %'             │
[bench:run:h2] │ 3       │ 'undici - pipeline' │ 30      │ '9112.77 req/sec'  │ '± 2.75 %' │                         │ '+ 58.83 %'             │
[bench:run:h2] │ 4       │ 'undici - request'  │ 25      │ '12156.35 req/sec' │ '± 2.76 %' │                         │ '+ 111.88 %'            │
[bench:run:h2] │ 5       │ 'undici - stream'   │ 20      │ '13034.75 req/sec' │ '± 2.87 %' │                         │ '+ 127.19 %'            │
[bench:run:h2] └─────────┴─────────────────────┴─────────┴────────────────────┴────────────┴─────────────────────────┴─────────────────────────┘

This branch

$ benchmarks> npm run bench:h2
...
[bench:run:h2] 16233540.654545454
[bench:run:h2] 14560871.68
[bench:run:h2] 11115562.971428571
[bench:run:h2] 8099668.8
[bench:run:h2] 7945346.4
[bench:run:h2] ┌─────────┬─────────────────────┬─────────┬────────────────────┬────────────┬─────────────────────────┬─────────────────────────┐
[bench:run:h2] │ (index) │ Tests               │ Samples │ Result             │ Tolerance  │ Difference with Slowest │ Difference with slowest │
[bench:run:h2] ├─────────┼─────────────────────┼─────────┼────────────────────┼────────────┼─────────────────────────┼─────────────────────────┤
[bench:run:h2] │ 0       │ 'undici - dispatch' │ 0       │ 'Errored'          │ 'N/A'      │ 'N/A'                   │                         │
[bench:run:h2] │ 1       │ 'native - http2'    │ 55      │ '6160.09 req/sec'  │ '± 2.97 %' │                         │ '-'                     │
[bench:run:h2] │ 2       │ 'undici - fetch'    │ 25      │ '6867.72 req/sec'  │ '± 2.72 %' │                         │ '+ 11.49 %'             │
[bench:run:h2] │ 3       │ 'undici - pipeline' │ 35      │ '8996.40 req/sec'  │ '± 2.65 %' │                         │ '+ 46.04 %'             │
[bench:run:h2] │ 4       │ 'undici - request'  │ 20      │ '12346.18 req/sec' │ '± 2.89 %' │                         │ '+ 100.42 %'            │
[bench:run:h2] │ 5       │ 'undici - stream'   │ 20      │ '12585.98 req/sec' │ '± 2.63 %' │                         │ '+ 104.32 %'            │
[bench:run:h2] └─────────┴─────────────────────┴─────────┴────────────────────┴────────────┴─────────────────────────┴─────────────────────────┘


trivikr commented Apr 26, 2026

I tested with the following temporary harness, which shows a 5-7% improvement (80 rounds x 200 parallel requests).

'use strict'

const http2 = require('node:http2')
const { once } = require('node:events')
const { performance } = require('node:perf_hooks')

const { H2CClient } = require('..')

const runs = parseInt(process.env.RUNS, 10) || 5
const rounds = parseInt(process.env.ROUNDS, 10) || 80
const parallel = parseInt(process.env.PARALLEL, 10) || 200
const warmupRounds = parseInt(process.env.WARMUP_ROUNDS, 10) || 5

const body = Buffer.from('ok')

function formatNumber (n) {
  return n.toLocaleString('en-US', { maximumFractionDigits: 2 })
}

function makeServer () {
  const server = http2.createServer({
    settings: {
      maxConcurrentStreams: parallel
    }
  })

  server.on('stream', (stream) => {
    stream.respond({
      ':status': 200,
      'content-type': 'text/plain',
      'content-length': body.length
    })
    stream.end(body)
  })

  return server
}

function makeRoundRequests (client) {
  const requests = new Array(parallel)

  for (let i = 0; i < parallel; ++i) {
    requests[i] = client.request({
      path: '/',
      method: 'GET'
    }).then(({ body }) => body.dump())
  }

  return Promise.all(requests)
}

async function timeRound (client) {
  const start = performance.now()
  await makeRoundRequests(client)
  return performance.now() - start
}

async function closeClient (client) {
  await new Promise((resolve, reject) => {
    client.close((err) => {
      if (err) {
        reject(err)
        return
      }

      resolve()
    })
  })
}

async function main () {
  const server = makeServer()
  server.listen(0, '127.0.0.1')
  await once(server, 'listening')

  const origin = `http://127.0.0.1:${server.address().port}`
  const clients = new Array(runs)

  for (let i = 0; i < runs; ++i) {
    clients[i] = new H2CClient(origin, {
      maxConcurrentStreams: parallel,
      pipelining: parallel
    })
  }

  try {
    for (let i = 0; i < runs; ++i) {
      for (let j = 0; j < warmupRounds; ++j) {
        await makeRoundRequests(clients[i])
      }
    }

    const results = Array.from({ length: runs }, () => ({
      elapsed: 0,
      requests: 0
    }))

    for (let round = 0; round < rounds; ++round) {
      for (let run = 0; run < runs; ++run) {
        const elapsed = await timeRound(clients[run])
        results[run].elapsed += elapsed
        results[run].requests += parallel
      }
    }

    const rows = results.map((result, i) => {
      const reqSec = result.requests / (result.elapsed / 1000)

      return {
        Run: i + 1,
        Rounds: rounds,
        Requests: result.requests,
        'Elapsed (ms)': formatNumber(result.elapsed),
        'Req/sec': formatNumber(reqSec)
      }
    })

    const avgReqSec = results.reduce((total, result) => {
      return total + result.requests / (result.elapsed / 1000)
    }, 0) / results.length

    console.log(`${runs} interleaved runs, ${rounds} rounds x ${parallel} parallel .request() calls`)
    console.table(rows)
    console.log(`Average of ${runs}: ${formatNumber(avgReqSec)} req/sec`)
  } finally {
    await Promise.all(clients.map(closeClient))
    await new Promise((resolve) => server.close(resolve))
  }
}

main().catch((err) => {
  console.error(err)
  process.exitCode = 1
})

Main

$ benchmarks> node benchmarks/h2-request-harness.js
5 interleaved runs, 80 rounds x 200 parallel .request() calls
┌─────────┬─────┬────────┬──────────┬──────────────┬─────────────┐
│ (index) │ Run │ Rounds │ Requests │ Elapsed (ms) │ Req/sec     │
├─────────┼─────┼────────┼──────────┼──────────────┼─────────────┤
│ 0       │ 1   │ 80     │ 16000    │ '369.18'     │ '43,339.47' │
│ 1       │ 2   │ 80     │ 16000    │ '366.99'     │ '43,598.06' │
│ 2       │ 3   │ 80     │ 16000    │ '364.91'     │ '43,846.41' │
│ 3       │ 4   │ 80     │ 16000    │ '362.38'     │ '44,152.61' │
│ 4       │ 5   │ 80     │ 16000    │ '373.23'     │ '42,868.86' │
└─────────┴─────┴────────┴──────────┴──────────────┴─────────────┘
Average of 5: 43,561.08 req/sec

PR branch

$ benchmarks> node benchmarks/h2-request-harness.js
5 interleaved runs, 80 rounds x 200 parallel .request() calls
┌─────────┬─────┬────────┬──────────┬──────────────┬─────────────┐
│ (index) │ Run │ Rounds │ Requests │ Elapsed (ms) │ Req/sec     │
├─────────┼─────┼────────┼──────────┼──────────────┼─────────────┤
│ 0       │ 1   │ 80     │ 16000    │ '346.15'     │ '46,222.63' │
│ 1       │ 2   │ 80     │ 16000    │ '350.39'     │ '45,662.89' │
│ 2       │ 3   │ 80     │ 16000    │ '345.96'     │ '46,248.54' │
│ 3       │ 4   │ 80     │ 16000    │ '331.03'     │ '48,334.55' │
│ 4       │ 5   │ 80     │ 16000    │ '336.09'     │ '47,605.81' │
└─────────┴─────┴────────┴──────────┴──────────────┴─────────────┘
Average of 5: 46,814.88 req/sec
