
feat(v17): Implement tracing channels#4670

Open
logaretm wants to merge 47 commits into graphql:17.x.x from logaretm:tracing-channel-support

Conversation

@logaretm

@logaretm logaretm commented Apr 18, 2026

This PR implements tracing channel support as discussed in #4629

graphql-js auto-resolves node:diagnostics_channel at module load via process.getBuiltinModule, with a require fallback for Node 20.0–20.15. On runtimes where node:diagnostics_channel is unavailable (e.g. browsers) the load silently no-ops and emission sites short-circuit. APMs subscribe purely on the subscriber side — no interaction with the graphql API required.

Most of the changes are tests; the actual source changes are mostly tame and introduce wrappers around pre-existing code paths.

API

graphql-js vendors a minimal structural subset of the tracing channel types for internal use. Ideally these wouldn't include deep specifics like runStores, but that surface is required to maintain the sync-or-async behavior we have today. The alternative is to normalize all calls to async, which isn't ideal and would likely have perf implications.
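As a rough illustration of how small that structural subset can be, here is a minimal fake channel in the spirit of the fakeDiagnosticsChannel test utility this PR adds (the name and exact shape are assumptions, not the PR's actual code):

```javascript
// Hypothetical duck-typed channel; the PR's vendored types and test
// fake may differ in shape.
function makeFakeChannel(name) {
  const subscribers = [];
  return {
    name,
    get hasSubscribers() {
      return subscribers.length > 0;
    },
    subscribe(onMessage) {
      subscribers.push(onMessage);
    },
    publish(message) {
      for (const onMessage of subscribers) onMessage(message, name);
    },
    // runStores is the member the PR keeps so AsyncLocalStorage bindings
    // still propagate; this fake just publishes and runs the callback
    // synchronously, preserving the sync-or-async return value.
    runStores(message, fn, thisArg, ...args) {
      this.publish(message);
      return fn.call(thisArg, ...args);
    },
  };
}
```

A real subscriber never sees this shape directly; it only matters for what graphql-js calls internally on whatever diagnostics module it resolved.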

The implemented channels are:

| Channel | Context fields |
| --- | --- |
| `graphql:parse` | source |
| `graphql:validate` | schema, document |
| `graphql:execute` | schema, document, variableValues, operationName (lazy), operationType (lazy) |
| `graphql:execute` | schema, operation, variableValues, operationName, operationType |
| `graphql:subscribe` | schema, document, variableValues, operationName (lazy), operationType (lazy) |
| `graphql:resolve` | fieldName, parentType, fieldType, args, isTrivialResolver, fieldPath (lazy) |

Usage

APMs and users subscribe by channel name on their own node:diagnostics_channel import — no graphql API call needed:

```js
import dc from 'node:diagnostics_channel';

// Subscribe to any of the 5 tracing channels; an APM would typically
// subscribe to all of them.
dc.tracingChannel('graphql:resolve').subscribe({
  start: () => {
    // Start a span, log, etc.
  },
  end: ({ result }) => {
    // If `result` is present the resolver completed synchronously;
    // otherwise end the span in `asyncEnd`.
  },
  asyncEnd: ({ result }) => {
    // End the span.
  },
  error: ({ error }) => {
    // Report resolver errors.
  },
});
```

I tested this locally in a dashboard, and with a basic Sentry integration it produces a waterfall structure like the following:

[Screenshot: trace waterfall with nested spans]

You can view that trace here.

closes #4629

Copilot AI review requested due to automatic review settings April 18, 2026 04:22
@vercel

vercel Bot commented Apr 18, 2026

@logaretm is attempting to deploy a commit to the The GraphQL Foundation Team on Vercel.

A member of the Team first needs to authorize it.

@linux-foundation-easycla

linux-foundation-easycla Bot commented Apr 18, 2026

CLA Signed

The committers listed above are authorized under a signed CLA.


Copilot AI left a comment


Pull request overview

This PR introduces opt-in diagnostics tracing channels for GraphQL core operations while keeping the codebase runtime-agnostic (no direct node:diagnostics_channel dependency) by requiring consumers to register a compatible module via enableDiagnosticsChannel().

Changes:

  • Added src/diagnostics.ts with minimal duck-typed interfaces and maybeTrace* helpers, plus public exports from src/index.ts.
  • Wrapped parse, validate, execute/subscribe, and field resolution to emit graphql:* tracing channel lifecycles with lazy context fields where appropriate.
  • Added unit tests (via a fake diagnostics channel) and a Node integration test project to validate real node:diagnostics_channel behavior.

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| `src/diagnostics.ts` | New core diagnostics/tracing channel registration and trace helper implementations. |
| `src/index.ts` | Exposes `enableDiagnosticsChannel` and related public types from the package entrypoint. |
| `src/language/parser.ts` | Wraps `parse()` with tracing emission. |
| `src/validation/validate.ts` | Wraps `validate()` with tracing emission. |
| `src/execution/execute.ts` | Wraps `execute()`/`subscribe()` and related execution entrypoints with tracing + lazy operation metadata. |
| `src/execution/Executor.ts` | Wraps field resolver invocation to emit `graphql:resolve` lifecycle events with lazy `fieldPath`. |
| `src/__testUtils__/fakeDiagnosticsChannel.ts` | Adds a fake tracing channel implementation to support deterministic unit tests. |
| `src/__tests__/diagnostics-test.ts` | Verifies registration behavior and channel identity semantics. |
| `src/language/__tests__/parser-diagnostics-test.ts` | Validates `graphql:parse` lifecycle emission. |
| `src/validation/__tests__/validate-diagnostics-test.ts` | Validates `graphql:validate` lifecycle emission, including error paths. |
| `src/execution/__tests__/execute-diagnostics-test.ts` | Validates `graphql:execute` lifecycle emission for sync/async/error paths. |
| `src/execution/__tests__/subscribe-diagnostics-test.ts` | Validates `graphql:subscribe` emission for sync vs. async subscription setup. |
| `src/execution/__tests__/resolve-diagnostics-test.ts` | Validates `graphql:resolve` emission for sync/async/error and lazy `fieldPath`. |
| `resources/integration-test.ts` | Adds the new diagnostics integration test project to the integration suite. |
| `integrationTests/diagnostics/test.js` | Real Node integration tests for channel lifecycles and ALS propagation. |
| `integrationTests/diagnostics/package.json` | Adds diagnostics integration test project metadata and a node engine constraint. |


@logaretm logaretm force-pushed the tracing-channel-support branch from 5333b30 to faf327b Compare April 18, 2026 04:30
@yaacovCR
Contributor

The implemented channels are...

Please excuse my limited understanding of the current tracing space, but is there a concept of nesting of channels? In theory, I would expect we would want to:

(1) nest the "execute" channel for subscription events under "subscribe" and
(2) nest "resolve" under "execute"

Naively, I would assume this would be very useful information to understanding the hierarchy.

And I would think this would provide a way to pass the overarching context of the subscribe operation to execute (i.e. execute would always have the document, as in your description of the channels/context within the linked issue #4629)?

And similarly, we could also have the overall execute (or subscribe/execute) passed to resolve?

@logaretm
Author

@yaacovCR Happy to elaborate more!

is there a concept of nesting of channels?

From a purely diag/tracing channels API perspective, there's no parent/child relationship in the API. Each channel is independent. graphql:execute and graphql:resolve don't know about each other at the module level.

That said, the hierarchy you're describing does exist at runtime; it's just reconstructed on the subscriber side rather than on the emitter side. The mechanism is AsyncLocalStorage:

You can check the image in the PR description to see what it looks like, or this trace view, where the spans are nested under one another.

This works because tracing channels propagate the async execution context (unlike plain event emitters). Subscribers bind an AsyncLocalStorage to the channel, and every downstream channel fired under that sees the same store via async_hooks. Think of it as an implicit requestId threaded through by the runtime.

It's why I kept the runStores wrapper in the types and impl even after dropping tracePromise. TracingChannel.traceSync/tracePromise wrap runStores for you, but we roll the lifecycle manually here to handle the mixed sync-or-promise return path cleanly, so we need to call runStores ourselves to preserve ALS propagation.


Now, for graphql as the emitter, there are a few reasons not to encode the hierarchy ourselves:

  • graphql-js wouldn't know whether the channels are nested unless we implemented tracking logic or used ALS ourselves, and ALS is strictly a Node API that isn't polyfillable.
  • We'd have to track the hierarchy unconditionally, even when nothing is subscribed, because we don't know at emit time whether a subscriber cares. That defeats the hasSubscribers short-circuit we use on the hot paths today.
  • Subscribers can attach to one channel, several, or all of them. If graphql baked the hierarchy into the ctx object, we'd risk breaking it for those partial subscriptions.

The good news: subscribers have access to document and other parent event objects naturally through their own ALS if they want them; graphql doesn't have to thread anything, and subscribers get it for free.

Here is a quick example snippet:

```js
import { AsyncLocalStorage } from 'node:async_hooks';
import dc from 'node:diagnostics_channel';
import { enableDiagnosticsChannel } from 'graphql';

enableDiagnosticsChannel(dc);

const executeCtx = new AsyncLocalStorage();

const executeChannel = dc.tracingChannel('graphql:execute');
const resolveChannel = dc.tracingChannel('graphql:resolve');

// Bind on `start` only; the store activates when execute's runStores frame
// opens and stays active through the entire lifecycle. `end`, `asyncStart`,
// `asyncEnd`, and `error` all fire inside that frame (sync handlers directly,
// async ones via async_hooks propagation), so they inherit the same store
// without any extra wiring.
executeChannel.start.bindStore(executeCtx, ctx => ({
  document: ctx.document,
  operationName: ctx.operationName,
  startedAt: Date.now(),
}));

// Subscribe to the full execute lifecycle. Every handler sees the same store.
executeChannel.subscribe({
  start: () => {
    const { operationName } = executeCtx.getStore();
    console.log(`[execute:start] ${operationName}`);
  },
  end: () => {
    const { operationName, startedAt } = executeCtx.getStore();
    console.log(`[execute:end]   ${operationName} (+${Date.now() - startedAt}ms sync)`);
  },
  asyncStart: () => {
    const { operationName } = executeCtx.getStore();
    console.log(`[execute:asyncStart] ${operationName}`);
  },
  asyncEnd: () => {
    const { operationName, startedAt } = executeCtx.getStore();
    console.log(`[execute:asyncEnd]   ${operationName} (+${Date.now() - startedAt}ms total)`);
  },
  error: ({ error }) => {
    const { operationName } = executeCtx.getStore();
    console.log(`[execute:error] ${operationName} -> ${error.message}`);
  },
});

// Nested channels inherit the parent's store too, because resolve fires
// beneath execute's runStores frame.
resolveChannel.subscribe({
  start: resolveCtx => {
    const parent = executeCtx.getStore();
    console.log(`  [resolve:start] ${resolveCtx.fieldPath} (under ${parent.operationName})`);
  },
  asyncEnd: resolveCtx => {
    const parent = executeCtx.getStore();
    console.log(`  [resolve:asyncEnd] ${resolveCtx.fieldPath} (under ${parent.operationName})`);
  },
  error: ({ error }) => {
    const parent = executeCtx.getStore();
    console.log(`  [resolve:error] under ${parent.operationName}: ${error.message}`);
  },
});
```

@yaacovCR
Contributor

@yaacovCR Happy to elaborate more!

This was hugely helpful. On this background, I am wondering whether instead of:

| Channel | Context fields |
| --- | --- |
| `graphql:parse` | source |
| `graphql:validate` | schema, document |
| `graphql:execute` | schema, document, variableValues, operationName (lazy), operationType (lazy) |
| `graphql:subscribe` | schema, document, variableValues, operationName (lazy), operationType (lazy) |
| `graphql:resolve` | fieldName, parentType, fieldType, args, isTrivialResolver, fieldPath (lazy) |

Maybe something like the below would be better for our official tracing channels:

| Channel | Context fields |
| --- | --- |
| `graphql:parse` | source |
| `graphql:validate` | schema, document |
| `graphql:execute` | schema, document, variableValues, operationName (lazy), operationType (lazy) |
| `graphql:executeQueryMutationOrSubscriptionEvent` | schema, operation, variableValues, operationName, operationType |
| `graphql:resolve` | fieldName, parentType, fieldType, args, isTrivialResolver, fieldPath (lazy) |

The idea would be that subscribe is not really "a thing" according to the spec, and in theory our separate execute and subscribe endpoints are just a way to keep track of the relationship between execute and executeSubscriptionEvent. This would nest executeQueryMutationOrSubscriptionEvent under every execute call for queries, mutations, and subscriptions, with one executeQueryMutationOrSubscriptionEvent for each query/mutation execute, and a variable number of executeQueryMutationOrSubscriptionEvent for a subscription execute.

The downside is you can no longer top-level distinguish between subscriptions and queries/mutations. Do we want to preserve that? Do we want to top-level distinguish between each of our operations? What about between queries and mutations?

Again, I am asking these questions with very little knowledge of how the tracing system is used in practice, hopefully you can guide me here?

Looping in @graphql/tsc if they have any thoughts.

@logaretm
Author

@yaacovCR

Maybe something like the below would be better for our official tracing channels:

We could do graphql:operation as the top level, with graphql:execute firing once for queries/mutations and N times per subscription. Does that track, or do we want the lengthy name instead? 😅

The downside is you can no longer top-level distinguish between subscriptions and queries/mutations...

The distinction can already be made from the context sent to the execute channel via the operationType property we have right now. Channel-level distinction is unlikely to be worth it given how low the overhead is.
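For example, a subscriber-side sketch (emission is simulated here with traceSync rather than going through graphql-js, so the context object is illustrative):

```javascript
import dc from 'node:diagnostics_channel';

// The subscriber distinguishes operations via the operationType field.
const tc = dc.tracingChannel('graphql:execute');
const seen = [];
tc.subscribe({
  start: (ctx) => {
    seen.push(
      ctx.operationType === 'subscription' ? 'subscription-span' : 'operation-span',
    );
  },
});

// Simulate what the emitter would publish; traceSync is the real dc API.
tc.traceSync(() => {}, { operationType: 'query' });
tc.traceSync(() => {}, { operationType: 'subscription' });
// seen: ['operation-span', 'subscription-span']
```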

I initially added a subscribe channel because it felt like its own thing, but it is better to name channels from GQL's POV, like "I'm doing this thing right here....".

What do u think?

@yaacovCR
Contributor

What do u think?

Naming is so hard. I definitely like shorter, but I do prefer consistency with the other verbs we have for our channels.

We could consider:

  • execute vs. executeQueryMutationOrSubscriptionEvent (not great if another operation type lands)
  • operation vs. execute (not great as operation = noun)
  • execute vs. operation (ha! both are not great as it's not clear really which should be the wrapper, although this is incorrect as the nested executeSubscriptionEvent is not an operation, it's more like a sub-operation which is not a thing.)
  • executeRequest vs. executeOperation (not correct, a subscription operation does not contain multiple underlying operations, as above)
  • executeRequest vs. executeRootSelectionSet (correct and technically disambiguated, as when we execute a single root field to create the source event stream for a subscription, we do not [currently] allow for merging multiple fields by utilizing multiple root fields, so on the subscription side it is [presently] executeSingleAllowedRootField and if we ever change that we could just call that part createSourceEventStream)

I am leaning toward the last option, again looping in better namers @graphql/tsc @graphql/tsc-emeriti

@yaacovCR
Contributor

@logaretm

From my machine on 17.x.x I ran:

```sh
npm run benchmark -- benchmark/introspectionFromSchema-benchmark.js benchmark/list-sync-benchmark.js benchmark/list-async-benchmark.js benchmark/list-asyncIterable-benchmark.js benchmark/object-async-benchmark.js --revs pr/4670 17.x.x
```

And got:

[Screenshot: benchmark results]
  1. On our real-world large sync query we got a 7% slowdown. This is the most concerning result.
  2. On a synthetic sync query with a lot of list items we got a 23% slowdown, also concerning, although the query is artificial.
  3. On synthetic async queries, we were at parity or faster, somehow thanks to this PR and presumably v8 magic!

Not sure if you have any insight. I would assume most of this is secondary to graphql:resolve. Ideally we would of course have no slowdowns at all, at least on these benchmarks when tracing is not enabled.

@logaretm
Author

@yaacovCR I will rework the PR to use the autoloading snippet, and I will dig into the benchmarks and see what I can do. Ideally this should offer close to zero slowdown, so I must be doing something wrong.

@yaacovCR
Contributor

  • executeRequest vs. executeRootSelectionSet (correct and technically disambiguated, as when we execute a single root field to create the source event stream for a subscription, we do not [currently] allow for merging multiple fields by utilizing multiple root fields, so on the subscription side it is [presently] executeSingleAllowedRootField and if we ever change that we could just call that part createSourceEventStream)

I am leaning toward the last option, again looping in better namers @graphql/tsc @graphql/tsc-emeriti

Although maybe we should do instead:
graphql:execute and graphql:execute:rootSelectionSet
or even
graphql:execute and graphql:execute:root-selection-set

is there a convention in the diagnostic channels world?

@logaretm
Author

@yaacovCR Not a strict convention, but this is what is currently recommended.

For the names, I think we can keep the inner one called graphql:execute. For the outer one, how about:

  • graphql:request
  • graphql:run
  • graphql:process
  • graphql:handle

Naming is hard 😅

@yaacovCR
Contributor

Not necessarily related, but possibly, what would a trace look like for incremental delivery?

Options abound because of the disconnect between execution groups and delivery groups.

@logaretm logaretm force-pushed the tracing-channel-support branch from a001dd3 to 03c9fea Compare April 24, 2026 14:17
@logaretm
Author

logaretm commented Apr 24, 2026

@yaacovCR So I pushed a couple of optimizations; after running the benchmark you gave me, I think the overhead is down significantly. It was at 6% on my machine at the time of your post, and it's now down to 2.3%-3%.

Can you have a go and see whether that's acceptable? I have more ideas, but they would require juggling alternate Executor class implementations.


Not necessarily related, but possibly, what would a trace look like for incremental delivery?

Right now they aren't handled: any deferred executions will be orphaned (it's up to the APM how to deal with that), though they will still fire the resolve channels.

So it would look like this:

```text
  time ──────────────────────────────────────────────────────►

  execute   ◄────── runStores frame #1 ──────►
  ├─ resolve fieldA        (#2, parented to #1)  ✓
  ├─ resolve fieldB        (#3, parented to #1)  ✓
  └─ returns {initialResult, subsequentResults}
                      ▲ frame #1 ends here ─ generator has not run yet

          ─── gap: framework owns the generator ───

           await subsequentResults.next()   (fresh async context, no ALS)

  resolve fieldC (@defer)  (#4, ⚠ ORPHANED — no parent span)
  resolve fieldD (@stream) (#5, ⚠ ORPHANED — no parent span)
```

I think we can keep the current PR scoped to this at the moment, and I can try a follow up if this one goes through. I'm thinking we can add another :group channel, so it looks like:

```text
  time ──────────────────────────────────────────────────────►

  execute   ◄────── runStores frame #1 ──────►
  ├─ resolve fieldA        (#2, parented to #1)  ✓
  ├─ resolve fieldB        (#3, parented to #1)  ✓
  └─ returns {initialResult, subsequentResults}
                      ▲ frame #1 ends

          ─── gap: framework drives the generator ───

           await subsequentResults.next()

  execute:group 'slowDefer'  ◄── runStores frame #4 ──►
  └─ resolve fieldC (@defer) (#5, parented to #4)  ✓

  execute:group 'feed@stream' ◄── runStores frame #6 ──►
  └─ resolve fieldD (@stream) (#7, parented to #6)  ✓
```

The group events look like extra spans, but their real job is to keep an ALS frame alive while the deferred resolvers run, so anything the APM bound stays readable. I'll verify end-to-end in the follow-up.

@logaretm logaretm force-pushed the tracing-channel-support branch 2 times, most recently from 183bb06 to bbddc0c Compare April 24, 2026 15:34
@logaretm
Author

Squeezed out a bit more performance with the last couple of commits 🙌. For the channel names, I'm happy to pick whatever works for us to make this ready to go.

@yaacovCR yaacovCR force-pushed the tracing-channel-support branch 3 times, most recently from db814a5 to 3d1de2a Compare April 26, 2026 14:19

@Qard Qard left a comment


I'm the creator of diagnostics_channel, for those that don't know me. 👋🏻

Generally, LGTM. I'd like to know more about the exact timing on resolve tracing. Can those be effectively linked together via channel.bindStore(...) triggering nested resolvers within the scope of their parent?

@logaretm
Author

@Qard ATM nested resolvers are not linked together as children; they execute and appear as siblings. However, they all appear as children of the execute channel invocation.

[Screenshot: trace with resolver spans as siblings under execute]

I realize this isn't the ideal outcome; I will experiment with expanding the runStores callback to include resolve child recursion and see how it fares in benchmarks.

@Qard

Qard commented Apr 27, 2026

Ideally we'd want that causality. Might be sufficient to pass the parent resolver info into the child events, as that is passed into the resolvers themselves. 🤔

@logaretm
Author

logaretm commented Apr 27, 2026

@Qard I widened the runStores frame to include the nested resolvers, and got something looking like this:

[Screenshot: trace with resolver spans nested under their parent resolvers]

Perf-wise, with no subscribers it matches the current state (all near 0% diff), so it didn't introduce cold paths. Comparing it against the current impl (no nesting) with subscribers attached, the numbers are very similar:

| Benchmark | Δ |
| --- | --- |
| Introspection | −0.2% |
| Sync list | −1.0% |
| Async list | −1.8% |
| Async iterable | −2.8% |
| Object-async | −6.8% |

So only one case sees a slowdown, which I think would come out in the wash once an APM does anything with the events anyway. We basically get it for free here. WDYT? Worth including, or should we let the subscriber stitch it together? (They can, with the path information in each call.)
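For illustration, a subscriber could stitch nesting itself from fieldPath strings. This assumes a dotted path format like 'user.posts.0.title' purely for the sketch; the PR's actual fieldPath serialization may differ:

```javascript
// Hypothetical helper: derive the parent resolver's path from a dotted
// fieldPath, skipping numeric list indices. The path format is assumed.
function parentFieldPath(fieldPath) {
  const parts = fieldPath.split('.');
  do {
    parts.pop();
  } while (parts.length > 0 && /^\d+$/.test(parts[parts.length - 1]));
  return parts.length > 0 ? parts.join('.') : null;
}

parentFieldPath('user.posts.0.title'); // 'user.posts'
parentFieldPath('user');               // null (root field)
```

An APM could key spans by path and parent each resolve span under the span stored for its parent path, falling back to the execute span at the root.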

logaretm and others added 25 commits April 27, 2026 20:21
…aceSync

The traced branch of `executeIgnoringIncremental` was not exercised by
any subscriber-attached test, tripping the 100% coverage threshold on
CI. Add a case in execute-diagnostics-test that subscribes to
`graphql:execute` and asserts start/end around a call.

Also drop the `traceSync` helper from diagnostics.ts; every call site
invokes `channel.traceSync(...)` directly, so the export was dead code.

Mark the `require('node:diagnostics_channel')` CJS fallback and the
enclosing `catch` block with c8 ignore comments — they are only
reachable on Node 20.0-20.15 (pre-`getBuiltinModule`) and on runtimes
without `diagnostics_channel` at all, neither of which the unit tests
run on.
Hoist the resolve tracing decision once per `executeFields` /
`executeFieldsSerially` call instead of re-checking
`shouldTrace(resolveChannel)` per field, and move the traced branch's
`traceMixed` call + ctx closure into a module-scope helper
`invokeResolverWithTracing`. The latter is the load-bearing change:
once `executeField`'s body referenced `traceMixed` and the
`() => resolveFn(...)` closure, V8 stopped inlining `executeField` into
`executeFields`, paying a real call frame per field. Extracting the
two-function tail keeps `executeField`'s bytecode small enough to
inline again.

Effect on the introspection benchmark (the worst-case fan-out shape,
~1000 resolver calls per iteration, no subscribers attached):

- with the inline ternary: -5.94% vs upstream
- with this refactor:     -2.05% / -2.30% across two paired runs

Other benchmarks (list-sync, list-async, list-asyncIterable,
object-async) sit at the noise floor with 95% CIs that cross zero.

Adds a serial-mutation case to resolve-diagnostics-test to cover the
`executeFieldsSerially` branch the hoisted snapshot now lives in.
also combine execute/subscribe
…e the original passed values, not the internal type
now that the files are longer... :)
Bun's node:diagnostics_channel ships TracingChannel but does not expose the
aggregate hasSubscribers getter, so a subscriber added via
tracingChannel.subscribe(handlers) is invisible to the previous
?.hasSubscribers gate and graphql-js silently drops every event.

Introduce a shouldTrace helper that trusts the aggregate when present and
falls back to checking each of the five underlying lifecycle channels when
it is undefined. Route every emission gate (parse/validate/execute/
subscribe/resolve) through it.
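The gate described in this commit message might look roughly like the following (a reconstruction from the description above, not the PR's actual source):

```javascript
// Lifecycle channels a TracingChannel always carries.
const LIFECYCLE_EVENTS = ['start', 'end', 'asyncStart', 'asyncEnd', 'error'];

// Trust the aggregate hasSubscribers when the runtime defines it (Node);
// otherwise probe each lifecycle channel individually (Bun omits the
// aggregate getter, so the aggregate reads as undefined there).
function shouldTrace(tracingChannel) {
  if (tracingChannel == null) return false;
  const aggregate = tracingChannel.hasSubscribers;
  if (aggregate !== undefined) return aggregate;
  return LIFECYCLE_EVENTS.some(
    (event) => tracingChannel[event]?.hasSubscribers === true,
  );
}
```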
V8 inlines executeField slightly better when the never-taken-on-Node
branch is the trailing fallback rather than an early-returning if body.
The flipped form cost ~5% on introspection; the original ordering plus
c8 ignore start/stop over the Bun-only branch keeps both 100% coverage
and the perf characteristics.
@yaacovCR yaacovCR force-pushed the tracing-channel-support branch from 50ea29c to 78f3c31 Compare April 27, 2026 17:21