Editorial changes for Event Streams #1099

Open · wants to merge 2 commits into base: main
4 changes: 2 additions & 2 deletions spec/Section 2 -- Language.md
@@ -288,8 +288,8 @@ There are three types of operations that GraphQL models:

- query - a read-only fetch.
- mutation - a write followed by a fetch.
- subscription - a long-lived request that fetches data in response to source
events.
- subscription - a long-lived request that fetches data in response to a
sequence of events over time.

Each operation is represented by an optional operation name and a _selection
set_.
73 changes: 48 additions & 25 deletions spec/Section 6 -- Execution.md
Expand Up @@ -164,7 +164,7 @@ ExecuteMutation(mutation, schema, variableValues, initialValue):

### Subscription

If the operation is a subscription, the result is an event stream called the
If the operation is a subscription, the result is an _event stream_ called the
"Response Stream" where each event in the event stream is the result of
executing the operation for each new event on an underlying "Source Stream".
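
As a non-normative illustration of that relationship from a consumer's point of view, the sketch below assumes the Response Stream is surfaced as an async iterable of standard `{ data, errors }` execution results; the names here are placeholders, not part of the specification.

```ts
// Hypothetical consumer of a Response Stream: one execution result is
// received per event on the underlying Source Stream.
async function consumeResponseStream(
  responseStream: AsyncIterable<{ data?: unknown; errors?: unknown[] }>,
): Promise<void> {
  for await (const result of responseStream) {
    // Each result comes from executing the subscription operation
    // against one new source event.
    console.log("subscription result:", result);
  }
  // Reaching this point means the Response Stream has completed.
}
```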

@@ -217,14 +217,21 @@ chat room ID is the "topic" and each "publish" contains the sender and text.

**Event Streams**

An event stream represents a sequence of discrete events over time which can be
observed. As an example, a "Pub-Sub" system may produce an event stream when
"subscribing to a topic", with an event occurring on that event stream for each
"publish" to that topic. Event streams may produce an infinite sequence of
events or may complete at any point. Event streams may complete in response to
an error or simply because no more events will occur. An observer may at any
point decide to stop observing an event stream by cancelling it, after which it
must receive no more events from that event stream.
:: An _event stream_ represents a sequence of events: discrete emitted values
over time which can be observed. As an example, a "Pub-Sub" system may produce
an _event stream_ when "subscribing to a topic", with a value emitted for each
"publish" to that topic.

An _event stream_ may complete at any point, often because no further events
will occur. An _event stream_ may emit an infinite sequence of values, in which
case it may never complete. If an _event stream_ encounters an error, it must
complete with that error.

An observer may at any point decide to stop observing an _event stream_ by
cancelling it. When an _event stream_ is cancelled, it must complete.

Internal user code may also cancel an _event stream_ for any reason, which
would be observed as that _event stream_ completing.
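
As a non-normative illustration of this definition, the sketch below models an event stream as an async iterable, which is how graphql-js happens to represent them; the `EventStream` class, its method names, and the single-observer simplification are assumptions of this sketch, not part of the specification.

```ts
// Minimal single-observer event stream sketch (illustrative only):
// values are emitted over time, the stream may complete normally or
// with an error, and a cancelled observer receives no further events.
class EventStream<T> implements AsyncIterable<T> {
  private buffer: T[] = [];
  private done = false;
  private error: unknown = undefined;
  private wake: (() => void) | null = null;

  // Emit a discrete value on the stream (e.g. one per "publish").
  emit(value: T): void {
    if (this.done) return;
    this.buffer.push(value);
    this.wake?.();
  }

  // Complete the stream, optionally with an error.
  complete(error?: unknown): void {
    this.done = true;
    this.error = error;
    this.wake?.();
  }

  async *[Symbol.asyncIterator](): AsyncGenerator<T> {
    try {
      while (true) {
        // Deliver any buffered events first.
        while (this.buffer.length > 0) yield this.buffer.shift()!;
        if (this.done) {
          if (this.error !== undefined) throw this.error; // completes with error
          return; // completes normally: no more events will occur
        }
        // Suspend until the next emit() or complete().
        await new Promise<void>((resolve) => (this.wake = resolve));
        this.wake = null;
      }
    } finally {
      // Runs on completion, error, and observer cancellation alike;
      // after this point no further events are delivered.
      this.done = true;
    }
  }
}

// "Subscribing to a topic" would hand the observer such a stream:
const topic = new EventStream<string>();
topic.emit("first publish");
topic.emit("second publish");
topic.complete();
```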

**Supporting Subscriptions at Scale**

@@ -250,8 +257,8 @@ service details should be chosen by the implementing service.

#### Source Stream

A Source Stream represents the sequence of events, each of which will trigger a
GraphQL execution corresponding to that event. Like field value resolution, the
A Source Stream is an _event stream_ representing a sequence of root values,
each of which will trigger a GraphQL execution. Like field value resolution, the
logic to create a Source Stream is application-specific.

CreateSourceEventStream(subscription, schema, variableValues, initialValue):
@@ -268,15 +275,15 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue):
- Let {field} be the first entry in {fields}.
- Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType,
field, variableValues)}.
- Let {fieldStream} be the result of running
- Let {sourceStream} be the result of running
{ResolveFieldEventStream(subscriptionType, initialValue, fieldName,
argumentValues)}.
- Return {fieldStream}.
- Return {sourceStream}.

ResolveFieldEventStream(subscriptionType, rootValue, fieldName, argumentValues):

- Let {resolver} be the internal function provided by {subscriptionType} for
determining the resolved event stream of a subscription field named
determining the resolved _event stream_ of a subscription field named
{fieldName}.
- Return the result of calling {resolver}, providing {rootValue} and
{argumentValues}.
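
As a non-normative sketch of these two algorithms: the specification only requires that the resolver for the subscription's single root field return an event stream of source events. The concrete field name (`newMessage`), its arguments, and the stand-in `chatRoomEvents` generator below are hypothetical application details, not anything defined by GraphQL or graphql-js.

```ts
// Illustrative types only; the spec leaves the resolver mechanism to
// the implementation.
type SourceEvent = unknown;
type FieldStreamResolver = (
  rootValue: unknown,
  argumentValues: Record<string, unknown>,
) => AsyncIterable<SourceEvent>;

// ResolveFieldEventStream(): look up the application-provided resolver
// for the root field and call it with the root value and arguments.
function resolveFieldEventStream(
  resolvers: Record<string, FieldStreamResolver>,
  rootValue: unknown,
  fieldName: string,
  argumentValues: Record<string, unknown>,
): AsyncIterable<SourceEvent> {
  const resolver = resolvers[fieldName];
  if (!resolver) throw new Error(`No event stream resolver for "${fieldName}"`);
  return resolver(rootValue, argumentValues);
}

// Application-specific logic: each "publish" to the chat room becomes
// one source event (one root value per execution).
const resolvers: Record<string, FieldStreamResolver> = {
  newMessage: (_root, args) => chatRoomEvents(String(args.roomId)),
};

// Stand-in for a real pub-sub client, emitting two events and then
// completing, purely so the sketch is self-contained.
async function* chatRoomEvents(roomId: string): AsyncGenerator<SourceEvent> {
  yield { roomId, sender: "alice", text: "hi" };
  yield { roomId, sender: "bob", text: "hello" };
}

// CreateSourceEventStream() would then, in effect, produce:
const sourceStream = resolveFieldEventStream(resolvers, null, "newMessage", {
  roomId: "42",
});
```
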
@@ -287,17 +294,33 @@ operation type.

#### Response Stream

Each event in the underlying Source Stream triggers execution of the
subscription _selection set_ using that event as a root value.
Each event from the underlying Source Stream triggers execution of the
subscription _selection set_ using that event's value as the {initialValue}.

MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues):

- Return a new event stream {responseStream} which yields events as follows:
- For each {event} on {sourceStream}:
- Let {response} be the result of running
{ExecuteSubscriptionEvent(subscription, schema, variableValues, event)}.
- Yield an event containing {response}.
- When {sourceStream} completes: complete {responseStream}.
- Let {responseStream} be a new _event stream_.
- When {sourceStream} emits {sourceValue}:
- Let {response} be the result of running
{ExecuteSubscriptionEvent(subscription, schema, variableValues,
sourceValue)}.
- If internal {error} was raised:
- Cancel {sourceStream}.
- Complete {responseStream} with {error}.
Comment on lines +307 to +309
Collaborator Author:

See the note below, but I checked with graphql-js and this aligns. Request errors occur before field execution begins, which for subscriptions would be before the source stream is created. Then each event through the subscription could produce field errors, but those are handled normally as { data, errors } and shouldn't complete the subscription. The only remaining scenario is some internal exceptional event.

I thought about continuing to leave this omitted, since it makes sense why it was omitted before, but I do agree that this is clearer about what to do in this non-normative scenario, and it will make things clearer in the future if we ever decide to formally add "fatal error" as an error type.

Contributor:

I'm not 100% clear on what kind of internal error is being envisioned here. It seems like it's clearly not a request or field error, as explicitly stated above.

Is this analogous to the case of the graphql-js call to asyncIterator.next() resulting in an exception? I would think that's covered below when discussing "completing in error" but maybe that's not the case?

Of course, anything could error => maybe it just means "something else went wrong"? And we are making how to handle that more explicit in the case of a subscription than we would for our other operations, because it is helpful to clients to understand how this case is handled in the context of the response stream?

Collaborator Author:

@yaacovCR that last bit you said is exactly right. The vast majority of our spec assumes algorithms are pretty typical functions which you call with arguments and which return a value, and the most significant complication is the introduction of async functions (which the spec really leaves mostly unstated). So the novelty here is the fact that this algorithm is not returning the result but returning a stream which will contain results. That's important because for a typical function we can just say that if there is some exceptional error in any given algorithm it should work its way up until something explicitly handles it or it fatals the whole operation - that's how error handling works (in some way or another) in just about every language and environment.

However, it's a lot less obvious what should happen for a stream which encounters an exceptional error after that particular creation algorithm has completed - what is the right behavior? This tries to be explicit to avoid that ambiguity - we should make sure the resulting stream also ends with an error.

> Is this analogous to the case of the graphql-js call to asyncIterator.next() resulting in an exception?

Yes - but specifically inside where it is being mapped. graphql-js handles field errors during execution of a single subscription payload, resulting in a {data: null, errors} normal payload, but other exceptions could trigger this codepath. That's really what's being described here.

- Otherwise emit {response} on {responseStream}.
- When {sourceStream} completes normally:
- Complete {responseStream} normally.
- When {sourceStream} completes with {error}:
- Complete {responseStream} with {error}.
Contributor:

Does this wording imply that when the {responseStream} completes with an error, it must do so in the same manner as the {sourceStream} does? We have a request at graphql-js for emitting well-formatted GraphQLErrors instead of throwing. I would read the specification permissively, as in: the manner in which the {responseStream} completes with said error is up to the implementation, allowing that change.

One could argue instead/additionally that it belongs within the transport protocol specification.

Tagging @leebyron @enisdenjo @aseemk from the linked issue.

Member (@enisdenjo, Oct 14, 2024):

From the SSE perspective, and in the GraphQL over SSE RFC, errors are emitted over the stream as an event.

This is because the browser-native EventSource will not report non-200 responses back to the user in detail - it'll just say "error" - leaving the user in the dark. Furthermore, if the error occurs after a while during the stream, the connection is already established and the only way to report an issue is in the form of an event.

The same applies to WebSockets.

Member:

I agree that for normal errors in sourceStream we should capture them and emit an { errors: [ ... ] } payload rather than terminating with an error.

Suggested change (replacing "- Complete {responseStream} with {error}."):

- Let {errors} be a list containing {error}.
- Let {response} be an unordered map containing {errors}.
- Emit {response} on {responseStream}.
- Complete {responseStream} normally.

However, this is a change in behavior (not an editorial change like the rest of this PR), so perhaps this should be raised after this editorial-only change is merged - see #1127 for a suitable follow-up PR.

- When {responseStream} is cancelled:
- Cancel {sourceStream}.
- Complete {responseStream} normally.
- Return {responseStream}.

Note: Since {ExecuteSubscriptionEvent()} handles all _field errors_, and
_request errors_ only occur during {CreateSourceEventStream()}, the only
remaining error conditions from {ExecuteSubscriptionEvent()} are internal
exceptional errors not described by this specification.
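
The mapping above translates naturally into an async generator; the non-normative sketch below assumes that representation. `executeSubscriptionEvent` is a stand-in parameter for {ExecuteSubscriptionEvent()} and is assumed to return a `{ data, errors }` result with field errors already captured, so the only exceptions that reach this code are the internal exceptional errors the note describes.

```ts
interface ExecutionResult {
  data?: unknown;
  errors?: unknown[];
}

// Non-normative sketch of MapSourceToResponseEvent().
async function* mapSourceToResponseEvent(
  sourceStream: AsyncIterable<unknown>,
  executeSubscriptionEvent: (sourceValue: unknown) => Promise<ExecutionResult>,
): AsyncGenerator<ExecutionResult> {
  const source = sourceStream[Symbol.asyncIterator]();
  try {
    while (true) {
      // If the source stream completes with an error, next() rejects
      // and the response stream completes with that error.
      const { value: sourceValue, done } = await source.next();
      if (done) return; // source completed normally: complete normally
      // Field errors are inside `response.errors`; an internal
      // exception thrown here propagates out of the generator,
      // completing the response stream with that error.
      const response = await executeSubscriptionEvent(sourceValue);
      yield response; // emit {response} on the response stream
    }
  } finally {
    // Runs when the response stream is cancelled, completes, or
    // errors: also cancel (release) the source stream.
    await source.return?.();
  }
}
```

Under the suggestion in the thread above (and the follow-up in #1127), the source-error path would instead emit one final `{ errors: [error] }` response and complete normally, rather than letting the error propagate.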

ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue):

@@ -317,9 +340,9 @@ Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to
#### Unsubscribe

Unsubscribe cancels the Response Stream when a client no longer wishes to
receive payloads for a subscription. This may in turn also cancel the Source
Stream. This is also a good opportunity to clean up any other resources used by
the subscription.
receive payloads for a subscription. This in turn also cancels the Source
Stream, which is a good opportunity to clean up any other resources used by the
subscription.
Member:

This is a concrete change from "may" cancel the source stream to "does" cancel the source stream.

We create the source stream in the CreateSourceEventStream algorithm, so it is definitely our responsibility to also ensure it is released (completed). This new wording covers that; the old wording was essentially incorrect. I support this change.

Collaborator Author:

Exactly, the previous wording was too loose. Ultimately this is not observable behavior from an API consumer's point of view, so it's also not spec-enforceable, but I think this language is much clearer. If a resulting stream consumer cancels, then it's certainly the case that the source stream must be made aware of that.

Extra unnecessary detail in case it ever comes up: if an implementer doesn't want to cancel the source stream, they should be using an intermediate like tee.
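
A minimal sketch of the point being made here, assuming the Response Stream is the async generator from the mapping sketch earlier: cancelling it runs that generator's cleanup, which in turn cancels the Source Stream. The `unsubscribe` helper is hypothetical.

```ts
// Non-normative sketch of Unsubscribe(): cancel the Response Stream
// when the client no longer wishes to receive payloads. For an async
// generator, cancellation is `return()`, which resumes its `finally`
// block and thereby cancels the underlying Source Stream and releases
// any other resources held by the subscription.
async function unsubscribe(
  responseStream: AsyncGenerator<unknown>,
): Promise<void> {
  await responseStream.return(undefined);
}
```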


Unsubscribe(responseStream):
