dekaf: Get UI working with new materialization types #1270
From my vantage point, Dekaf connectors should supply data to the same columns used for their standard counterparts.

Questions

Reference

The client utilizes the following, non-standard

NOTE: The use of the data provided by many of these columns is expected and self-explanatory. Standard columns defined within the internal Supabase table model and optional columns are not mentioned above.
Alright, now I think we have a pretty good idea of how this will look.

For some background, Dekaf is a new service we're exposing that will let users read their collections' data as if they were served by Kafka. This lets us integrate with a whole bunch of services for the cost of one integration, as opposed to having to write materialization connectors targeting each one.

Because Kafka is designed around a client/server architecture, and we're emulating the server side, we can't easily package Dekaf as a regular materialization connector. Instead, what we've chosen to do is expose it as a standalone service running in each data-plane. Users of Dekaf will connect to it like they would a regular Kafka broker, and will be presented with an environment that looks like regular Kafka, with their existing collections showing up as Kafka topics.

Rather than just exposing every one of your collections as a separate topic, we decided that it would make more sense to model Dekaf usage around the same concept of materializations that we already use to model all of our other materialization connectors. This is useful for a few reasons:
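To make the client-side experience concrete, here's a minimal sketch of what pointing a stock Kafka consumer at Dekaf could look like. Every detail here — the hostname, port, security settings, group ID, and topic name — is a hypothetical placeholder, not the actual Dekaf connection parameters:

```python
# Hypothetical sketch of a Kafka client configuration aimed at Dekaf.
# The address, security settings, and topic name are placeholders; the
# point is only that an ordinary Kafka client config is all a user needs.
consumer_config = {
    "bootstrap.servers": "dekaf.example-dataplane.invalid:9092",  # placeholder address
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "group.id": "example-consumer-group",  # placeholder group
}

# With Dekaf, an existing Flow collection shows up as a topic to subscribe to.
topics = ["acmeCo/example/my-collection"]
```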
So, in order to make this work, we added a new "kind" of materialization. Currently, we have local- and connector-type materializations, so it was pretty straightforward to add dekaf as a new kind. Practically, a dekaf materialization looks something like this:
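The spec example itself isn't reproduced here, but pulling together the shape described elsewhere in this thread (a `dekaf` endpoint type alongside the existing `connector` one, carrying a `variant` tag and a possibly-empty config), a sketch might look something like the following — treat every field name as an assumption rather than the final schema:

```yaml
# Hypothetical sketch only: field names and layout are assumptions,
# not the final dekaf materialization schema.
materializations:
  acmeCo/example/my-dekaf-materialization:
    endpoint:
      dekaf:
        variant: some-variant   # identifies the downstream integration
        config: {}              # the nested endpoint config may be empty
    bindings:
      - source: acmeCo/example/my-collection
        resource: {}
```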
These will be represented in the database just like existing materializations, but with a few important differences. Unlike connector-type materializations, Dekaf doesn't use Docker, so the current behavior of looking up the rows in … This has two consequences:
As far as I know, everything else should be the same. Support for the new materialization endpoint type has been added, and there is test coverage for this new syntax, but other than that, Dekaf itself doesn't actually support it yet, so you're on the bleeding edge. Given that, it would be super nice if we could get support for these new materialization types out behind a feature flag, as that would make testing the whole process end to end much easier once I finish the work on Dekaf to support being configured and authenticated by a materialization. One really nice-to-have here would be the addition of some annotation like
Hmm, I suspect the answer is yes based on how I've seen the UI fill in defaults only once a particular field is edited, but to be clear I would expect this to work the same way all the other connectors do wrt support for
The story here isn't great right now. I'm currently working on support for e2e Dekaf testing, and part of that will be adding Dekaf to the
Today, they are pointing their Kafka consumers at
UI portion of work for: estuary/flow#1622
Notes:

- Need to handle empty materialization endpoint configs (this should work as is)
- We'll have a hard-coded (I think) row in connector tags to represent this
- The endpoint will contain a nested property `dekaf` in the endpoint config that will be empty

Requirements:

- Handle the `dekaf` `variant` tag properly: `endpoint:connector:image` is normal but needs to support `endpoint:dekaf:something`
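The discrimination the requirement describes — a normal `connector` endpoint carrying an `image` versus a `dekaf` endpoint carrying a `variant` — could be sketched with a hypothetical helper like this (the function name and field shapes are illustrative, not the actual UI code):

```python
# Hypothetical helper: classify a materialization endpoint by which nested
# property it carries. Field names mirror the shapes described in this
# issue but are assumptions, not the real schema.
def endpoint_kind(endpoint: dict) -> str:
    if "connector" in endpoint:
        return "connector"  # normal case: endpoint:connector:image
    if "dekaf" in endpoint:
        return "dekaf"      # new case: endpoint:dekaf:<variant>, config may be empty
    raise ValueError(f"unrecognized endpoint shape: {sorted(endpoint)}")

# The dekaf endpoint's nested config is allowed to be empty:
print(endpoint_kind({"connector": {"image": "ghcr.io/example/conn:v1", "config": {}}}))  # connector
print(endpoint_kind({"dekaf": {"variant": "some-variant", "config": {}}}))               # dekaf
```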