fix upstream status flicker and constant status updates #10384
Just so I understand, is this block of code intended to do something similar to https://github.com/solo-io/solo-kit/blob/main/pkg/api/v1/clients/kube/resource_client.go#L477 where we are converting the kubeUpstream?
Why do we just do this for Upstreams and not for all other gloo resources?
Yes, exactly. It is a non-solo-kit-bound way of doing the conversion from the kube type to the solo-kit type.
We do it for the only other gloo type, AuthConfigs, below at https://github.com/solo-io/gloo/pull/10384/files#diff-c658efd653154e7eb520a5ea6c646cdd45af7ed0ac56224360affdf481a562c4R719
Other than that, RLCs are an skv2 type and don't have the same challenges.
do the other resource types below not need this?
see also: https://github.com/solo-io/gloo/pull/10384/files#r1856751038
nit: it seems fragile to assume certain behavior based on the existence of an edge proxy, but this logic will probably be ripped out soon anyway, so it's OK. (Side note: why don't the edge and kube gw translators each read/write their own statuses that don't overlap?)
I agree with your statement, but I actually think it doesn't apply here. This check is effectively saying that the syncer is not going to mark status as Accepted if it's not also going to translate that same resource. The presence of an edge proxy is just the mechanism it uses to know whether it will also do translation on that resource.
This is probably what we need to do if this stays around long term. We have the namespaced status concept that could be applied to this, although it 1. is a bit of a misuse and 2. will tie us further into legacy solo-kit types.
I don't think we can do anything about this right now, but just a note: do you think this could cause more test flakes? Waiting for resource acceptance means the controller has seen the resource (and it's in the input snapshot); just waiting for the resource to exist on the cluster might not give the same guarantee.
Yes, I agree, it's not great.
But if our system is eventually consistent, needing to wait for something to land in the internal snapshot (which is completely opaque to users) is a problem and a smell anyway.
👏