Backend Connectivity Issues #470
I tested around and installed Envoy Gateway and some other Gateway API / Ingress controllers, and they all worked, so this is probably not Envoy's fault.
mhofstetter added a commit to mhofstetter/proxy that referenced this issue on Mar 26, 2024
Currently, interaction with BPF maps via syscalls (open, lookup) might result in log messages of the following form, where the error detail is `Success`:

```
[info][filter] [cilium/conntrack.cc:229] cilium.bpf_metadata: IPv4 conntrack map global lookup failed: Success
```

This is because BPF maps are accessed in the starter process. Hence, the syscalls are executed in that separate process, and the variable `errno` is never set in the Envoy process where the log is written. Therefore, this commit fixes the error propagation by setting `errno` after retrieving the response from the privileged client that performs the call to the starter process.

Fixes: cilium#315
Fixes: cilium#470
Signed-off-by: Marco Hofstetter <[email protected]>
mhofstetter added a commit to mhofstetter/proxy that referenced this issue on Mar 26, 2024
mhofstetter added a commit to mhofstetter/proxy that referenced this issue on Mar 27, 2024
github-merge-queue bot pushed a commit that referenced this issue on Apr 2, 2024
mhofstetter added a commit to mhofstetter/proxy that referenced this issue on Apr 3, 2024
jrajahalme pushed a commit that referenced this issue on Apr 3, 2024
I've been having issues with Envoy Gateway and Ingress with Cilium for some months now. I'm not sure whether it's Cilium's or Envoy's fault: Envoy logs "conntrack lookup failed: Success" on every request (every request returns an error because no response is received from the backend), but it's only an info-level log.
My issue in cilium/cilium with more details:
cilium/cilium#29406