THREESCALE-10582 fix integration of upstream connection policy with camel policy #1443
Conversation
```
@@ -202,10 +202,8 @@ ETag: foobar
--- error_code: 200
--- error_log env
proxy request: CONNECT test-upstream.lvh.me:$TEST_NGINX_RANDOM_PORT HTTP/1.1
--- user_files fixture=tls.pl eval
--- error_log env
```
There cannot be two `--- error_log` directives; only one of them is being applied.
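In Test::Nginx, multiple expected patterns can live under a single `--- error_log` section, one pattern per line. A minimal sketch of the merged directive; the second pattern line is an illustrative placeholder, not taken from this test:

```
--- error_log env
proxy request: CONNECT test-upstream.lvh.me:$TEST_NGINX_RANDOM_PORT HTTP/1.1
some other expected log line
```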
```
@@ -1,6 +1,8 @@
use lib 't';
use Test::APIcast::Blackbox 'no_plan';

require("http_proxy.pl");
```
One comment not related to this PR: I am thinking about running an HTTP proxy (tinyproxy, for example) in the docker compose env we run for the e2e tests, and getting rid of this custom, Perl-implemented proxy we are using. Currently, in addition to the apicast dev container, there is a redis one.
The same can be done for the camel proxy.
For the backend upstream it is very handy to do some asserts, so "outsourcing" upstream may not be convenient.
wdyt?
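For illustration, a sketch of what such a compose service might look like; the image, service name, and port mapping are assumptions, not something this PR adds:

```yaml
# hypothetical addition to the e2e docker-compose file
services:
  tinyproxy:
    image: docker.io/vimagick/tinyproxy  # any tinyproxy image would do
    ports:
      - "8888:8888"                      # tinyproxy listens on 8888 by default
```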
I think there are a few tests that watch the output of this proxy module, i.e. headers. So we will need to rewrite a few tests, but on the flip side we get a more reliable test (maybe), i.e. we avoid the case where the e2e tests run before the proxy module is ready.
We will need at least 4 containers for both the `dev` and `ci` images:
- A normal proxy
- A camel proxy
- A camel proxy supporting HTTPS
- A proxy that accepts BasicAuth
I had a similar problem with #1414, where I needed 4 more containers just to be able to test redis sentinel.
As long as it works well and is easy to maintain, I really don't mind adding additional pods.
Looks like we have a TODO list. We can implement it step by step and remove the Perl-based proxy once it is fully replaced by the other proxies.
PS: the `dev` and `ci` images are currently the same one.
Force-pushed from a955607 to 2673754
- skip_https_connect as request attribute
- http_proxy not used attribute http_backend removed
Force-pushed from 54d26f7 to b705c0f
all comments addressed
LGTM!
What
Fixes: https://issues.redhat.com/browse/THREESCALE-10582
Upstream timeouts don't work with the Camel Service. In fact, the upstream connection policy does not work at all when `https_proxy` is being used (whether configured via env vars or via policy). This PR fixes the integration of the upstream connection policy with every `https_proxy` use case.

We considered adding connection options to the proxy policy and the camel policy, mainly because the "Upstream connection" policy refers to "upstream", which can be confusing when a proxy is being used: the connection to the upstream backend is no longer made by APIcast; instead, APIcast creates a connection to the proxy. So, ideally, the upstream connection options would only apply to connections to the backend "upstream".

We decided instead to apply the upstream connection policy to any "upstream" connection APIcast initiates, whether to a proxy or to the actual upstream backend. Implementation-wise this is easier, as there is no need to add extra connection parameters to the proxy policies. Furthermore, if users combine the Upstream connection policy with `http_proxy`, their configuration still applies; with new connection parameters in the proxy policies, that use case would be broken. Additionally, if connection options were added to the policies as optional params, we would also need new env vars for the case where proxies are configured via env vars. That is too complex just to pin the "upstream" concept to the actual backend (`api_backend` in the service configuration). Instead, the upstream connection policy applies to any "upstream" connection APIcast makes, regardless of whether it is a proxy or the upstream backend.
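For reference, a minimal sketch of the kind of policy chain fragment this applies to; the exact `apicast-config.json` lives in the dev environments used below, and the assumption here is that the timeout values are expressed in seconds:

```json
"policy_chain": [
  {
    "name": "apicast.policy.upstream_connection",
    "configuration": {
      "connect_timeout": 1,
      "send_timeout": 1,
      "read_timeout": 1
    }
  },
  { "name": "apicast.policy.apicast" }
]
```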
Verification Steps

Upstream connection integration with `https_proxy` camel proxy

```
cd dev-environments/camel-proxy
make certs
```

`https_proxy` use case: APIcast --> camel proxy --> upstream (TLS)

This env uses the real `https://echo-api.3scale.net:443` as `api_backend`. My roundtrip latency is ~400ms. Let's start with timeouts set to 1 sec, so the request should be accepted.

Run environment.

The request should be accepted (`200 OK`), as the connection timeouts should not be exceeded.

```
curl --resolve https-proxy.example.com:8080:127.0.0.1 -v "http://https-proxy.example.com:8080/?user_key=123"
```
Now, let's lower the timeout threshold to something like 100ms, which should be exceeded because the upstream is far away.

Stop the gateway.

Restore the `apicast-config.json` file.

Apply 100ms timeouts.

Run environment.

The request should fail (`502 Bad Gateway`), as the connection timeouts should be exceeded.

```
curl --resolve https-proxy.example.com:8080:127.0.0.1 -v "http://https-proxy.example.com:8080/?user_key=123"
```

The logs should show the following line.

Clean the env before starting the next step.
Upstream connection integration with `https_proxy` with proxy policy (tinyproxy)

`https_proxy` dev environment setup:

```
cd dev-environments/https-proxy-upstream-tlsv1.3
make certs
```

`https_proxy` use case: APIcast --> tiny proxy --> upstream (TLS)

Timeouts set to 0.1 sec.

Run environment.

The request should be accepted (`200 OK`), as the connection timeouts should not be exceeded.

```
curl --resolve get.example.com:8080:127.0.0.1 -i "http://get.example.com:8080/?user_key=123"
```
Now, let's simulate some network latency using docker containers with Traffic Control. We will add 200ms of latency to the container running `socat` between the proxy and the upstream backend; it is called the `example.com` service.

Stop the gateway.

We are going to modify network-related settings, so the NET_ADMIN capability is needed.

Run environment with the new config.

Install the `tc` (traffic control) command.

Add 200ms latency to the outbound traffic of the `example.com` service (see the sketch below).
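A minimal sketch of the latency injection with netem; the interface name `eth0`, and running the command inside the container, are assumptions:

```
# run inside the example.com container; requires the NET_ADMIN capability
tc qdisc add dev eth0 root netem delay 200ms

# remove the latency again when done
tc qdisc del dev eth0 root netem
```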
The request should be rejected (`503 Service Temporarily Unavailable`), as the connection timeouts should be exceeded.

```
curl --resolve get.example.com:8080:127.0.0.1 -i "http://get.example.com:8080/?user_key=123"
```

The logs should show the following line.