What happened:
After our EKS cluster was upgraded to 1.21, we saw annotations like the following appear in the API server audit logs in AWS for the service accounts that the Splunk Connect pods are using:
subject: system:serviceaccount:<namespace here>:<sa name here>, seconds after warning threshold: 3989
This is due to the changes in token expiry in Kubernetes 1.21, as described here:
https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html#identify-pods-using-stale-tokens
It would appear that there is a 90-day grace period, after which the tokens will be rejected.
It looks like the solarwinds snap agent needs to use a later Kubernetes client SDK version, or is there a workaround?
What you expected to happen:
A more recent Kubernetes client SDK would be used so that the tokens get refreshed. At some Kubernetes version, when AWS changes to the default 1h tokens, the pods will start getting errors from the API server after an hour (unless they are restarted earlier, as I think that would refresh the token as well).
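For reference, here is a minimal sketch of what a newer client SDK does, assuming the agent uses k8s.io/client-go (I have not checked the agent's actual code). Recent client-go releases re-read the projected service account token file before it expires, so the bound tokens introduced in 1.21 keep working without a pod restart:

// Minimal sketch, assuming k8s.io/client-go (not verified against the agent's code).
// rest.InClusterConfig reads the projected token at
// /var/run/secrets/kubernetes.io/serviceaccount/token, and recent client-go
// releases re-read that file periodically, so kubelet-rotated bound tokens are
// picked up without restarting the pod.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Requests made long after startup authenticate with the freshly reloaded
	// token rather than the one read when the pod started.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("nodes visible to this service account: %d\n", len(nodes.Items))
}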
How to reproduce it (as minimally and precisely as possible):
Install or upgrade EKS to 1.21 and check the EKS cluster API server audit logs with this CloudWatch Logs Insights query, based on https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html#identify-pods-using-stale-tokens (see the sketch after the query for one way to run it):
fields @timestamp
| filter @logStream like /kube-apiserver-audit/
| filter @message like /seconds after warning threshold/
| parse @message "subject: *, seconds after warning threshold:*\"" as subject, elapsedtime
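If it helps, here is a rough sketch of running that query programmatically with the AWS SDK for Go (aws-sdk-go v1). The log group name /aws/eks/<cluster name here>/cluster and the 24-hour time window are placeholders to adjust for your cluster; region and credentials are taken from the environment:

// Sketch only: runs the Logs Insights query above against the EKS control-plane
// log group and prints the matching subjects.
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := cloudwatchlogs.New(sess)

	query := `fields @timestamp
| filter @logStream like /kube-apiserver-audit/
| filter @message like /seconds after warning threshold/
| parse @message "subject: *, seconds after warning threshold:*\"" as subject, elapsedtime`

	start, err := svc.StartQuery(&cloudwatchlogs.StartQueryInput{
		// Placeholder: EKS control-plane logs land in /aws/eks/<cluster name>/cluster.
		LogGroupName: aws.String("/aws/eks/<cluster name here>/cluster"),
		QueryString:  aws.String(query),
		StartTime:    aws.Int64(time.Now().Add(-24 * time.Hour).Unix()),
		EndTime:      aws.Int64(time.Now().Unix()),
	})
	if err != nil {
		panic(err)
	}

	// Poll until the query finishes, then print each result row.
	for {
		out, err := svc.GetQueryResults(&cloudwatchlogs.GetQueryResultsInput{QueryId: start.QueryId})
		if err != nil {
			panic(err)
		}
		status := aws.StringValue(out.Status)
		if status == "Complete" {
			for _, row := range out.Results {
				for _, field := range row {
					fmt.Printf("%s=%s ", aws.StringValue(field.Field), aws.StringValue(field.Value))
				}
				fmt.Println()
			}
			return
		}
		if status == "Failed" || status == "Cancelled" {
			panic("query " + status)
		}
		time.Sleep(2 * time.Second)
	}
}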
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): 1.21
Image tag: solarwinds/solarwinds-snap-agent-docker:4.4.0-4.3.0.1156 (latest)