Hello,

I am evaluating moving some of my Terraform-managed projects to Pulumi. In my testing I exercise certain scenarios in which Terraform behaves in a particular way that I would like to keep. One such example is Terraform's ability, and default behaviour, of enforcing complete 1:1 agreement between what is defined in code and the real Kubernetes state: any manual edit to a k8s object gets flagged as drift the next time I run `terraform plan` or `terraform apply`. This includes any change to, for example, k8s `Deployment`s, even changes to parts of the object that Terraform never created or managed itself. I'm looking to understand whether the same is possible with Pulumi and, if so, how.
Example:
1. I create a k8s `Deployment` in Terraform whose container defines a `LOG_LEVEL` env var.
2. I manually edit the `Deployment` via `kubectl` and add a new environment variable just after `LOG_LEVEL`:

   ```yaml
   - name: FOO
     value: bar
   ```

3. Running `terraform apply` on the same HCL as defined in the first step presents a plan in which the `FOO` env var gets removed, since it is not one recognised or managed by Terraform.
I would like to keep this same behaviour with Pulumi, but I have not been able to achieve it: after defining and deploying the same `Deployment` using Pulumi TS (sketched below) and making the same manual `kubectl edit`, running `pulumi up --refresh` does not propose to remove `FOO`, even though it is not defined in the TS code. The behaviour is different if I add `FOO` at the front of the `Deployment`'s env vars: in that case Pulumi recognises that `env[0]`, which it does manage, has a key and value different from what is in TS, and it reverts the manual change. What I'm looking for, though, is a way to tell Pulumi to make the k8s state match the TS code 100%, no exceptions.

Things I have looked into (without success):

- `replaceOnChanges: ["spec.template.spec.containers.env"]`

Thanks in advance, and sorry if this is documented somewhere.
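For context, a minimal sketch of the kind of Pulumi TypeScript program described above, assuming a single container with a `LOG_LEVEL` env var; the resource name, labels, image and values are placeholders rather than details from the original report:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical reconstruction of the Deployment described in the report.
// The name, labels, image and LOG_LEVEL value are illustrative assumptions.
const appLabels = { app: "example" };

const deployment = new k8s.apps.v1.Deployment("example", {
    metadata: { name: "example" },
    spec: {
        replicas: 1,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "example",
                    image: "nginx:1.27",
                    env: [
                        // FOO is deliberately not set here; in the scenario above it is
                        // added out of band with `kubectl edit`.
                        { name: "LOG_LEVEL", value: "info" },
                    ],
                }],
            },
        },
    },
});
```

With a program shaped like this, the report above observes that `pulumi up --refresh` shows no diff for the manually added `FOO`, while a `FOO` prepended so that it displaces `env[0]` does get reverted.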
Kubernetes is designed for multi-party authoring of objects, e.g. where one party (or "manager") authors the bulk of the spec, another party (a controller) authors the status block, and another party (an auto-scaler) authors the replicas field. The notion of one party having total control of the object is somewhat counter to its design.
The schemas of the Kubernetes resource types contain information about how to merge the intentions of different parties. For example, the pod's env vars are merged across all parties, and the ownership is tracked by the server, so that each party gets "replace" semantics. If, for example, your program was setting FOO, then later you removed FOO from your program, it would be cleared out, while any vars set by other parties would survive.
When it comes to drift detection, the scope is with respect to the fields that your program owns (by setting an intentional value).
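To make that concrete, here is a hedged sketch (placeholder names and values again) of the per-manager "replace" semantics described above: because this program sets `FOO` itself, it claims that field, so deleting the entry from the code and running `pulumi up` would clear it from the live object, whereas a variable written only by another manager, e.g. via `kubectl edit`, is neither reported as drift nor removed.

```typescript
import * as k8s from "@pulumi/kubernetes";

const labels = { app: "example" };

// Illustrative only: this program intentionally sets both LOG_LEVEL and FOO,
// so it owns both fields under the field-management model described above.
new k8s.apps.v1.Deployment("example", {
    metadata: { name: "example" },
    spec: {
        // replicas deliberately omitted so another party (e.g. an auto-scaler)
        // can own that field, per the multi-party model described above.
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: {
                containers: [{
                    name: "example",
                    image: "nginx:1.27",
                    env: [
                        { name: "LOG_LEVEL", value: "info" },
                        // Owned because it is set here: removing this entry and running
                        // `pulumi up` clears FOO on the cluster. An env var that only
                        // another manager ever set is left untouched and is not
                        // reported as drift.
                        { name: "FOO", value: "bar" },
                    ],
                }],
            },
        },
    },
});
```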
Could you outline a specific case of drift that you'd like Pulumi to remediate?