
issue with env variable substitution in deployment names #2871

Open
dcmshi opened this issue Jul 16, 2024 · 1 comment
Labels
kind/bug Something isn't working

Comments


dcmshi commented Jul 16, 2024

What happened?
I'm migrating a devspace.yaml from version 5.x to 6.x, and one of the major differences is how deployment names are specified.

6.x

deployments:
  backend:
    helm:
...

vs

5.x

deployments:
- name: backend
  helm:
...

Previously the deployment name was specified as a key/value pair, but now the name is added directly as a key under deployments. With the new format we run into a deployment name regex validation error. When I use a custom command to echo the environment variables, I can confirm they are being set, yet the validation regex appears to run against the raw, non-templated value.
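
A minimal sketch of the failing pattern, pulled from the full config below (the var names and defaults are from my setup; the key appears to be validated against the name regex before ${PG_NAME} is expanded):

deployments:
  ${PG_NAME}:   # validated as a literal string, so it fails the name regex
    helm:
...
vars:
  PG_NAME:
    source: env
    default: "${DEPLOYED_PROJECT_NAME}-pg"   # resolves to kafka-segment-sink-pg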

// deployment name fails, presumably because the key still contains invalid characters like $, {, and }; the PG_NAME variable is never actually expanded
-> devspace dev
info Using namespace 'dshi'
info Using kube context 'sandbox'
fatal deployments.${PG_NAME} has to match the following regex: ^(([a-z0-9][a-z0-9\-]*[a-z0-9])|([a-z0-9]))$

// custom command properly prints env vars
-> devspace run print_pg
info Using namespace 'dshi'
info Using kube context 'sandbox'
kafka-segment-sink-pg

What did you expect to happen instead?
This dynamic deployment naming worked in 5.x. I'm not sure if there are plans to keep supporting it, but I would expect the deployment name to be templated first and then pass the regex validation. We would really appreciate this backwards compatibility, because it lets us manage our CI deployments by prepending prefixes to the pod names. Thank you!
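
For reference, the resolved name itself is valid, so expanding before validating should work. A sketch of the expected resolution, assuming the default values from the vars section below:

deployments:
  kafka-segment-sink-pg:   # ${PG_NAME} expanded first; matches ^(([a-z0-9][a-z0-9\-]*[a-z0-9])|([a-z0-9]))$
    helm:
...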

How can we reproduce the bug? (as minimally and precisely as possible)

devspace dev
devspace run print_pg

My devspace.yaml:

version: v2beta1
images:
  app:
    image: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}"
    injectRestartHelper: true
    tags:
      - ${DEVSPACE_GIT_COMMIT}-####
    kaniko:
      initImage: docker-virtual.jfrog.prodigygame.org/alpine
      cache: true
      snapshotMode: full
      insecure: false
      resources:
        requests:
          memory: 2.5Gi
          cpu: 1.5
        limits:
          memory: 2.5Gi
          cpu: 1.5
      pullSecret: artifactorycred-dev
      annotations:
        sidecar.istio.io/inject: "false"
      args:
        - GIT_SHA=${DEVSPACE_GIT_COMMIT}
        - --target=production
      nodeSelector:
        nodegroup: cicd
  app-extended:
    image: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}-extended"
    injectRestartHelper: false
    tags:
      - ${DEVSPACE_GIT_COMMIT}-####
    kaniko:
      initImage: docker-virtual.jfrog.prodigygame.org/alpine
      cache: true
      snapshotMode: full
      insecure: false
      resources:
        requests:
          memory: 2.5Gi
          cpu: 1.5
        limits:
          memory: 2.5Gi
          cpu: 1.5
      pullSecret: artifactorycred-dev
      annotations:
        sidecar.istio.io/inject: "false"
      args:
        - GIT_SHA=${DEVSPACE_GIT_COMMIT}
        - --target=dev
      nodeSelector:
        nodegroup: cicd
deployments:
  ${DEPLOYED_PROJECT_NAME}-kafka:
    helm:
      chart:
        name: helm-virtual/kafka
        version: 14.3.0
#      wait: true
      values:
        image:
          registry: docker-virtual.jfrog.prodigygame.org
          tag: 2.6.0
        persistence:
          enabled: false
        advertisedListeners:
          - INSIDE://${DEPLOYED_PROJECT_NAME}-kafka:9092
          - OUTSIDE://localhost:9094
        listeners:
          - INSIDE://:9092
          - OUTSIDE://:9094
        listenerSecurityProtocolMap: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
        interBrokerListenerName: INSIDE
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1.2Gi
            cpu: 100m
        nodeSelector:
          nodegroup: cicd
        deleteTopicEnable: true
        autoCreateTopicsEnable: true
        zookeeper:
          resources:
            limits:
              memory: 300Mi
            requests:
              memory: 150Mi
              cpu: 100m
          persistence:
            enabled: false
          nodeSelector:
            nodegroup: cicd
  ${PG_NAME}:
    helm:
#      wait: true
      chart:
        name: postgresql
        repo: https://jfrog.prodigygame.org/artifactory/helm-virtual
        version: 10.11.1
      values:
        fullnameOverride: ${PG_NAME}
        image:
          tag: 12.4.0
        persistence:
          enabled: false
        postgresqlDatabase: ${PROJECT_NAME}
        postgresqlPassword: ${PROJECT_NAME}
        postgresqlUsername: ${PROJECT_NAME}
        replication:
          enabled: false
        primary:
          nodeSelector:
            nodegroup: cicd
        resources:
          limits:
            memory: 1Gi
          requests:
            memory: 256Mi
            cpu: 150m
  ${DEPLOYED_PROJECT_NAME}-segmentio:
    helm:
#      componentChart: true
      values:
        containers:
          - image: docker-virtual.jfrog.prodigygame.org/segmentio
        nodeSelector:
          nodegroup: cicd
        service:
          ports:
            - port: 8765
  ${DEPLOYED_PROJECT_NAME}:
    helm:
      displayOutput: true
      chart:
        name: prodigy-application
        repo: https://jfrog.prodigygame.org:443/artifactory/helm-virtual
        version: 0.5.4
      valuesFiles:
        - ./values.yaml
      values:
        global:
          datadog:
            enabled: false
          nodeSelector:
            nodegroup: cicd
          environment: &shared-env
            ENV: local
            DEBUG: True
            SERVICE_NAME: ${PROJECT_NAME}
            # Database
            DB_WRITE_PORT: 5432
            DB_WRITE_USER: ${PROJECT_NAME}
            DB_WRITE_NAME: ${PROJECT_NAME}
            DB_WRITE_PASS: ${PROJECT_NAME}
            DB_WRITE_HOST: ${PG_NAME}
            # Kafka
            KAFKA_BOOTSTRAP_SERVERS: ${DEPLOYED_PROJECT_NAME}-kafka:9092
            KAFKA_CLIENT_ID: "${DEPLOY_PREFIX}${PROJECT_NAME}"
            KAFKA_GROUP: "${DEPLOY_PREFIX}${PROJECT_NAME}-consumer"
            KAFKA_SECURE: False
            USE_CONFLUENT_KAFKA: True
            # Identity and OIDC
            IDENTITY_DOMAIN: sso.prodigygame.org
            OIDC_RP_CLIENT_ID: 1b0bbeedd51f8eb957a31e8be2f8b1f04d091ac76549925e4bd8092532ff8802
            OIDC_RP_CLIENT_SECRET: 9ff4dfcd542abfe10e3d8d64038dae0b6e1b537529b34c293b26f67b010160fb
            # Segment
            SEGMENT_HOST: http://${DEPLOYED_PROJECT_NAME}-segmentio:8765
            SEGMENT_MAX_QUEUE_SIZE: 10000
            SEGMENT_MAX_QUEUE_RETRIES: 10
            SEGMENT_MAX_HTTP_RETRIES: 10
            TEST_WRITE_KEY: "ABC123"
            PAYMENTS_SEGMENT_WRITE_KEY: "12345"
            IDENTITY_SEGMENT_WRITE_KEY: "67890"
            COACHING_SEGMENT_WRITE_KEY: "34945"
            GRAPHQL_SEGMENT_WRITE_KEY: "4201337"
            DATABRICKS_SEGMENT_WRITE_KEY: "456123"
            DD_TRACE_ENABLED: False
        deployments:
          tools:
            fullImage: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}-extended"
          admin:
            fullImage: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}"
          consumer:
            fullImage: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}"
        jobs:
          db-migration:
            fullImage: "docker-local-dev.jfrog.prodigygame.org/${PROJECT_NAME}-extended"
            environment:
              <<: *shared-env
dev:
  consumer:
    labelSelector:
      app: ${DEPLOYED_PROJECT_NAME}-consumer
    container: ${DEPLOYED_PROJECT_NAME}-consumer
    sync:
      - path: .:/opt/
        excludePaths:
        - .devspace
        - .git/
        - .venv/
        - __pycache__
        - README.md
        - .dockerignore
        - .github
        - .gitignore
        - .mypy_cache
        - Dockerfile
        - Makefile
        - .makefile.inc
        - devspace.yaml
        - .pre-commit-config.yaml
        - docker-compose.yml
        - docs/
        - values.yaml
        - app/static/
        onUpload:
          restartContainer: true
  admin:
    labelSelector:
      app:
        ${DEPLOYED_PROJECT_NAME}-admin
    container: ${DEPLOYED_PROJECT_NAME}-admin
    sync:
      - path: .:/opt/
        excludePaths:
        - .devspace
        - .git/
        - .venv/
        - __pycache__
        - README.md
        - .dockerignore
        - .github
        - .gitignore
        - .mypy_cache
        - Dockerfile
        - Makefile
        - .makefile.inc
        - devspace.yaml
        - .pre-commit-config.yaml
        - docker-compose.yml
        - docs/
        - values.yaml
        - app/static/
        onUpload:
          restartContainer: true
profiles:
  - name: candidate
    description: "used to build candidate images"
    patches:
      - op: replace
        path: /images/app/injectRestartHelper
        value: false
      - op: replace
        path: /images/app/image
        value: docker-virtual.jfrog.prodigygame.org/${PROJECT_NAME}
      - op: replace
        path: /images/app/tags
        value:
          - ${GITHUB_SHA}
      - op: replace
        path: /images/build/kaniko/pullSecret
        value: artifactorycred
      - op: replace
        path: /images/build/kaniko/options/target
        value: production
      - op: replace
        path: /images/build/kaniko/options/buildArgs/GIT_SHA
        value: ${DEVSPACE_GIT_COMMIT}
      - op: replace
        path: /images/app-extended/build/disabled
        value: true
  - name: preview
    description: "used to deploy preview environments"
    patches:
      - op: replace
        path: deployments[3].helm.wait
        value: true
      - op: replace
        path: /images/app/injectRestartHelper
        value: false
      - op: replace
        path: /images/app/tags
        value:
          - ${GITHUB_SHA}
      - op: replace
        path: /images/app/build/kaniko/pullSecret
        value: artifactorycred
      - op: replace
        path: /images/app/build/kaniko/options/target
        value: production
      - op: replace
        path: /images/app/build/kaniko/options/buildArgs/GIT_SHA
        value: ${DEVSPACE_GIT_COMMIT}
      - op: replace
        path: /images/app-extended/injectRestartHelper
        value: false
      - op: replace
        path: /images/app-extended/tags
        value:
          - ${GITHUB_SHA}
      - op: replace
        path: /images/app-extended/build/kaniko/pullSecret
        value: artifactorycred

  - name: interactive
    description: "used to investigate containers crashing at startup"
    patches:
      - op: add
        path: dev.interactive
        value:
          defaultEnabled: true
      - op: add
        path: images.app.entrypoint
        value:
          - sleep
          - "9999999999"
commands:
  print_project:
    command: echo "${DEPLOYED_PROJECT_NAME}"
  print_pg:
    command: echo "${PG_NAME}"
  test:
    command: devspace enter -l "app=${DEPLOYED_PROJECT_NAME}-tools" -c ${DEPLOYED_PROJECT_NAME}-tools python app/manage.py test
  precoverage:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} coverage run app/manage.py test
  coveragereport:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} coverage report
  focus:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} python app/manage.py test $@
  sh:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} bash
  shell:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} python app/manage.py shell_plus --ipython
  migrate:
    command: devspace enter -l "app=${DEPLOYED_PROJECT_NAME}-tools" -c ${DEPLOYED_PROJECT_NAME}-tools python app/manage.py migrate $@
  showmigrations:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} python app/manage.py showmigrations
  seed:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} -- python app/manage.py loaddata peercomparisoninsight studentcomparisoninsight
  makemigrations:
    command: devspace enter -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME} python app/manage.py makemigrations
  debug:
    command: devspace attach -l "app=${DEPLOY_NAME}" -c ${DEPLOY_NAME}

vars:
  DEPLOY_PREFIX:
    source: env
    default: ""
  PROJECT_NAME:
    source: env
    default: kafka-segment-sink
  DEPLOYED_PROJECT_NAME:
    source: env
    default: ${DEPLOY_PREFIX}${PROJECT_NAME}
  DEPLOY_NAME:
    source: env
    default: "${DEPLOYED_PROJECT_NAME}-consumer"
  PG_NAME:
    source: env
    default: "${DEPLOYED_PROJECT_NAME}-pg"

Local Environment:

  • DevSpace Version: devspace version 6.3.12
  • Operating System: mac
  • ARCH of the OS: ARM64

Kubernetes Cluster:

  • Cloud Provider: aws
  • Kubernetes Version:
    Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:53:42Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"darwin/amd64"}
    Kustomize Version: v5.0.1
    Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.9-eks-036c24b", GitCommit:"f75443c988661ca0a6dfa0dc01ea82dd42d31278", GitTreeState:"clean", BuildDate:"2024-04-30T23:54:04Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/arm64"}

Anything else we need to know?

dcmshi added the kind/bug (Something isn't working) label on Jul 16, 2024

nethi commented Oct 25, 2024

Is there another way to achieve this in 6.x?
