This repository is meant to be a base for installing Kubernetes and Helm and for running applications both on your machine and remotely in Google Compute Engine (GCE). The overall structure of the repo is:
- /applications/ - Individual repos are checked out which contain Helm charts and tools needed to manage individual applications and containers.
- /containers/ - Individual repos are checked out, each repo is responsible for a single container.
- sv - The sv command allows us to easily start/stop/deploy applications locally and to GCE.
- The goal is for multiple departments to eventually share this repo as their working environment, ensuring similar containerized workflows in all departments. The complexity of the different applications and departments should be primarily contained within their repositories in /applications/ and within their containers in /containers/.
- Avoid vendor lock-in by utilizing open source tools available on any hosting provider. Kubernetes gives us an application deployment framework that can be employed at any hosting provider.
- Handle deployment, provisioning, and orchestration between all services within an application so that each department does not have to engineer its own deployment and maintenance processes.
- Allow applications to be split into smaller containerized pieces. The goal is for this to be an iterative process where one UI may communicate with many microservices. Older code is converted into containerized services in a way that SaaS users are unaware of the switch.
- Allow developers within Simpleview to more easily move from product to product by providing a familiar working environment across departments. The containers will still be fully managed by the individual teams, and the tools those teams use will be determined by those teams.
- Be the platform where we can build a microservice system based on the concept of making all APIs externalizable, similar to the initiative put in place at Amazon via this memo.
Ensure you have followed the instructions at https://simpleviewtools.atlassian.net/wiki/spaces/ENG/pages/32080165/SV-Kubernetes to install the necessary software, setup git, setup github and your SSH keys.
Clone the repo to your local computer. Ensure your git client is set up so it does not tamper with line endings, with AutoCrlf set to false and SafeCrlf set to warn; see this thread for more info.
Open a command prompt as Admin and `cd` to the folder where you checked out this repository.
# windows cmd
vagrant up
SSH into the box at IP address: 192.168.50.100
Username: vagrant Password: vagrant
# ssh putty session (192.168.50.100)
sudo bash /sv/setup.sh
Now minikube, kubernetes, docker and helm should be running, and your box is set up to add applications and containers.
Often, in order to work on your project, you will want to install and start the following applications.
- sv-kube-proxy - Allows you to access resources on your box at `kube.simpleview.io`.
  - If your application needs an additional nginx entry, please pull request it in to that repo.
- sv-graphql - Proxies to your application's graphql server. It can be accessed at `graphql.kube.simpleview.io`.
  - If your application needs an additional graphql entry, please pull request it in to that repo.
- sv-geo - A microservice that returns geolocation data. Queries can be run in the graphql playground at `graphql.kube.simpleview.io`.
  - This repo is not required, but if you are planning on building a graphql microservice it is recommended to pull it down so that you can bring up a working graphql application and confirm your environment is functional.
sudo sv install sv-kube-proxy
sudo sv start sv-kube-proxy local --build
sudo sv install sv-graphql
sudo sv start sv-graphql local --build
sudo sv install sv-geo
sudo sv start sv-geo local --build
Run `sudo sv` for documentation within the VM.
- sv build - Build a container.
- sv install - Install an application.
- sv logs - Get logging information for a deployment.
- sv start - Start an application.
- sv stop - Stop an application.
- sv enterPod - Enter a running container.
- sv execPod - Execute a command on a running container.
- sv describePod - Show details of a specific pod.
- sv restartPod - Restart a pod in an application.
- sv test - Run tests for an application.
- sv editSecrets - Manage secrets for an application.
- sv debug - Output the versions and state of your local box for devops debugging purposes.
- sv fixDate - Re-syncs your linux clock, sometimes needed after the box was sleeping for a while.
- sv switchContext - Switch between Kubernetes Contexts.
- sv getContext - Get the current Kubernetes Context.
- sv listProjects - List available kubernetes projects.
If you are having problems with an sv-kubernetes application or the system itself, please see the documentation here.
If the docs do not help then speak up in the #devops slack channel or the #devchannel Google chat room.
Applications are written as Helm charts. Our `sv` library wraps the capabilities of Helm and Kubernetes to ensure an easy development environment.
sv-kubernetes-example - A functioning example application.
The recommended approach is to utilize a single repo which contains your application and its containers. In cases where you want to share containers with other applications, the recommendation is to keep those folders out of your application and instead build them separately.
- App Repo - `[department]-[name]`, example `sv-kubernetes-example`
  - /chart/ - Helm Chart
    - Chart.yaml - required - The basic file for the project. See the Helm Chart.yaml documentation.
      - The `name` in your Chart.yaml should exactly match the name of the repository.
    - values.yaml - optional - Variables loaded into your application templates.
    - values_[env].yaml - optional - Variables to load specific to the environment.
    - /templates/ - required - The folder to store templates for each resource in your application. It is recommended to keep one Kubernetes entity per file for simplicity.
  - /containers/
    - /[NAME]/
      - /lib/ - A folder that will likely be `COPY`'d into your container via the `Dockerfile`.
      - Dockerfile
  - settings.yaml - Needed for specifying the version of your application for container tagging.
  - readme.md - Documentation entrypoint for the application.
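For reference, a minimal `chart/Chart.yaml` might look like the following. The values shown are illustrative; the `name` must match your repository name, and `apiVersion: v1` assumes Helm 2, which the tiller-based workflow described later implies:

```yaml
# chart/Chart.yaml - illustrative sketch, not a required template
apiVersion: v1
name: sv-kubernetes-example
description: Example sv-kubernetes application
version: 1.0.0
```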
The settings.yaml file supports the following keys:

- version - string - The semver that will be appended to your compiled containers.
- dockerBase - string - The root of the docker registry to which your container name and tag are appended. E.g. `gcr.io/sv-shared-231700`.
- buildOrder - array of string - The build order of the containers. Needed when doing multi-part docker builds that utilize a shared container.
- buildOrder_live - array of string - Build order for live, overwrites buildOrder if specified.
- buildOrder_qa - array of string - Build order for qa, overwrites buildOrder if specified.
- buildOrder_dev - array of string - Build order for dev, overwrites buildOrder if specified.
- buildOrder_local - array of string - Build order for local, overwrites buildOrder if specified.
- dependencies - array of object - Other applications and containers this repository needs installed to function.
  - name - string - required - Name of the repository.
  - branch - string - default 'master' - The branch to check out.
  - type - string - default 'app' - Whether the repository is an app repo or a container repo.
- secrets_key - string - The key used to encrypt the secrets for the project. All developers of the application need access to this key to build/run the application. When using a GCP secret you must prefix it with `gcp:`.
- buildArgs - `BuildArgContainer[]` (see below) - Allows passing secrets and values to the Docker build process.
```ts
interface BuildArgContainer {
  /**
   * The name of the container within the application that will receive the build args.
   */
  container: string
  /**
   * Array of arguments to pass to the container.
   */
  args: BuildArg[]
}

interface BuildArg {
  /**
   * Name of the argument that the container will receive. Should correspond with an ARG name in your Dockerfile.
   */
  name: string
  /**
   * The path to populate the value for the argument.
   * Use "values.keyName" to reference something from the values files.
   * Use "secrets.keyName" to reference a secret.
   * Use "secrets_env.keyName" to reference an env-specific secret.
   */
  path: string
}
```
example:
```yaml
version: 1.0.0
dockerBase: gcr.io/sv-shared-231700
buildOrder:
  - container1
  - container2
  - container3
dependencies:
  - name: sv-graphql-client
    type: container
  - name: sv-kube-proxy
secrets_key: gcp:projects/sv-shared-231700/locations/global/keyRings/kubernetes/cryptoKeys/default
buildArgs:
  - container: test
    args:
      - name: SomeKey
        path: "values.key"
      - name: ASecretKey
        path: "secrets.someSecretKey"
      - name: AwesomeEnvSecret
        path: "secrets_env.someOtherKey"
```
The `.Values.sv` object exposes values which can be utilized in application templates.

- sv
  - ids - An object containing each "image:tag" reference with the Docker image_id. The value is a hash of the exact contents, to verify whether the container has changed.
    - Recommended use-case is to refer to `checksum: "{{ index .Values.sv.ids $image }}"` in the `annotations` of your deployment.yaml template. This way the container will only restart if the checksum has changed.
  - env - The current env dictated by the `sv start` command.
  - containerPath - The path to the `/containers/` folder within the application. This way you can use relative paths to your containers, making `yaml` files more portable between projects.
  - applicationPath - The path to the folder of the application itself.
  - deploymentName - The sv system boots the app as "app" in all non-test environments, but in test it is named with the name of the branch, e.g. "crm-pull-5". In cases where this value needs to be known, you can use `{{ .Values.sv.deploymentName }}` and it will work in all envs.
  - tag - When loading in non-local environments, the tag for containers is the env. On local it is just `local`. In pull requests it is `pull-NUM`. Best practice is to utilize `{{ .Values.sv.tag }}` to get the value of the tag in all environments.
  - dockerRegistry - The dockerRegistry prefix will be set to either `""` or `settings.dockerBase/` to allow you to prefix your image urls in all envs.
Best Practices:

- In your kubernetes template files, utilize `{{ .Release.Name }}-name` for naming each component. This will pull the name from your Chart.yaml file so all of the portions of this application are clearly named.
- It is recommended that you utilize variables at the top of each chart file. This allows you to reference those values in multiple places throughout your chart file so you can change them in one place. It also assists with versioning of chart files when you need to support multiple versions of a container simultaneously (a sketch combining these practices follows this list).
  - `{{ $name := "server" }} {{ $version := "v1" }} {{ $fullName := printf "%s-%s-%s" .Release.Name $name $version }} {{ $image := printf "%s%s-%s-%s:%s" .Values.sv.dockerRegistry .Chart.Name $name $version .Values.sv.tag }}`
- While you should use `{{ .Release.Name }}` when naming your kubernetes components, do not use `{{ .Release.Name }}` when naming a docker image. Doing so will prevent it from working in the test environment on pull requests. Instead ensure that your Chart.yaml has the right name and use `{{ .Chart.Name }}`.
- In your deployment files, utilize the checksum described above to allow `sv start` to restart only the containers with changes.
- On local it is recommended to mount a directory for content which changes frequently, such as html/css/js which does not require a process reboot. Ensure you are also doing a COPY for this content so that it works in non-local environments.
- Use secrets to secure and encrypt information such as DB passwords, tokens, and any proprietary data that your application needs.
- Always run `vagrant halt` to shut down the environment prior to shutting down the host machine.
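Below is a minimal sketch of how those variables can flow into a deployment template. The `server`/`v1` names come from the snippet above; everything else is illustrative and should be adapted to your own chart:

```yaml
# templates/deployment.yaml - illustrative sketch combining the practices above
{{ $name := "server" }}
{{ $version := "v1" }}
{{ $fullName := printf "%s-%s-%s" .Release.Name $name $version }}
{{ $image := printf "%s%s-%s-%s:%s" .Values.sv.dockerRegistry .Chart.Name $name $version .Values.sv.tag }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $fullName }}
spec:
  selector:
    matchLabels:
      app: {{ $fullName }}
  template:
    metadata:
      labels:
        app: {{ $fullName }}
      annotations:
        # restart pods only when the container contents actually change
        checksum: "{{ index .Values.sv.ids $image }}"
    spec:
      containers:
        - name: {{ $name }}
          image: {{ $image }}
```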
Containers are written as standard Docker containers.
sv-kubernetes-example-server - A functioning example container.
- Your docker containers should be built so that they ship functional for remote environments; for local development, directories can be mounted and the CMD/Entrypoint can be changed.
  - In practice this means that on local you might mount in a hot folder, but elsewhere the `Dockerfile` will compile the necessary resources.
- Seek to minimize the number of layers in your Dockerfile while also maximizing the cache re-use. This means placing the actions which rarely change high in your file, and the actions which frequently change lower in the file.
- If you are using a local mount, ensure that you are performing a COPY for that content so the Dockerfile works in non-local environments.
The following are the recommended best practices for keeping your team notified about changes within a repository.
- Create/Use an existing channel within slack that is unique to your application. If possible, public is preferred, since you never know who might need notifications.
- Within that channel run `/github subscribe simpleview/MY-APPLICATION`, which will cause all github PRs, commits, etc. to notify the channel.
- Go into CircleCI -> find the project -> click the settings gear next to the project -> Chat notifications.
  - Set the webhook URL to https://hooks.slack.com/services/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
  - Set the room to the slack channel you are using.
  - This will ensure that anyone subscribed to the channel will get notifications for each deployment and PR success via circle-ci.
- In your project's readme, include the name of the slack channel that people can use to subscribe to notifications about your product.
This section provides information on how to switch contexts to a dev/test cluster.
- Acquire access to the cluster by following the documentation [here](https://wiki.simpleviewtools.com/display/DEVOPS/Acquiring+Access+to+Kubernetes+Cluster).
- Ensure you are running the latest version of `sv-kubernetes`.
- Ensure you are gcloud authenticated by running `sudo gcloud auth login` in sv-kubernetes.
- When on a non-local cluster, commands that alter the cluster's state, such as `start` or `stop`, must run through `helm tiller`.
  - Ex: `sudo sv start sv-graphql local` now becomes `sudo helm tiller run sv start sv-graphql local`.
  - This does not affect commands like `sudo sv logs` or `sudo sv enterPod`; those can be run like normal.
- When on a non-local cluster, `--build` will not work as expected, because it does not push the built container to the cloud for the remote instance to grab. If you need to alter the runtime state, either do it via `sudo sv enterPod` or push a new build via the normal CI/CD.
Related Commands:
- See all applications that are running - `sudo helm list`
- See all that's running - `sudo kubectl get all`
- See all that's running, including kubernetes system services - `sudo kubectl get all --all-namespaces`
- Start an application and see the compiled configs - `sudo sv start [app] [env] --dry-run --debug`
- Get a pod's logs - `sudo kubectl logs [podname]`
- See minikube logs - `sudo minikube logs`
- See current config - `sudo kubectl config view`
- See current context - `sudo kubectl config current-context`
- Run a container to debug - `sudo docker run -it image:tag`
- Run a container with a specific command - `sudo docker run -it image:tag /bin/bash`
- Enter a running container - `sudo kubectl exec -it [podName] /bin/bash`
- Describe the nodes of a kubernetes cluster - `sudo kubectl describe nodes`
- Describe a pod for debugging pod boot errors - `sudo kubectl describe pod [podName]`
Connecting to clusters
- List projects - `sudo gcloud projects list`
- Switch project - `sudo gcloud config set project [project]`
- Get cluster credentials - `sudo gcloud container clusters get-credentials [clusterName]`
- Get available contexts - `sudo kubectl config get-contexts`
- Switch to context - `sudo kubectl config use-context [context]`
- Delete context - `sudo kubectl config delete-context [context]`
sv-kubernetes applications are recommended to be set up with CI/CD using the following plan. This plan is handled by circleci and the sv-deploy-gce docker container.
- Pull requests trigger unit test execution via deployment to a cluster called test.
- Pushes to a branch trigger deployment to a kubernetes cluster aligning with the branch.
- develop -> dev
- qa -> qa
- staging -> staging
- master -> live
- In cases where you want to mandate unit test execution prior to deployment, utilize Github's branch protection feature to only allow merging via pull request. This way your pull request to that branch will have the unit tests executed, and then, upon completion and your approval, the merge will trigger the deployment.
- Images will be tagged with the branch and the version from the `settings.yaml` file.
- It is not required for each department to utilize all environmental clusters. Their workflow and what works best for them is up to them.
- It is recommended that the branches from dev -> qa -> staging -> master are kept "in sync" so that master never has a commit which is not present on dev. This means you'll never want to push or PR directly to master, it should always come from the environment before it.
- The recommended development flow is PR features/bugs to develop -> merge -> pr develop to qa -> merge -> pr qa to staging -> merge -> pr staging to live.
- If you have a smaller department or don't need all environments, then simplify the flow to something like PR to (qa, dev, staging) -> merge -> pr to live -> merge. Whatever model you choose, have all tickets utilize the same release pathway.
Setting up CI/CD is relatively easy, but there are a few pitfalls to make sure it's working. Most of them don't have to do with the CI/CD system itself, but rather with making sure your application is ready to function in non-live environments.
- While running in non-local environments your mounted folders will not work. The `Dockerfile` in your containers needs to be set up assuming its `live` configuration. So if you need to run webpack or some other utility to compile code, your Dockerfile should be doing that.
- Copy the `.circleci` folder from the `sv-kubernetes-example` project into the root of your project.
- Ensure that you have a `settings.yaml` file in the root of your project. It should have a `version: SEMVER` inside it. This will be utilized to tag your images in each environment with your current application's version.
- In your deployments for the container setup, you will likely want `imagePullPolicy: Always` in all non-local environments. See the `sv-kubernetes-example` app for an example.
- In your deployments for the container setup, you will need to calculate your `$image` dynamically to include your base image plus the dynamically generated tag (a sketch follows this list).
  - An example of this is `{{ $image := printf "%s:%s" .Values.imageBase .Values.sv.tag }}`. The `{{ .Values.imageBase }}` is an arbitrary variable; it could be anything from your values files. The `.Values.sv.tag` is the tag dynamically generated by the CI/CD system. This ensures the right image is loaded in the right env.
  - If you need a multi-container example, see `sv-auth`.
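Below is a minimal sketch of that pattern; `imageBase` is assumed to be defined in your values files and `server` is a hypothetical container name:

```yaml
# templates/deployment.yaml - illustrative sketch for CI/CD-friendly image tagging
{{ $image := printf "%s:%s" .Values.imageBase .Values.sv.tag }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-server
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-server
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-server
    spec:
      containers:
        - name: server
          image: {{ $image }}
          # likely wanted in all non-local environments so new pushes for the same tag are pulled
          imagePullPolicy: Always
```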
- If you are exposing a front-end service with a hostname, the recommendation is to go through the `sv-kube-proxy` as it will provide a consistent IP and SSL termination. If you need a cert or a service published, submit a pull request to that repository or contact Owen.
- If you need a consistent IP address for routing for `sv-kube-proxy` or `sv-graphql`, utilize an internal load balancer (a Service sketch follows this list).
  - Add the annotation `cloud.google.com/load-balancer-type: "Internal"`.
  - Use `type: LoadBalancer` and `loadBalancerIP: IP`. The IP address should take your root CIDR and start at 10. So for the sv-shared project the root CIDR is `10.0.0.0/24`, so the first static IP would be assigned at `10.0.0.10`. For the `sv-crm` project the root CIDR is `10.1.0.0/24`, so the first static IP would be `10.1.0.10`. Increment from there as more IP addresses are added.
    - sv-shared static ips - `10.0.0.10` and increment.
    - crm static ips - `10.1.0.10` and increment.
  - The IP address will be the same for all environments since each environment is on its own VPC network, allowing re-use of IP addresses. This also helps to simplify config.
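Below is a minimal internal load balancer Service sketch, assuming the sv-shared project's CIDR; the selector and port are illustrative:

```yaml
# templates/service.yaml - illustrative internal load balancer
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-server
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # first static IP in the sv-shared project's 10.0.0.0/24 root CIDR; increment for additional services
  loadBalancerIP: 10.0.0.10
  selector:
    app: {{ .Release.Name }}-server
  ports:
    - port: 80
      targetPort: 80
```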
Run tests
cd /sv/sv
sudo npm test
Edit secrets for the test application
sudo APPS_FOLDER=/sv/sv/testing/applications sv editSecrets settings-test --env local