Automate, automate, automate.
Now that our server is up and running, we want to install our app on it, using our Docker image and container.
We could do this manually, but a key insight of modern software engineering is that small, frequent deployments are a must.[1] Frequent deployments rely on automation, so we’ll use an infrastructure automation tool called Ansible.
Automation is also key to making sure our tests give us true confidence over our deployments. If we go to the trouble of building a staging server,[2] we want to make sure that it’s as similar as possible to the production environment. By automating the way we deploy, and using the same automation for staging and prod, we give ourselves much more confidence.
The buzzword for automating your deployments these days is "Infrastructure as Code" (IaC).
Note
Why not ping me a note once your site is live on the web, and send me the URL? It always gives me a warm and fuzzy feeling… Email me at [email protected].
As part of my work on the third edition of the book, I’m making big changes to the deployment chapters. This chapter is still a little "bare bones" and could do with a bit more explanatory content and guidance, but the core steps are all there, so I hope you’ll be able to follow along.
So, as always, I really, really need feedback: please hit me up at [email protected], or via GitHub Issues and Pull Requests.
I hope you enjoy the new version!
Let’s start using Ansible a little more seriously. We’re not going to jump all the way to the end though! Baby steps, as always. Let’s see if we can get it to run a simple "hello world" Docker container on our server.
Let’s delete the old content which had the "ping", and replace it with something like this:
---
- hosts: all
  tasks:
    - name: Install docker  #(1)
      ansible.builtin.apt:  #(2)
        name: docker.io  #(3)
        state: latest
        update_cache: true
      become: true

    - name: Run test container
      community.docker.docker_container:
        name: testcontainer
        state: started
        image: busybox
        command: echo hello world
      become: true
1. An Ansible playbook is a series of "tasks"; we now have more than one. In that sense it’s still quite sequential and procedural, but the individual tasks themselves are quite declarative. Each one usually has a human-readable name attribute.
2. Each task uses an Ansible "module" to do its work. This one uses the builtin.apt module, which provides a wrapper around the apt Debian & Ubuntu package management tool.
3. Each module then provides a bunch of parameters which control how it works. Here we specify the name of the package we want to install ("docker.io"[3]) and tell it to update its cache first, which is required on a fresh server.
Most Ansible modules have pretty good documentation; check out the builtin.apt one, for example. I often skip to the Examples section.
Let’s re-run our deployment command, ansible-playbook, with the same flags we used in the last chapter.
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv
ansible-playbook [core 2.16.3]
  config file = None
[...]
No config file found; using defaults
BECOME password:
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:2
ok: [staging.ottg.co.uk]

TASK [Install docker] *********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:6
changed: [staging.ottg.co.uk] => {"cache_update_time": [...] "cache_updated":
true, "changed": true, "stderr": "", "stderr_lines": [], "stdout": "Reading
package lists...\nBuilding dependency tree...\nReading [...] information...\nThe
following additional packages will be installed:\n  wmdocker\nThe following NEW
packages will be installed:\n  docker wmdocker\n0 [...]

TASK [Run test container] *****************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:13
changed: [staging.ottg.co.uk] => {"changed": true, "container":
{"AppArmorProfile": "docker-default", "Args": ["hello", "world"], "Config": [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk  : ok=3  changed=2  unreachable=0  failed=0  skipped=0
rescued=0  ignored=0
I don’t know about you, but whenever I make a terminal spew out a stream of output, I like to make little brrp brrp brrp noises, a bit like the computer Mother, in Alien. Ansible scripts are particularly satisfying in this regard.
Tip
You may need to use the --ask-become-pass argument to ansible-playbook if you get an error saying "Missing sudo password".
Ansible looks like it’s doing its job, but let’s practice our SSH skills, and do some good old-fashioned sysadmin. Let’s log into our server and see if we can see any actual evidence that our container has run.
We use docker ps -a to view all containers, including old/stopped ones, and we can use docker logs to view the output from one of them:
$ ssh [email protected]
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-67-generic x86_64)
[...]
elspeth@server$ docker ps -a
CONTAINER ID   IMAGE     COMMAND              CREATED      STATUS                      PORTS   NAMES
3a2e600fbe77   busybox   "echo hello world"   2 days ago   Exited (0) 10 minutes ago           testcontainer
elspeth@server$ docker logs testcontainer
hello world
Tip
Look out for that elspeth@server in the command-line listings in this chapter. It indicates commands that must be run on the server, as opposed to commands you run on your own PC.
SSHing in to check things worked is a key server debugging skill! It’s something we want to practice on our staging server, because ideally we’ll want to avoid doing it on production machines.
Let’s move on to trying to get our actual docker container running on the server. As we go through, you’ll see that we’re going to work through very similar issues to the ones we’ve already figured our way through in the last couple of chapters:
- Configuration
- Networking
- And the database.
Typically, you can "push" and "pull" container images to a "container registry" — Docker offers a public one called Docker Hub, and organisations will often run private ones, hosted by cloud providers like AWS.
So your process of getting an image onto a server is usually:

- Push the image from your machine to the registry.
- Pull the image from the registry onto the server. Usually this step is implicit, in that you just specify the image name in the format registry-url/image-name:tag, and then docker run takes care of pulling down the image for you.
But I don’t want to ask you to create a Docker Hub account, or implicitly endorse any particular provider, so we’re going to "simulate" this process by doing it manually.
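Incidentally, if you did use a registry, the flow can be automated with the same Ansible modules we’re about to meet. Purely for reference, here’s a rough sketch of what that might look like, assuming a hypothetical registry at registry.example.com that you’ve already logged in to with docker login (we won’t use this in the book):

    # Hypothetical registry-based flow, for reference only.
    - name: Push image from local machine to registry
      community.docker.docker_image:
        name: superlists
        repository: registry.example.com/yourname/superlists  # made-up URL
        push: true
        source: local
      delegate_to: 127.0.0.1

    - name: Run container, pulling the image from the registry
      community.docker.docker_container:
        name: superlists
        image: registry.example.com/yourname/superlists
        pull: true  # tells docker to pull the image if it's not already present
        state: started
      become: true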
It turns out you can "export" a container image to an archive format, manually copy that to the server, and then re-import it. In Ansible config, it looks like this:
---
- hosts: all
  tasks:
    - name: Install docker
      ansible.builtin.apt:
        name: docker.io
        state: latest
      become: true

    - name: Export container image locally  # (1)
      community.docker.docker_image:
        name: superlists
        archive_path: /tmp/superlists-img.tar
        source: local
      delegate_to: 127.0.0.1

    - name: Upload image to server  # (2)
      ansible.builtin.copy:
        src: /tmp/superlists-img.tar
        dest: /tmp/superlists-img.tar

    - name: Import container image on server  # (3)
      community.docker.docker_image:
        name: superlists
        load_path: /tmp/superlists-img.tar
        source: load
        force_source: true  # (4)
        state: present
      become: true

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
Note
Colima users on macOS may need to set an env var to get the ansible-docker integration to work in the "Export container image locally" stage: DOCKER_HOST=unix:///$HOME/.colima/default/docker.sock
1. We export the docker image to a .tar file by using the docker_image module with the archive_path set to a temp file, and setting the delegate_to attribute to say we’re running that command on our local machine rather than on the server.
2. We then use the copy module to upload the tarfile to the server.
3. And we use docker_image again, but this time with load_path and source: load, to import the image back on the server.
4. The force_source flag tells the server to attempt the import even if an image of that name already exists.
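If you want to reassure yourself that the image really did make it across, you could temporarily add a couple of tasks using the docker_image_info module. This is just a debugging sketch, not part of our playbook:

    - name: Check the image is now on the server (debugging sketch)
      community.docker.docker_image_info:
        name: superlists
      register: img_info
      become: true

    - name: Print how many matching images we found
      ansible.builtin.debug:
        msg: "Found {{ img_info.images | length }} image(s) named superlists"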
Let’s run the new version of our playbook, and see if we can upload a docker image to our server and get it running:
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv [...] PLAYBOOK: deploy-playbook.yaml ********************************************** 1 plays in infra/deploy-playbook.yaml PLAY [all] ******************************************************************** TASK [Gathering Facts] ******************************************************** task path: ...goat-book/superlists/infra/deploy-playbook.yaml:2 ok: [staging.ottg.co.uk] TASK [Install docker] ********************************************************* task path: ...goat-book/superlists/infra/deploy-playbook.yaml:5 ok: [staging.ottg.co.uk] => {"cache_update_time": 1708982855, "cache_updated": false, "changed": false} TASK [Export container image locally] ***************************************** task path: ...goat-book/superlists/infra/deploy-playbook.yaml:11 changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Archived image superlists:latest to /tmp/superlists-img.tar, overwriting archive with image 11ff3b83873f0fea93f8ed01bb4bf8b3a02afa15637ce45d71eca1fe98beab34 named superlists:latest"], "changed": true, "image": {"Architecture": "amd64", [...] TASK [Upload image to server] ************************************************* task path: ...goat-book/superlists/infra/deploy-playbook.yaml:18 changed: [staging.ottg.co.uk] => {"changed": true, "checksum": "313602fc0c056c9255eec52e38283522745b612c", "dest": "/tmp/superlists-img.tar", [...] TASK [Import container image on server] *************************************** task path: ...goat-book/superlists/infra/deploy-playbook.yaml:23 changed: [staging.ottg.co.uk] => {"actions": ["Loaded image superlists:latest from /tmp/superlists-img.tar"], "changed": true, "image": {"Architecture": "amd64", "Author": "", "Comment": "buildkit.dockerfile.v0", "Config": [...] TASK [Run container] ********************************************************** task path: ...goat-book/superlists/infra/deploy-playbook.yaml:32 changed: [staging.ottg.co.uk] => {"changed": true, "container": {"AppArmorProfile": "docker-default", "Args": ["--bind", ":8888", "superlists.wsgi:application"], "Config": {"AttachStderr": true, "AttachStdin": false, "AttachStdout": true, "Cmd": ["gunicorn", "--bind", ":8888", "superlists.wsgi:application"], "Domainname": "", "Entrypoint": null, "Env": [...]
For completeness, let’s also add a step to explicitly build the image locally. This means we don’t have a dependency on having run docker build locally.
    - name: Install docker
      [...]

    - name: Build container image locally
      community.docker.docker_image:
        name: superlists
        source: build
        state: present
        build:
          path: ..
          platform: linux/amd64  # (1)
        force_source: true
      delegate_to: 127.0.0.1

    - name: Export container image locally
      [...]
1. I needed this platform attribute to work around a compatibility issue between Apple’s new ARM-based chips and our server’s x86/amd64 architecture. You could also use platform: to cross-build docker images for a Raspberry Pi from a regular PC, or vice versa. It does no harm in any case.
Now let’s see if it works!
$ ssh [email protected]
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-67-generic x86_64)
[...]
elspeth@server$ docker ps -a
CONTAINER ID   IMAGE        COMMAND                  CREATED              STATUS                          PORTS   NAMES
3a2e600fbe77   busybox      "echo hello world"       2 days ago           Exited (0) 10 minutes ago               testcontainer
129e36a42190   superlists   "/bin/sh -c 'gunicor…"   About a minute ago   Exited (3) About a minute ago           superlists
elspeth@server$ docker logs superlists
[2024-02-26 22:19:15 +0000] [1] [INFO] Starting gunicorn 21.2.0
[2024-02-26 22:19:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8888 (1)
[2024-02-26 22:19:15 +0000] [1] [INFO] Using worker: sync
[...]
  File "/src/superlists/settings.py", line 22, in <module>
    SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
                 ~~~~^^^^^^^
  File "<frozen os>", line 685, in __getitem__
KeyError: 'DJANGO_SECRET_KEY'
[2024-02-26 22:19:15 +0000] [7] [INFO] Worker exiting (pid: 7)
[2024-02-26 22:19:15 +0000] [1] [ERROR] Worker (pid:7) exited with code 3
[2024-02-26 22:19:15 +0000] [1] [ERROR] Shutting down: Master
[2024-02-26 22:19:15 +0000] [1] [ERROR] Reason: Worker failed to boot.
Whoops, we need to set those environment variables on the server too.
Note
If you see an error saying "Error connecting: Error while fetching server API version", it may be because the Python Docker SDK can’t find your docker daemon. Try restarting Docker Desktop if you’re on Windows or a Mac. If you’re not using the standard docker engine, with Colima for example, you may need to set the DOCKER_HOST environment variable or use a symlink to point to the right place. See the Colima FAQ.
When we run our container manually locally, we can pass in environment variables with the -e flag. But we don’t want to hard-code secrets like SECRET_KEY into our Ansible files and commit them to our repo!
Instead, we can use Ansible to automate the creation of a secret key, and then save it to a file on the server, where it will be relatively secure (better than saving it to version control and pushing it to GitHub in any case!)
We can use a so-called "env file" to store environment variables. Env files are essentially a list of key-value pairs using shell syntax, a bit like you’d use with export.
One extra subtlety is that we want to vary the actual contents of the env file, depending on where we’re deploying to. Each server should get its own unique secret key, and we want different config for staging and prod, for example.
So, just as we inject variables into our html templates in Django, we can use a templating language called "jinja2" to have variables in our env file. It’s a common tool in Ansible scripts, and the syntax is very similar to Django’s.
Here’s what our template for the env file will look like:
DJANGO_DEBUG_FALSE=1
DJANGO_SECRET_KEY={{ secret_key }}
DJANGO_ALLOWED_HOST={{ host }}
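Once rendered for our staging server, the resulting file on the server should look something like this (with a placeholder standing in for the real random key):

DJANGO_DEBUG_FALSE=1
DJANGO_SECRET_KEY=<32 random ASCII letters>
DJANGO_ALLOWED_HOST=staging.ottg.co.uk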
And here’s how we use it in the provisioning script:
    - name: Import container image on server
      [...]

    - name: Ensure .env file exists
      ansible.builtin.template:  # (1)
        src: env.j2
        dest: ~/superlists.env
        force: false  # do not recreate file if it already exists. (2)
      vars:  # (3)
        host: "{{ inventory_hostname }}"  # (4)
        secret_key: "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}"  # (5)

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env  # (6)
1. We use ansible.builtin.template to specify the local template file to use (src), and the destination (dest) on the server.
2. force: false means we will only write the file once. So after the first time we generate our secret key, it won’t change.
3. The vars section defines the variables we’ll inject into our template.
4. We actually use a built-in Ansible variable called inventory_hostname. This variable would be available in the template already, but I’m renaming it for clarity.
5. This lookup('password') thing I copy-pasted from StackOverflow. Come on, there’s no shame in that.
6. Here’s where Ansible tells Docker to use our env file when it runs our container.
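If you’re curious what that lookup actually produces, you can try it out with the debug module; note that it generates a fresh random value on every evaluation, which is exactly why we need force: false. A quick sketch, not part of our playbook:

    - name: Peek at a generated secret key (debugging sketch)
      ansible.builtin.debug:
        msg: "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}"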
Note
Using an env file to store secrets is definitely better than committing it to version control, but it’s maybe not the state of the art either. You’ll probably come across more advanced alternatives from various cloud providers, or HashiCorp’s Vault tool.
Infrastructure-as-code tools like Ansible aim to be "declarative", meaning that, as much as possible, you specify the desired state that you want, rather than specifying a series of steps to get there.
This concept goes along with the idea of "idempotence": an operation is idempotent if it has the same effect whether you run it just once or multiple times.
An example is the apt module that we used to install docker. It doesn’t crash if docker is already installed, and in fact, Ansible is smart enough to check first before trying to install anything.
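To make the contrast concrete, here’s the same goal written imperatively and then declaratively; a sketch for illustration, not something to add to our playbook:

    # Imperative: runs the command every time, and always reports "changed".
    - name: Install docker (imperative)
      ansible.builtin.command: apt-get install -y docker.io
      become: true

    # Declarative: states the desired outcome; Ansible only acts (and only
    # reports "changed") if docker isn't already installed.
    - name: Install docker (declarative)
      ansible.builtin.apt:
        name: docker.io
        state: present
      become: true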
There is some subtlety here. For example, our templated env file will only be written once, so the step is idempotent in the sense that it doesn’t overwrite the file with a new random secret key every time you run it. But that does come with the downside that you can’t easily add new variables to the file.
Probably a more sophisticated solution involving separate files for the secret and other parts of the config would be better, but I wanted to keep this (already long) chapter as simple as possible.
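If you’re curious though, such a split might look something like this; a sketch, assuming a hypothetical secret.j2 template containing only the secret key line:

    # Sketch only: keep the secret in its own write-once file...
    - name: Write secret file once only
      ansible.builtin.template:
        src: secret.j2  # hypothetical template with just the secret key
        dest: ~/superlists-secret.env
        force: false

    # ...and regenerate the non-secret config on every deploy.
    - name: Rewrite non-secret config on every deploy
      ansible.builtin.template:
        src: env.j2
        dest: ~/superlists.env
        force: true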
Let’s run the latest version of our playbook and see how our tests get on:
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -v
[...]
PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
ok: [staging.ottg.co.uk]

TASK [Install docker] *********************************************************
ok: [staging.ottg.co.uk] => {"cache_update_time": 1709136057, "cache_updated":
false, "changed": false}

TASK [Build container image locally] ******************************************
changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Built image [...]

TASK [Export container image locally] *****************************************
changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Archived image [...]

TASK [Upload image to server] *************************************************
changed: [staging.ottg.co.uk] => {"changed": true, [...]

TASK [Import container image on server] ***************************************
changed: [staging.ottg.co.uk] => {"actions": ["Loaded image [...]

TASK [Ensure .env file exists] ************************************************
changed: [staging.ottg.co.uk] => {"changed": true, [...]

TASK [Run container] **********************************************************
changed: [staging.ottg.co.uk] => {"changed": true, "container": [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk  : ok=8  changed=6  unreachable=0  failed=0  skipped=0
rescued=0  ignored=0
Looks good! What do our tests think?
We run our tests as usual and run into a new problem:
$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
[...]
selenium.common.exceptions.WebDriverException: Message: Reached error page:
about:neterror?e=connectionFailure&u=http%3A//staging.ottg.co.uk/[...]
That neterror makes me think it’s another networking problem.
Note
If your domain provider puts up a temporary holding page, you may get a 404 rather than a connection error at this point, and the traceback might have NoSuchElementException instead.
Let’s try our standard debugging technique of using curl, both locally and then on the server. First, on our own machine:
$ curl -iv staging.ottg.co.uk
[...]
curl: (7) Failed to connect to staging.ottg.co.uk port 80 after 25 ms: Couldn't
connect to server
Note
Similarly, depending on your domain/hosting provider, you may see "Host not found" here instead.
Now let’s ssh in to our server and take a look at the docker logs:
elspeth@server$ docker logs superlists
[2024-02-28 22:14:43 +0000] [7] [INFO] Starting gunicorn 21.2.0
[2024-02-28 22:14:43 +0000] [7] [INFO] Listening at: http://0.0.0.0:8888 (7)
[2024-02-28 22:14:43 +0000] [7] [INFO] Using worker: sync
[2024-02-28 22:14:43 +0000] [8] [INFO] Booting worker with pid: 8
No errors there. Let’s try our curl:
elspeth@server$ curl -iv localhost
*   Trying 127.0.0.1:80...
* connect to 127.0.0.1 port 80 failed: Connection refused
*   Trying ::1:80...
* connect to ::1 port 80 failed: Connection refused
* Failed to connect to localhost port 80 after 0 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 80 after 0 ms: Connection refused
Hmm, curl fails on the server too. But all this talk of port 80, both locally and on the server, might be giving us a clue. Let’s check docker ps:
elspeth@server$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS         PORTS   NAMES
1dd87cbfa874   superlists   "/bin/sh -c 'gunicor…"   9 minutes ago   Up 9 minutes           superlists
This might be ringing a bell now—we forgot the ports.
We want to map port 8888 inside the container as port 80 (the default web/http port) on the server:
    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env
        ports: 80:8888
That gets us to:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="id_list_table"]; [...]
Taking a look at the logs from the server, we can see that the database is not initialised:
$ ssh elspeth@server docker logs superlists
[...]
django.db.utils.OperationalError: no such table: lists_list
We need to mount the db.sqlite3 file from the filesystem outside the container, just like we did in local dev, and we need to run migrations each time we deploy too.
Here’s how to do that in our playbook:
    - name: Ensure db.sqlite3 file exists outside container
      ansible.builtin.file:
        path: /home/elspeth/db.sqlite3
        state: touch  # (1)

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env
        mounts:  # (2)
          - type: bind
            source: /home/elspeth/db.sqlite3
            target: /src/db.sqlite3
        ports: 80:8888

    - name: Run migration inside container
      community.docker.docker_container_exec:  # (3)
        container: superlists
        command: ./manage.py migrate
1. We use file with state=touch to make sure a placeholder file exists before we try to mount it in.
2. Here is the mounts config, which works a lot like the --mount flag to docker run.
3. And we use the API for docker exec to run the migration command inside the container.
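Incidentally, the docker_container_exec pattern works for any one-off management command you might want to run on the server. For example, here’s a sketch (not part of our playbook) that would print the migration status:

    - name: Example of another one-off command inside the container (sketch)
      community.docker.docker_container_exec:
        container: superlists
        command: ./manage.py showmigrations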
Let’s give that playbook a run and…
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -v
[...]
TASK [Run migration inside container] *****************************************
changed: [staging.ottg.co.uk] => {"changed": true, "rc": 0, "stderr": "",
"stderr_lines": [], "stdout": "Operations to perform:\n  Apply all migrations:
auth, contenttypes, lists, sessions\nRunning migrations:\n  Applying
contenttypes.0001_initial... OK\n  Applying
contenttypes.0002_remove_content_type_name... OK\n  Applying
auth.0001_initial... OK\n  Applying
auth.0002_alter_permission_name_max_length... OK\n  Applying [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk  : ok=9  changed=2  unreachable=0  failed=0  skipped=0
rescued=0  ignored=0
Hooray!
$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
Found 3 test(s).
[...]
...
 ---------------------------------------------------------------------
Ran 3 tests in 13.537s

OK
A few more places to look and things to try, now that we’ve introduced Docker into the mix, should things not go according to plan. All of these should be run on the server, inside an SSH session:

- You can check the container logs using docker logs superlists.
- You can get detailed info on the container using docker inspect superlists. This is a good place to check on environment variables, port mappings, and exactly which image was running, for example.
- You can inspect the image with docker image inspect superlists. You might need this to check the exact image hash, to make sure it’s the same one you built locally.
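You could even bake a basic smoke test into the playbook itself, so that each deploy fails loudly if the site isn’t responding. Here’s one possible sketch, using the uri module as a final task (it runs on the server, hitting the port we just mapped):

    - name: Smoke-test that the site responds after deploy (sketch)
      ansible.builtin.uri:
        url: http://localhost
        status_code: 200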
You now have a live website! Tell all your friends! Tell your mum, if no one else is interested! Or, tell me! I’m always delighted to see a new reader’s site! [email protected]
In the next chapter, it’s back to coding again.
There’s no such thing as the One True Way in deployment; I’ve tried to set you off on a reasonably sane path, but there are plenty of things you could do differently, and lots, lots more to learn besides. Here are some resources I used for inspiration:
- The Twelve-Factor App by the Heroku team
- Solid Python Deployments for Everybody by Hynek Schlawack
- The deployment chapter of Two Scoops of Django by Dan Greenfeld and Audrey Roy
Here’s a brief recap of what we’ve been through; it’s a fairly typical set of steps for a deployment in general:
- Provisioning a server. This tends to be vendor-specific, so we didn’t automate it, but you absolutely can!
- Installing system dependencies. In our case this was mainly Docker, but inside the Docker image we also had some system dependencies, like Python itself.
- Getting our application code (or "artifacts") onto the server. In our case, since we’re using Docker, the thing we needed to transfer was a Docker image. We used a manual process, but typically you’d push and pull to an image registry.
- Setting environment variables and secrets. Depending on how you need to vary them, you can set environment variables on your local PC, in a Dockerfile, in your Ansible scripts, or on the server itself. Figuring out which to use in which case is a big part of deployment.
- Attaching to the database. In our case we mount a file from the local filesystem. More typically, you’d be supplying some environment variables and secrets to define a host, port, username, and password to use for accessing a database server.
- Configuring networking and port mapping. This includes DNS config, as well as Docker configuration. Web apps need to be able to talk to the outside world!
- Running database migrations. We’ll revisit this later in the book, but migrations are one of the riskiest parts of a deployment, and automating them is a key part of reducing that risk.
- Switching across to the new version of our application. In our case, we stop the old container and start a new one. In more advanced setups, you might be trying to achieve zero-downtime deploys, and looking into techniques like blue-green deployments.
Every single aspect of deployment can and probably should be automated. Here are a couple of general principles to think about when implementing infrastructure-as-code:
Idempotence
If your deployment script is deploying to existing servers, you need to design it so that it works both against a fresh installation and against a server that’s already configured.

Declarative
As much as possible, we want to specify what we want the state on the server to be, rather than how we should get there. This goes hand-in-hand with the idea of idempotence above.