
# Secondary containers

## More jupyter containers for separate projects

- duplicate the `/jupyter` directory and give it a new project name
- duplicate the corresponding `jupyter` service in `docker-compose.yml` and match the new project's name (see the sketch below)
- run it using the corresponding `docker compose up --build YOUR_NEW_PROJECT_NAME` command
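
A minimal sketch of the duplicated service, assuming the copy of `/jupyter` lives in `./new_project` (both service names and the port mappings here are placeholders):

```yaml
# docker-compose.yml (sketch)
services:
  jupyter:
    build: jupyter/
    ports:
      - 8888:8888

  new_project:
    build: new_project/   # the duplicated directory
    ports:
      - 8889:8888         # a different host port so both can run at once
```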

## Related containers for the same project

All of the following services can be brought up alongside the jupyter container by referencing them in the `depends_on` option of the `jupyter` service in `docker-compose.yml`. They can also be run directly with the corresponding `docker compose up --build WHATEVER_SERVICE` command.

Once live, they can be accessed as you would a local application outside docker, using the service name instead of localhost (e.g. http://elasticsearch:9200 instead of http://localhost:9200).
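For example, the `jupyter` service could declare its companions with `depends_on` so that `docker compose up --build jupyter` brings everything up together. A sketch; the `build` path and the list of companion services are assumptions based on this repo's layout:

```yaml
# docker-compose.yml (sketch)
services:
  jupyter:
    build: jupyter/       # assumed path to the jupyter Dockerfile
    ports:
      - 8888:8888
    depends_on:           # started automatically alongside jupyter
      - postgres
      - elasticsearch
```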

### APIs

A FastAPI app can be put together quickly using the following `Dockerfile` and adding the service specification to your `docker-compose.yml`.

```
.
├── docker-compose.yml
└── api
    ├── Dockerfile
    ├── README.md
    ├── app
    │   └── main.py
    └── requirements.in
```

```dockerfile
# api/Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

RUN pip install pip-tools
COPY requirements.in requirements.in
RUN pip-compile
RUN pip install -r requirements.txt

COPY ./app /app
```

```yaml
# docker-compose.yml
services:
  api:
    build: api/
    env_file: .env
    ports:
      - 80:80
```

```python
# api/app/main.py
from typing import Optional

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
```
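Once the `api` service is up, it can be called from the jupyter container by service name. A minimal sketch, assuming the `requests` library is installed there:

```python
import requests

# "api" resolves to the service name on the compose network;
# inside the network the container port (80) is used, not a host mapping
response = requests.get("http://api:80/items/42", params={"q": "test"})
print(response.json())  # {"item_id": 42, "q": "test"}
```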

### Databases

It's often useful to experiment with a local database before relying on cloud resources. Configuration for a few common data stores is included here.

#### Postgres

Adding the following service to your `docker-compose.yml` will make a postgres instance available on port 5432.

```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres
    restart: always
    env_file: .env
    ports:
      - 5432:5432
    volumes:
      - type: bind
        source: ./data/postgres
        target: /var/lib/postgresql/data/
```

with the following directories created to store (and persist) the data

```
.
├── docker-compose.yml
└── data
    └── postgres
```

and the following in a root-level `.env` file

```
# .env
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=database
```
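A minimal sketch of connecting from the jupyter container, assuming `psycopg2-binary` is installed there and the `.env` values above are loaded into the environment:

```python
import os

import psycopg2

# "postgres" is the compose service name, used in place of localhost
conn = psycopg2.connect(
    host="postgres",
    port=5432,
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
    dbname=os.environ["POSTGRES_DB"],
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```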

#### Elasticsearch

Adding the following service to your `docker-compose.yml` will make an elasticsearch instance available on port 9200. The `kibana` service included here is optional, and serves its UI on port 5601.

```yaml
# docker-compose.yml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - type: bind
        source: ./data/elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - 9200:9200
    env_file: .env
    environment:
      discovery.type: single-node
      ES_JAVA_OPTS: -Xms3g -Xmx3g

  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
```

with the following directories created to store (and persist) the data

```
.
├── docker-compose.yml
└── data
    └── elasticsearch
```

and the following in a root-level `.env` file

```
# .env
ELASTIC_HOST=http://elasticsearch:9200
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD=changeme
```
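A minimal sketch of connecting from the jupyter container, assuming the `elasticsearch` Python client (7.x, matching the image above) is installed there. Note that this single-node image does not enable security by default, in which case the credentials are simply ignored:

```python
import os

from elasticsearch import Elasticsearch

# ELASTIC_HOST resolves to http://elasticsearch:9200 on the compose network
es = Elasticsearch(
    os.environ["ELASTIC_HOST"],
    http_auth=(os.environ["ELASTIC_USERNAME"], os.environ["ELASTIC_PASSWORD"]),
)

# index a document, then read it back
es.index(index="test-index", id=1, body={"hello": "world"})
print(es.get(index="test-index", id=1)["_source"])
```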

#### Neo4j

Adding the following service to your `docker-compose.yml` will make a neo4j instance available on port 7474 (with bolt on 7687), with the APOC and Graph Data Science plugins installed.

```yaml
# docker-compose.yml
services:
  neo4j:
    image: neo4j:latest
    volumes:
      - type: bind
        source: ./data/neo4j/data
        target: /data
      - type: bind
        source: ./data/neo4j/logs
        target: /logs
    ports:
      - 7474:7474
      - 7687:7687
    env_file: .env
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4JLABS_PLUGINS=["graph-data-science", "apoc"]
      - NEO4J_dbms_security_procedures_whitelist=gds.*, apoc.*
      - NEO4J_dbms_security_procedures_unrestricted=gds.*, apoc.*
```

with the following directories created to store (and persist) the data

```
.
├── docker-compose.yml
└── data
    └── neo4j
        ├── data
        └── logs
```

and the following in a root-level `.env` file

```
# .env
NEO4J_HOST=http://neo4j:7474
NEO4J_AUTH=neo4j/password
```
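A minimal sketch of connecting from the jupyter container, assuming the official `neo4j` Python driver is installed there and the `NEO4J_AUTH` credentials above are in use. Querying `gds.version()` also confirms that the Graph Data Science plugin loaded:

```python
from neo4j import GraphDatabase

# bolt traffic goes to port 7687; 7474 serves the browser UI
driver = GraphDatabase.driver("bolt://neo4j:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run("RETURN gds.version() AS version")
    print(result.single()["version"])

driver.close()
```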