diff --git a/packages/diffusion/sd-next/Dockerfile b/packages/diffusion/sd-next/Dockerfile
new file mode 100644
index 000000000..9dee7e27b
--- /dev/null
+++ b/packages/diffusion/sd-next/Dockerfile
@@ -0,0 +1,40 @@
+#---
+# name: sd-next
+# group: diffusion
+# depends: [python, pycuda, protobuf:apt, numba, numpy, tensorflow2, opencv, pytorch, torchvision, transformers, xformers, huggingface_hub]
+# requires: '>=34.1.0'
+# docs: docs.md
+# notes: disabled on JetPack 4
+#---
+ARG BASE_IMAGE
+FROM ${BASE_IMAGE}
+
+ARG SD_NEXT_REPO=vladmandic/automatic
+ARG SD_NEXT_VERSION=master
+
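+# fetching the branch ref from the GitHub API invalidates the build cache
+# whenever upstream moves, so the clone below picks up the latest commit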
+ADD https://api.github.com/repos/${SD_NEXT_REPO}/git/refs/heads/${SD_NEXT_VERSION} /tmp/sd_next_version.json
+
+RUN cd /opt && \
+ git clone --branch ${SD_NEXT_VERSION} --depth=1 https://github.com/${SD_NEXT_REPO} && \
+ cd automatic && \
+    sed -i -e 's|^huggingface_hub.*||' \
+           -e 's|^transformers.*||' \
+           -e 's|^protobuf.*||' \
+           -e 's|^numba.*||' \
+           -e 's|^numpy.*||' requirements.txt && \
+ cat requirements.txt && \
+ TENSORFLOW_PACKAGE=https://nvidia.box.com/shared/static/wp43cd8e0lgen2wdqic3irdwagpgn0iz.whl python3 ./launch.py --skip-torch --use-cuda --reinstall --test
+
+# reinstall OpenCV to fix: "partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline'"
+RUN cd /opt && ./opencv_install.sh
+
+# set the cache dir for models
+ENV DIFFUSERS_CACHE=/data/models/diffusers
+
+COPY docker-entrypoint.sh /usr/local/bin
+
+WORKDIR /opt/automatic
+
+ENTRYPOINT ["docker-entrypoint.sh"]
diff --git a/packages/diffusion/sd-next/README.md b/packages/diffusion/sd-next/README.md
new file mode 100644
index 000000000..fee452477
--- /dev/null
+++ b/packages/diffusion/sd-next/README.md
@@ -0,0 +1,140 @@
+# sd-next
+
+> [`CONTAINERS`](#user-content-containers) [`IMAGES`](#user-content-images) [`RUN`](#user-content-run) [`BUILD`](#user-content-build)
+
+
+
+
+* SD.Next from https://github.com/vladmandic/automatic (found under `/opt/automatic`)
+* with TensorRT extension from https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt
+* see the tutorial at the [**Jetson Generative AI Lab**](https://nvidia-ai-iot.github.io/jetson-generative-ai-playground/tutorial_diffusion.html)
+
+This container has a default run command that will automatically start the webserver like this:
+
+```bash
+cd /opt/automatic && python3 launch.py \
+    --data=/data/models/stable-diffusion \
+    --skip-all \
+    --use-xformers \
+    --use-cuda \
+    --listen \
+    --port=7860
+```
+
+After starting the container, you can navigate your browser to `http://$IP_ADDRESS:7860` (substitute the address or hostname of your device). The server will automatically download the default model ([`stable-diffusion-1.5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) during startup.
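+
+To check that the server came up, something like this should work from another terminal (again substituting your device's address for `$IP_ADDRESS`):
+
+```bash
+curl -sf http://$IP_ADDRESS:7860 > /dev/null && echo "webserver is up"
+```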
+
+Other configuration arguments can be found at [AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings)
+
+* `--medvram` (sacrifice some performance for low VRAM usage)
+* `--lowvram` (sacrifice a lot of speed for very low VRAM usage)
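+
+The entrypoint forwards any extra arguments on to `launch.py`, so these flags can simply be appended to the run command, for example:
+
+```bash
+sudo docker run --runtime nvidia -it --rm --network=host dustynv/sd-next:r35.2.1 --medvram
+```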
+
+See the [`stable-diffusion`](/packages/diffusion/stable-diffusion) container to run image generation from a script (`txt2img.py`) as opposed to the web UI.
+
+### Tips & Tricks
+
+Negative prompts: https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/7857
+
+Stable Diffusion XL (see the example sketch after the links below)
+ * https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl
+ * https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
+ * https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
+ * https://stable-diffusion-art.com/sdxl-model/
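+
+Since the container sets `DIFFUSERS_CACHE=/data/models/diffusers`, SDXL can also be tried from a short `diffusers` script run inside the container. Below is a minimal sketch, assuming the installed `diffusers` version includes the SDXL pipeline (the base weights are downloaded on first run):
+
+```bash
+python3 - <<'EOF'
+import torch
+from diffusers import StableDiffusionXLPipeline
+
+# load the SDXL base model in fp16 to reduce memory use
+pipe = StableDiffusionXLPipeline.from_pretrained(
+    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
+).to('cuda')
+
+image = pipe('a photo of an astronaut riding a horse on mars').images[0]
+image.save('astronaut.png')
+EOF
+```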
+
+CONTAINERS
+
+
+| **`stable-diffusion-webui`** | |
+| :-- | :-- |
+| Builds | [![`stable-diffusion-webui_jp51`](https://img.shields.io/github/actions/workflow/status/dusty-nv/jetson-containers/stable-diffusion-webui_jp51.yml?label=stable-diffusion-webui:jp51)](https://github.com/dusty-nv/jetson-containers/actions/workflows/stable-diffusion-webui_jp51.yml) [![`stable-diffusion-webui_jp60`](https://img.shields.io/github/actions/workflow/status/dusty-nv/jetson-containers/stable-diffusion-webui_jp60.yml?label=stable-diffusion-webui:jp60)](https://github.com/dusty-nv/jetson-containers/actions/workflows/stable-diffusion-webui_jp60.yml) |
+| Requires | `L4T >=34.1.0` |
+| Dependencies | [`build-essential`](/packages/build-essential) [`cuda`](/packages/cuda/cuda) [`cudnn`](/packages/cuda/cudnn) [`python`](/packages/python) [`tensorrt`](/packages/tensorrt) [`numpy`](/packages/numpy) [`cmake`](/packages/cmake/cmake_pip) [`onnx`](/packages/onnx) [`pytorch`](/packages/pytorch) [`torchvision`](/packages/pytorch/torchvision) [`huggingface_hub`](/packages/llm/huggingface_hub) [`rust`](/packages/rust) [`transformers`](/packages/llm/transformers) [`xformers`](/packages/llm/xformers) [`pycuda`](/packages/cuda/pycuda) [`opencv`](/packages/opencv) |
+| Dependants | [`l4t-diffusion`](/packages/l4t/l4t-diffusion) |
+| Dockerfile | [`Dockerfile`](Dockerfile) |
+| Images | [`dustynv/stable-diffusion-webui:r35.2.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) `(2023-12-05, 7.1GB)`<br/>[`dustynv/stable-diffusion-webui:r35.3.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) `(2023-11-05, 7.1GB)`<br/>[`dustynv/stable-diffusion-webui:r35.4.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) `(2023-11-05, 7.1GB)`<br/>[`dustynv/stable-diffusion-webui:r36.2.0`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) `(2023-12-05, 8.8GB)` |
+| Notes | disabled on JetPack 4 |
+
+
+
+
+CONTAINER IMAGES
+
+
+| Repository/Tag | Date | Arch | Size |
+| :-- | :--: | :--: | :--: |
+| [`dustynv/stable-diffusion-webui:r35.2.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) | `2023-12-05` | `arm64` | `7.1GB` |
+| [`dustynv/stable-diffusion-webui:r35.3.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) | `2023-11-05` | `arm64` | `7.1GB` |
+| [`dustynv/stable-diffusion-webui:r35.4.1`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) | `2023-11-05` | `arm64` | `7.1GB` |
+| [`dustynv/stable-diffusion-webui:r36.2.0`](https://hub.docker.com/r/dustynv/stable-diffusion-webui/tags) | `2023-12-05` | `arm64` | `8.8GB` |
+
+> Container images are compatible with other minor versions of JetPack/L4T:
+> • L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
+> • L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
+
+
+
+RUN CONTAINER
+
+
+To start the container, you can use the [`run.sh`](/docs/run.md)/[`autotag`](/docs/run.md#autotag) helpers or manually put together a [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command:
+```bash
+# automatically pull or build a compatible container image
+./run.sh $(./autotag sd-next)
+
+# or explicitly specify one of the container images above
+./run.sh dustynv/sd-next:r35.2.1
+
+# or if using 'docker run' (specify image and mounts/etc)
+sudo docker run --runtime nvidia -it --rm --network=host dustynv/sd-next:r35.2.1
+```
+> [`run.sh`](/docs/run.md) forwards arguments to [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices)
+> [`autotag`](/docs/run.md#autotag) finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
+
+To mount your own directories into the container, use the [`-v`](https://docs.docker.com/engine/reference/commandline/run/#volume) or [`--volume`](https://docs.docker.com/engine/reference/commandline/run/#volume) flags:
+```bash
+./run.sh -v /path/on/host:/path/in/container $(./autotag sd-next)
+```
+To launch the container running a command, as opposed to an interactive shell:
+```bash
+./run.sh $(./autotag sd-next) my_app --abc xyz
+```
+You can pass any options to [`run.sh`](/docs/run.md) that you would to [`docker run`](https://docs.docker.com/engine/reference/commandline/run/), and it'll print out the full command that it constructs before executing it.
+
+
+BUILD CONTAINER
+
+
+If you use [`autotag`](/docs/run.md#autotag) as shown above, it'll ask to build the container for you if needed. To manually build it, first do the [system setup](/docs/setup.md), then run:
+```bash
+./build.sh sd-next
+```
+The dependencies from above will be built into the container, and it'll be tested during the build. See [`./build.sh --help`](/jetson_containers/build.py) for build options.
+
diff --git a/packages/diffusion/sd-next/docker-entrypoint.sh b/packages/diffusion/sd-next/docker-entrypoint.sh
new file mode 100755
index 000000000..c7f920d9e
--- /dev/null
+++ b/packages/diffusion/sd-next/docker-entrypoint.sh
@@ -0,0 +1,16 @@
+#!/usr/bin/env bash
+
+set -e
+
+cd /opt/automatic
+
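+# use the HuggingFace 'accelerate' launcher when ACCELERATE=True is set in the
+# environment and the accelerate CLI is available; otherwise run launch.py directly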
+if [[ "${ACCELERATE}" == "True" ]] && [ -x "$(command -v accelerate)" ]
+then
+ echo "Launching accelerate launch.py..."
+ exec accelerate launch --num_cpu_threads_per_process=6 launch.py --data=/data/models/stable-diffusion --skip-all --use-xformers --use-cuda --listen --port=7860 "$@"
+else
+ echo "Launching launch.py..."
+ exec python3 launch.py --data=/data/models/stable-diffusion --skip-all --use-xformers --use-cuda --listen --port=7860 "$@"
+fi
diff --git a/packages/diffusion/sd-next/docs.md b/packages/diffusion/sd-next/docs.md
new file mode 100644
index 000000000..c76bbc383
--- /dev/null
+++ b/packages/diffusion/sd-next/docs.md
@@ -0,0 +1,66 @@
+
+
+
+* SD.Next from https://github.com/vladmandic/automatic (found under `/opt/automatic`)
+* with TensorRT extension from https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt
+* see the tutorial at the [**Jetson Generative AI Lab**](https://nvidia-ai-iot.github.io/jetson-generative-ai-playground/tutorial_diffusion.html)
+
+This container has a default run command that will automatically start the webserver like this:
+
+```bash
+cd /opt/automatic && python3 launch.py \
+    --data=/data/models/stable-diffusion \
+    --skip-all \
+    --use-xformers \
+    --use-cuda \
+    --listen \
+    --port=7860
+```
+
+After starting the container, you can navigate your browser to `http://$IP_ADDRESS:7860` (substitute the address or hostname of your device). The server will automatically download the default model ([`stable-diffusion-1.5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) during startup.
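+
+To check that the server came up, something like this should work from another terminal (again substituting your device's address for `$IP_ADDRESS`):
+
+```bash
+curl -sf http://$IP_ADDRESS:7860 > /dev/null && echo "webserver is up"
+```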
+
+Other configuration arguments can be found at [AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings)
+
+* `--medvram` (sacrifice some performance for low VRAM usage)
+* `--lowvram` (sacrifice a lot of speed for very low VRAM usage)
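+
+The entrypoint forwards any extra arguments on to `launch.py`, so these flags can simply be appended to the run command, for example:
+
+```bash
+sudo docker run --runtime nvidia -it --rm --network=host dustynv/sd-next:r35.2.1 --medvram
+```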
+
+See the [`stable-diffusion`](/packages/diffusion/stable-diffusion) container to run image generation from a script (`txt2img.py`) as opposed to the web UI.
+
+### Tips & Tricks
+
+Negative prompts: https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/7857
+
+Stable Diffusion XL (see the example sketch after the links below)
+ * https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl
+ * https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
+ * https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
+ * https://stable-diffusion-art.com/sdxl-model/
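+
+Since the container sets `DIFFUSERS_CACHE=/data/models/diffusers`, SDXL can also be tried from a short `diffusers` script run inside the container. Below is a minimal sketch, assuming the installed `diffusers` version includes the SDXL pipeline (the base weights are downloaded on first run):
+
+```bash
+python3 - <<'EOF'
+import torch
+from diffusers import StableDiffusionXLPipeline
+
+# load the SDXL base model in fp16 to reduce memory use
+pipe = StableDiffusionXLPipeline.from_pretrained(
+    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
+).to('cuda')
+
+image = pipe('a photo of an astronaut riding a horse on mars').images[0]
+image.save('astronaut.png')
+EOF
+```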