OneFlow Serving


Currently, we provide a OneFlow backend for the Triton Inference Server that enables model serving.

Triton Inference Server OneFlow Backend

Get Started

Here is a tutorial on how to export a model and deploy it. You can also follow the instructions below to get started.

  1. Download and save the model
cd examples/resnet50/
python3 export_model.py
  2. Launch the Triton server
cd ../../  # back to the root of the serving repo
docker run --rm --runtime=nvidia --network=host -v$(pwd)/examples:/models \
  oneflowinc/oneflow-serving
curl -v localhost:8000/v2/health/ready  # readiness check
  3. Send an image and predict (a minimal client sketch is shown after this list)
pip3 install tritonclient[all]
cd examples/resnet50/
curl -o cat.jpg https://images.pexels.com/photos/156934/pexels-photo-156934.jpeg
python3 client.py --image cat.jpg
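
For reference, here is a minimal sketch of what a Triton HTTP client for this model might look like. The model name (resnet50), the input/output tensor names (INPUT_0, OUTPUT_0), and the preprocessing are assumptions for illustration; check the exported model's config.pbtxt and the provided client.py for the actual values.

# Minimal Triton HTTP client sketch; model name and tensor names are assumptions,
# the real values come from the exported model's config.pbtxt.
import numpy as np
from PIL import Image
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Load and preprocess the image into an NCHW float32 batch (preprocessing is illustrative).
image = Image.open("cat.jpg").convert("RGB").resize((224, 224))
batch = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0

inputs = [httpclient.InferInput("INPUT_0", list(batch.shape), "FP32")]  # assumed input name
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("OUTPUT_0")]                 # assumed output name

result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
print("predicted class:", result.as_numpy("OUTPUT_0").argmax())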

Documentation

Known Issues

Multiple model instance execution

The current version of OneFlow does not support concurrent execution of multiple model instances. You can launch multiple containers (which is easy to do with Kubernetes) to work around this limitation.
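
As an illustration, one simple way to scale out without Kubernetes is to run several serving containers whose Triton HTTP endpoints are exposed on different host ports and spread requests across them. The sketch below assumes two containers reachable on ports 8000 and 9000 and reuses the assumed model and tensor names from the client sketch above.

# Round-robin requests over multiple single-instance containers
# (endpoints, model name, and tensor names are assumptions).
import itertools
import tritonclient.http as httpclient

endpoints = ["localhost:8000", "localhost:9000"]  # one oneflow-serving container per endpoint
clients = itertools.cycle([httpclient.InferenceServerClient(url=u) for u in endpoints])

def infer(batch):
    # Each call goes to the next container in the rotation, so requests
    # execute concurrently across containers rather than within one.
    client = next(clients)
    inp = httpclient.InferInput("INPUT_0", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    return client.infer(model_name="resnet50", inputs=[inp]).as_numpy("OUTPUT_0")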