Bump up to version 0.3.0 (#371)
* Update VERSION_NUMBER

* Update paddle_inference.cmake

* Delete docs directory

* release new docs

* update version number

* add vision result doc

* update version

* fix dead link

* fix vision

* fix dead link

* Update README_EN.md

* Update README_EN.md

* Update README_EN.md

* Update README_EN.md

* Update README_EN.md

* Update README_CN.md

* Update README_EN.md

* Update README_CN.md

* Update README_EN.md

* Update README_CN.md

* Update README_EN.md

* Update README_EN.md

Co-authored-by: leiqing <[email protected]>
jiangjiajun and leiqing1 authored Oct 15, 2022
1 parent bac1728 commit 3ff562a
Showing 174 changed files with 322 additions and 6,775 deletions.
10 changes: 9 additions & 1 deletion README_CN.md
@@ -28,7 +28,15 @@

## Recent Updates

- 🔥 **2022.10.15: Release FastDeploy [release/v0.3.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.3.0)** <br>
    - **New server-side deployment upgrade: faster inference performance, one-click model quantization, and more vision and NLP models**
    - Integrated the OpenVINO inference engine, with the same development experience as TensorRT, ONNX Runtime, and Paddle Inference
    - Provided a [one-click model quantization tool](tools/quantization) supporting vision models such as YOLOv7, YOLOv6, and YOLOv5, with 1.5x to 2x faster inference on CPU and GPU
    - Added vision models such as PP-OCRv3, PP-OCRv2, PP-Matting, PP-HumanMatting, and ModNet, with [end-to-end deployment demos](examples/vision)
    - Added the NLP information extraction model UIE, with an [end-to-end deployment demo](examples/text/uie)

- 🔥 **2022.8.18: Release FastDeploy [release/v0.2.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.2.0)** <br>
    - **New server-side deployment upgrade: faster inference performance and more vision models**
    - Released a high-performance inference engine SDK for x86 CPUs and NVIDIA GPUs, with a significant increase in inference speed
    - Integrated Paddle Inference, ONNX Runtime, TensorRT, and other inference engines behind a unified deployment experience
25 changes: 16 additions & 9 deletions README_EN.md
@@ -28,17 +28,24 @@ English | [简体中文](README_CN.md)
| **Face Alignment** | **3D Object Detection** | **Face Editing** | **Image Animation** |
| <img src='https://user-images.githubusercontent.com/54695910/188059460-9845e717-c30a-4252-bd80-b7f6d4cf30cb.png' height="126px" width="190px"> | <img src='https://user-images.githubusercontent.com/54695910/188270227-1a4671b3-0123-46ab-8d0f-0e4132ae8ec0.gif' height="126px" width="190px"> | <img src='https://user-images.githubusercontent.com/54695910/188054663-b0c9c037-6d12-4e90-a7e4-e9abf4cf9b97.gif' height="126px" width="126px"> | <img src='https://user-images.githubusercontent.com/54695910/188056800-2190e05e-ad1f-40ef-bf71-df24c3407b2d.gif' height="126px" width="190px"> |

## 📣 Recent Updates

- 🔥 **2022.10.15: Release FastDeploy [release v0.3.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.3.0)** <br>
    - **New server-side deployment upgrade: support for more CV and NLP models**
    - Integrated OpenVINO and provides a seamless deployment experience alongside other inference engines, including TensorRT, ONNX Runtime, and Paddle Inference
    - Supports [one-click model quantization](tools/quantization) to improve model inference speed by 1.5 to 2 times on CPU and GPU platforms; supported quantized models include YOLOv7, YOLOv6, and YOLOv5
    - New CV models include PP-OCRv3, PP-OCRv2, PP-TinyPose, and PP-Matting, with [end-to-end deployment demos](examples/vision/detection/)
    - New information extraction model UIE, with [end-to-end deployment demos](examples/text/uie)
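To put the quantization claim above in concrete terms, here is a back-of-the-envelope sketch; the 30 ms FP32 latency is a made-up illustrative number, and only the 1.5x to 2x speedup range comes from the release note.

```python
# Hypothetical FP32 latency in milliseconds; only the 1.5x-2x
# speedup range comes from the release note above.
fp32_latency_ms = 30.0

for speedup in (1.5, 2.0):
    quantized_ms = fp32_latency_ms / speedup
    print(f"{speedup}x speedup -> {quantized_ms:.1f} ms")
```

A 1.5x speedup would bring the hypothetical 30 ms latency down to 20 ms, and a 2x speedup down to 15 ms.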

- 🔥 **2022.8.18: Release FastDeploy [release v0.2.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.2.0)** <br>
    - **New server-side deployment upgrade: faster inference performance, support for more CV models**
    - Released a high-performance inference engine SDK based on x86 CPUs and NVIDIA GPUs, with a significant increase in inference speed
    - Integrated Paddle Inference, ONNX Runtime, TensorRT, and other inference engines and provides a seamless deployment experience
    - Supports the full range of object detection models such as YOLOv7, YOLOv6, YOLOv5, and PP-YOLOE, with [end-to-end deployment demos](examples/vision/detection/)
    - Supports over 40 key models and [demo examples](examples/vision/), including face detection, face recognition, real-time portrait matting, and image segmentation
    - Supports deployment in both Python and C++
    - **Supports Rockchip, Amlogic, NXP, and other NPU chips for edge device deployment**
    - Released the lightweight object detection [PicoDet-NPU deployment demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/linux/picodet_detection), providing full INT8 quantized inference capability

## Contents

@@ -71,7 +78,7 @@
- python >= 3.6
- OS: Linux x86_64/macOS/Windows 10

##### Install FastDeploy SDK with CPU & GPU Support

```bash
pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```

@@ -83,7 +90,7 @@

```bash
conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=8.2
```

##### Install FastDeploy SDK with CPU-only Support

```bash
pip install fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```
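The two install commands above are mutually exclusive wheels served from the same index. As a small sketch, the choice between them can be expressed as a helper; the package and index names are taken verbatim from the commands above, while the helper itself is only illustrative.

```python
# Index URL and wheel names taken verbatim from the install
# commands above; the selection helper is just a sketch.
INDEX_URL = "https://www.paddlepaddle.org.cn/whl/fastdeploy.html"

def fastdeploy_package(use_gpu: bool) -> str:
    """Pick the GPU or CPU-only wheel name."""
    return "fastdeploy-gpu-python" if use_gpu else "fastdeploy-python"

def pip_command(use_gpu: bool) -> str:
    """Assemble the full pip install command line."""
    return f"pip install {fastdeploy_package(use_gpu)} -f {INDEX_URL}"

print(pip_command(use_gpu=False))
# -> pip install fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```

Only one of the two wheels should be installed into a given environment.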
2 changes: 1 addition & 1 deletion VERSION_NUMBER
@@ -1 +1 @@
```diff
-0.0.0
+0.3.0
```
2 changes: 1 addition & 1 deletion cmake/paddle_inference.cmake
@@ -48,7 +48,7 @@ endif(WIN32)


```diff
 set(PADDLEINFERENCE_URL_BASE "https://bj.bcebos.com/fastdeploy/third_libs/")
-set(PADDLEINFERENCE_VERSION "2.4-dev")
+set(PADDLEINFERENCE_VERSION "2.4-dev1")
 if(WIN32)
   if (WITH_GPU)
     set(PADDLEINFERENCE_FILE "paddle_inference-win-x64-gpu-${PADDLEINFERENCE_VERSION}.zip")
```
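The hunk above only shows the Windows + GPU branch of the cmake logic; the other platform branches are truncated in this diff. A minimal sketch of how the final download URL is assembled from the base URL and the versioned archive name shown in that branch:

```python
# Values taken verbatim from the cmake hunk above.
PADDLEINFERENCE_URL_BASE = "https://bj.bcebos.com/fastdeploy/third_libs/"
PADDLEINFERENCE_VERSION = "2.4-dev1"  # the value this commit bumps to

def windows_gpu_archive(version: str) -> str:
    # Mirrors only the WIN32 + WITH_GPU branch shown in the hunk;
    # the remaining branches are not visible in this diff.
    return f"paddle_inference-win-x64-gpu-{version}.zip"

url = PADDLEINFERENCE_URL_BASE + windows_gpu_archive(PADDLEINFERENCE_VERSION)
print(url)
```

Bumping `PADDLEINFERENCE_VERSION` is thus enough to point every platform's download at the new prebuilt archive.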
3 files renamed without changes.
278 changes: 0 additions & 278 deletions docs/api/function.md

This file was deleted.
