Commit

update en docs
Zheng-Bicheng committed May 16, 2024
1 parent 1220d6a commit 5ec9257
Showing 3 changed files with 62 additions and 85 deletions.
26 changes: 12 additions & 14 deletions README.md
@@ -4,21 +4,24 @@

# 1 Introduction to Paddle2ONNX

Paddle2ONNX supports converting the **PaddlePaddle** model format to the **ONNX** model format. Through ONNX, Paddle models can be deployed to a variety of inference engines, including TensorRT/OpenVINO/MNN/TNN/NCNN, as well as other inference engines or hardware that support the open-source ONNX format.

# 2 Paddle2ONNX Environment Dependencies

Paddle2ONNX itself does not depend on other components, but we recommend using Paddle2ONNX in the following environment:

- PaddlePaddle == 2.6.0
- onnxruntime >= 1.10.0

# 3 Installing Paddle2ONNX

If you only want to install Paddle2ONNX and do not need to do secondary development, you can install it quickly with the following command:

```bash
pip install paddle2onnx
```
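If you want to confirm from Python that the installation succeeded, a minimal, generic check can be used (this helper is illustrative and not part of Paddle2ONNX):

```python
from importlib import util


def is_installed(package: str) -> bool:
    """Return True if `package` is importable in the current environment."""
    return util.find_spec(package) is not None


# After `pip install paddle2onnx`, this should report True:
print(is_installed("paddle2onnx"))
```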

If you wish to do secondary development on Paddle2ONNX, please compile it following the [GitHub source installation guide](docs/zh/compile_local.md).

# 4 Quick Start Tutorial

@@ -28,18 +31,16 @@

## 4.1 Get the PaddlePaddle Deployment Model

When exporting a model, Paddle2ONNX requires the deployment model format, which consists of two files:

- `model_name.pdmodel`: the model structure
- `model_name.pdiparams`: the model parameters
[Note] The suffix of the parameter file must be `.pdiparams`. If your parameter file's suffix is `.pdparams`, the parameters were saved during training and are not yet in the deployment model format. For exporting a deployment model, refer to each model suite's export documentation.
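The check described in the note above can be sketched in a few lines of Python (a minimal illustration; the helper name and error handling are our own, not part of Paddle2ONNX):

```python
from pathlib import Path


def find_deployment_model(model_dir: str):
    """Return (pdmodel, pdiparams) paths if model_dir holds a deployment-format
    model; raise if only training-format (.pdparams) checkpoints are present."""
    d = Path(model_dir)
    pdmodels = sorted(d.glob("*.pdmodel"))
    pdiparams = sorted(d.glob("*.pdiparams"))
    if pdmodels and pdiparams:
        return pdmodels[0], pdiparams[0]
    if sorted(d.glob("*.pdparams")):
        raise ValueError(
            "Found .pdparams files: these are training checkpoints, not the "
            "deployment format; re-export the model first."
        )
    raise FileNotFoundError(f"No .pdmodel/.pdiparams pair under {model_dir}")
```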

## 4.2 Adjusting Paddle Models

If you need to adjust the inputs and outputs of a Paddle model, see the tutorials in [Paddle related tools](./tools/paddle/README.md).

## 4.3 Using the Command Line to Convert PaddlePaddle Models

You can convert a Paddle model to an ONNX model with the following command:

```bash
paddle2onnx --model_dir saved_inference_model \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file model.onnx
```

@@ -67,7 +68,7 @@

## 4.4 Pruning ONNX

If you need to adjust the ONNX model, refer to [ONNX related tools](./tools/onnx/README.md).

## 4.5 Optimizing ONNX

@@ -79,7 +80,7 @@

```bash
python -m paddle2onnx.optimize --input_model model.onnx --output_model new_model.onnx
```

# 5 Code Contribution

A thriving ecosystem requires everyone's collaborative efforts. Developers can refer to the [Paddle2ONNX Contribution Guide](./docs/zh/Paddle2ONNX_Development_Guide.md) to contribute code to Paddle2ONNX.

# 6 License

@@ -88,7 +89,4 @@

Provided under the [Apache-2.0 license](https://github.com/PaddlePaddle/paddle-onnx/blob/develop/LICENSE).
# 7 Donations

* Thanks to the PaddlePaddle team for providing server support for Paddle2ONNX's CI infrastructure.
* Thanks to community users [chenwhql](https://github.com/chenwhql), [luotao1](https://github.com/luotao1), [goocody](https://github.com/goocody), [jeff41404](https://github.com/jeff41404), [jzhang553](https://github.com/jzhang533), [ZhengBicheng](https://github.com/ZhengBicheng) for donating a total of 10,000 RMB to the Paddle2ONNX PMC on March 28, 2024, for the development of Paddle2ONNX.
121 changes: 50 additions & 71 deletions README_en.md
@@ -2,106 +2,85 @@

[简体中文](README.md) | English

# 1 Introduction
Paddle2ONNX supports the conversion of PaddlePaddle model format to ONNX model format. Through ONNX, it is possible to deploy Paddle models to various inference engines, including TensorRT/OpenVINO/MNN/TNN/NCNN, as well as other inference engines or hardware that support the ONNX open-source format.

# 2 Paddle2ONNX Environment Dependencies

Paddle2ONNX itself does not depend on other components, but we recommend using Paddle2ONNX in the following environment:
- PaddlePaddle == 2.6.0
- onnxruntime >= 1.10.0

# 3 Install Paddle2ONNX
If you only want to install Paddle2ONNX and do not need to do secondary development, you can install it quickly with the following command:

```bash
pip install paddle2onnx
```

If you wish to conduct secondary development on Paddle2ONNX, please follow the [GitHub source code installation method](docs/en/compile_local.md) to compile Paddle2ONNX.

# 4 Quick Start Tutorial

## 4.1 Get the PaddlePaddle Deployment Model

When exporting a model, Paddle2ONNX requires the deployment model format, which consists of two files:

- `model_name.pdmodel`: the model structure
- `model_name.pdiparams`: the model parameters

[Note] The suffix of the parameter file must be `.pdiparams`. If your parameter file's suffix is `.pdparams`, the parameters were saved during training and are not yet in the deployment model format. For exporting a deployment model, refer to each model suite's export documentation.

## 4.2 Adjusting Paddle Models

If you need to adjust the inputs and outputs of a Paddle model, see the tutorials in [Paddle related tools](./tools/paddle/README.md).

## 4.3 Using the Command Line to Convert PaddlePaddle Models

You can convert a Paddle model to an ONNX model with the following command:

```bash
paddle2onnx --model_dir saved_inference_model \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file model.onnx
```

The adjustable conversion parameters are listed in the following table:

| Parameter | Parameter Description |
|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| --model_dir | Configure directory path containing Paddle models |
| --model_filename | **[Optional]** Configure the filename to store the network structure under `--model_dir` |
| --params_filename | **[Optional]** Configure the name of the file to store model parameters under `--model_dir` |
| --save_file | Specify the converted model save directory path |
| --opset_version | **[Optional]** Configure the OpSet version converted to ONNX, currently supports multiple versions such as 7~16, the default is 9 |
| --enable_onnx_checker | **[Optional]** Configure whether to check the correctness of the exported ONNX model, it is recommended to turn on this switch, the default is False |
| --enable_auto_update_opset | **[Optional]** Whether to enable the opset version automatic upgrade function, when the lower version of the opset cannot be converted, automatically select the higher version of the opset for conversion, the default is True |
| --deploy_backend | **[Optional]** Inference engine for quantitative model deployment, supports onnxruntime, tensorrt or others, when other is selected, all quantization information is stored in the max_range.txt file, the default is onnxruntime |
| --save_calibration_file | **[Optional]** TensorRT 8.X version deploys the cache file that needs to be read to save the path of the quantitative model, the default is calibration.cache |
| --version | **[Optional]** View paddle2onnx version |
| --external_filename | **[Optional]** When the exported ONNX model is larger than 2G, you need to set the storage path of external data, the recommended setting is: external_data |
| --export_fp16_model | **[Optional]** Whether to convert the exported ONNX model to FP16 format, and use ONNXRuntime-GPU to accelerate inference, the default is False |
| --custom_ops               | **[Optional]** Export Paddle OP as ONNX's Custom OP, for example: --custom_ops '{"paddle_op":"onnx_op"}', the default is {}                                                                                                       |
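To illustrate how these options compose into an invocation, here is a hypothetical wrapper (the function, its defaults, and the chosen subset of flags are illustrative assumptions; only the flag names come from the table above):

```python
import shlex


def build_paddle2onnx_cmd(model_dir, save_file,
                          model_filename="model.pdmodel",
                          params_filename="model.pdiparams",
                          opset_version=None,
                          export_fp16_model=False):
    """Compose a paddle2onnx command line from the CLI options documented above."""
    cmd = [
        "paddle2onnx",
        "--model_dir", model_dir,
        "--model_filename", model_filename,
        "--params_filename", params_filename,
        "--save_file", save_file,
    ]
    if opset_version is not None:
        cmd += ["--opset_version", str(opset_version)]
    if export_fp16_model:
        cmd += ["--export_fp16_model", "True"]
    return cmd


# Pass the list to subprocess.run(cmd), or print it for inspection:
print(shlex.join(build_paddle2onnx_cmd("saved_inference_model", "model.onnx",
                                       opset_version=11)))
```

Using a list of arguments (rather than one shell string) avoids quoting issues when paths contain spaces.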

## 4.4 Pruning ONNX

If you need to adjust ONNX models, please refer to [ONNX related tools](./tools/onnx/README.md)

## 4.5 Optimize ONNX

If you want to optimize the exported ONNX model, we recommend `onnx-simplifier`; you can also optimize it with `python -m paddle2onnx.optimize --input_model model.onnx --output_model new_model.onnx`.

# 5 Code Contribution

A thriving ecosystem requires everyone's collaborative efforts. Developers can refer to the [Paddle2ONNX Contribution Guide](./docs/zh/Paddle2ONNX_Development_Guide.md) to contribute code to Paddle2ONNX.

# 6 License

Provided under the [Apache-2.0 license](https://github.com/PaddlePaddle/paddle-onnx/blob/develop/LICENSE).

# 7 Donations

* Thanks to the PaddlePaddle team for providing server support for the CI infrastructure of Paddle2ONNX.
* Thanks to community users [chenwhql](https://github.com/chenwhql), [luotao1](https://github.com/luotao1), [goocody](https://github.com/goocody), [jeff41404](https://github.com/jeff41404), [jzhang553](https://github.com/jzhang533), [ZhengBicheng](https://github.com/ZhengBicheng) for donating a total of 10,000 RMB to the Paddle2ONNX PMC on March 28, 2024, for the development of Paddle2ONNX.
File renamed without changes.
