1. Environment
2. GitHub version
3. Compile method: built with CMake using the default full CMake arguments
5. Describe the bug: In a text-classification task, I switched the inference backend from onnxruntime to TNN, hoping to reduce memory usage. After the switch, however, memory usage went up instead of down. The model is about 3 MB. With onnxruntime, memory usage averaged 22 MB with a peak of 28 MB; after switching to TNN, memory usage stays steady at about 30 MB with almost no fluctuation. What could be causing this? Any suggestions would be appreciated.
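One way to make the comparison more rigorous is to measure memory growth around the inference calls themselves rather than eyeballing a process monitor. Below is a minimal, hedged sketch using only the Python standard library; `run_inference` is a hypothetical stand-in for the actual onnxruntime or TNN invocation, not an API from either project.

```python
# Sketch: sample this process's peak resident set size (RSS) around a
# workload, to compare memory footprints of two inference backends.
# `run_inference` is a hypothetical placeholder for the real backend call.
import resource
import sys

def peak_rss_kb() -> int:
    """Peak RSS of the current process, normalized to kilobytes.

    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss // 1024 if sys.platform == "darwin" else rss

def measure_growth(run_inference, iterations: int = 100) -> int:
    """Return the increase in peak RSS (kB) after running the workload."""
    before = peak_rss_kb()
    for _ in range(iterations):
        run_inference()
    return peak_rss_kb() - before

if __name__ == "__main__":
    # Dummy workload standing in for a real model invocation.
    buffers = []
    growth = measure_growth(lambda: buffers.append(bytearray(64 * 1024)))
    print(f"peak RSS growth: {growth} kB")
```

Note that peak RSS only ever increases, so a steady reading with no fluctuation is consistent with a backend that preallocates its working memory (e.g. a blob/arena pool) at initialization rather than allocating per inference; that is one plausible, though unconfirmed, explanation for the flat 30 MB figure observed with TNN.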