Comments (10)
I ran into the same problem with the pytorch -> onnx -> tensorrt pipeline described above. I used onnx-simplifier, which helped, but then found a new problem: the TensorRT engine produces different outputs for different batch sizes. Instead of using the simplifier, fix the interpolation calls in
Pytorch_Retinaface/models/net.py, line 89 and line 93 (commit b984b4b):
F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode="nearest")
=> F.interpolate(output3, size=(int(output2.size(2)), int(output2.size(3))), mode="nearest")
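A minimal sketch of why the cast matters (the tensor names and shapes below are illustrative, not the repo's actual FPN code): wrapping `.size()` in `int()` turns the target size into constant Python ints, so the ONNX exporter records a fixed-size Upsample instead of a dynamic shape subgraph that TensorRT's parser can reject.

```python
import torch
import torch.nn.functional as F

# Stand-in feature maps for the FPN merge step (shapes are illustrative).
output2 = torch.randn(1, 64, 20, 20)   # higher-resolution feature map
output3 = torch.randn(1, 64, 10, 10)   # lower-resolution feature map

# int(...) makes the target size a constant at trace/export time,
# avoiding a dynamic Resize subgraph in the exported ONNX graph.
up3 = F.interpolate(
    output3,
    size=(int(output2.size(2)), int(output2.size(3))),
    mode="nearest",
)
print(tuple(up3.shape))  # (1, 64, 20, 20)
```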
from pytorch_retinaface.
Can you export to ONNX?
It's easy. I'll provide it later.
@biubug6 If you could do it, that would be amazing!
@biubug6 Do you have a timetable for the ONNX model, please?
@SnowRipple @wangpupanjing So sorry to have kept you waiting. I now provide the script "convert_to_onnx.py" to export to ONNX.
I think I had the same error. The problem was with onnx, not pytorch; I just updated the onnx package (PyTorch can bundle a different onnx version) ;)
Yes, but it was a while ago so I don't remember the specifics. It is a known problem with onnx, and there was a simple fix; try googling it ;)
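A quick diagnostic for the version-mismatch issue mentioned above: print the installed torch and onnx versions before searching for the exact error message.

```python
import importlib.metadata

# Report the installed versions of the two packages involved; an onnx
# package older than the opset PyTorch emits is a common culprit.
for pkg in ("torch", "onnx"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "not installed")
```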
Can you tell me how to load the .onnx model with TensorRT? Thank you very much.
I load the model like this:
std::string dataDirs = "E:/gc/Pytorch_Retinaface-master";
std::vector<std::string> dir;
dir.push_back(dataDirs);
auto parsed = parser->parseFromFile(
    locateFile("FaceDetector.onnx", dir).c_str(),
    static_cast<int>(gLogger.getReportableSeverity()));
if (!parsed)
{
    return false;
}
but I get these errors:
[01/07/2020-19:19:10] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[... the same INT64 warning is repeated six more times ...]
While parsing node number 106 [Upsample]:
ERROR: builtin_op_importers.cpp:3240 In function importUpsample:
[8] Assertion failed: scales_input.is_weights()
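One common workaround for the "Assertion failed: scales_input.is_weights()" error above (a setup sketch, assuming the model file from the thread): run onnx-simplifier on the exported model, which folds the dynamic shape computation feeding the Upsample/Resize node into constant weights before handing the model to TensorRT. Alternatively, apply the int() interpolation fix from the first comment and re-export.

```shell
# Install onnx-simplifier, then simplify the exported model so that the
# Resize/Upsample scales become constant weights TensorRT can parse.
pip install onnx-simplifier
python -m onnxsim FaceDetector.onnx FaceDetector_sim.onnx
```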
I think torch2trt is better than going pytorch -> onnx -> tensorrt
[01/07/2020-19:19:11] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 106 [Upsample]:
ERROR: builtin_op_importers.cpp:3240 In function importUpsample:
[8] Assertion failed: scales_input.is_weights()
I hit the same error; how can I solve it?
Related Issues (20)
- Is it ok if we upload your models to Zenodo and distribute them?
- Fine-tuning Resnet 50 model
- Unable to find a compatible Visual Studio installation
- How to fit non-squared input?
- The form of bboxes is wrong!!!
- Mesh decoder HOT 1
- About the ratio in Hard Example Mining HOT 3
- Why can't we evaluate during the training? HOT 1
- Pretrained Model HOT 1
- [Refactor] Accelerate training based on MMEngine :rocket:
- How to train with custom dataset by using the pretrained model?
- Dataset
- Evaluation fails: mAP is 0 on all three WiderFace subsets HOT 1
- What maximum FPS have you achieved?
- Why does the forward pass time become shorter with iterations?
- Why loop 100 times while testing begin in detect.py?
- Model trained on the CelebA dataset: when testing with a webcam, the boxes on small faces become large and inaccurate HOT 1
- How can I train using pth pretrained file? (For transfer learning)
- Why is the loss so high when training directly from the pretrained weights?
- C++ and TensorRT implementation of yolov5face yolov7face yolov8face