Comments (10)
The number of values printed does match the tensor's element count, but the output is always a real value followed by a 0, which is clearly not the original image after scaling and normalization.
from mnn.
Below is a stripped-down example. Judging from the code it should print the original image values, but the result clearly isn't that:
Matrix trans1;
trans1.setIdentity();
ImageProcess::Config iprscfg1;
iprscfg1.filterType = BILINEAR;
iprscfg1.sourceFormat = GRAY;
iprscfg1.destFormat = GRAY;
auto process1 = ImageProcess::create(iprscfg1);
process1->setMatrix(trans1);
vector<int> shape{1,1,1920,1080};
auto nchwTensor = Tensor::create(shape, halide_type_of<uint8_t>(), static_cast<void*>(pimg), Tensor::CAFFE);
auto code = process1->convert(pimg, width, height, width, nchwTensor);
assert(code==0);
nchwTensor->print();
from mnn.
Print nchwTensor, not detTensor. This involves the NC4HW4 layout; see the FAQ for details:
https://mnn-docs.readthedocs.io/en/latest/faq.html
from mnn.
Print nchwTensor, not detTensor. This involves the NC4HW4 layout; see the FAQ for details: https://mnn-docs.readthedocs.io/en/latest/faq.html
My stripped-down example already prints nchwTensor, yet the output is still the same. Could you point out where the problem is? Thanks!
from mnn.
Print nchwTensor, not detTensor. This involves the NC4HW4 layout; see the FAQ for details: https://mnn-docs.readthedocs.io/en/latest/faq.html
It works now, thanks!
The root cause is quite strange:
auto nchwTensor = Tensor::create(shape, halide_type_of<uint8_t>(), pimg, Tensor::CAFFE);
auto code = process1->convert(pimg, width, height, width, nchwTensor);
versus
auto nchwTensor = Tensor::create(shape, halide_type_of<uint8_t>(), nullptr, Tensor::CAFFE);
auto code = process1->convert(pimg, width, height, width, nchwTensor);
The first version is wrong and the second is correct; in other words, the tensor cannot be created directly on top of the image buffer.
from mnn.
@jxt1234
Honestly...
I had just seen it print the original data successfully, and one change later the results were inconsistent again, even though the program is only a few lines long and there is a demo program to use as a reference. To confirm the cause I also made several changes to the demo itself, including using Tensor instead of ImageProcess, changing uint8_t to float, and pulling the values out of the tensor one by one in a loop and converting them back to uint8_t; all of those worked. But my own program only worked that one time, and I can't get it working again.
from mnn.
vector<int> shape{1,1,height,width};
std::shared_ptr<Tensor> wrapTensor(Tensor::create(shape, halide_type_of<int8_t>(), nullptr, Tensor::CAFFE));
auto code = process1->convert(pimg, width, height, width, wrapTensor.get());
assert(code==0);
for (int i = 0; i < width * height; i++) {
    cout << static_cast<int>(pimg[i]) << ", ";
    cout << static_cast<int>(wrapTensor->host<uint8_t>()[i]) << ", ";
}
The original data and the values read back from the tensor turn out to be different. The processing setup is:
CV::Matrix trans1;
trans1.setIdentity();
// // Dst -> [0, 1]
// trans.postScale(1.0/size_w, 1.0/size_h);
// // Flip Y (FreeImage decodes images with the Y axis inverted)
// trans.postScale(1.0, -1.0, 0.0, 0.5);
// [0, 1] -> Src
// trans1.postScale(320, 320);
CV::ImageProcess::Config iprscfg1;
iprscfg1.filterType = CV::NEAREST;
// float mean[1] = {128.94f};
// float normals[1] = {0.227f};
// ::memcpy(iprscfg1.mean, mean, sizeof(mean));
// ::memcpy(iprscfg1.normal, normals, sizeof(normals));
iprscfg1.sourceFormat = CV::GRAY;
iprscfg1.destFormat = CV::GRAY;
iprscfg1.wrap = CV::ZERO;
process1 = CV::ImageProcess::create(iprscfg1);
process1->setMatrix(trans1);
from mnn.
A program that used to be correct: after merely adding scaling, the output becomes all zeros, with no warning whatsoever.
int main(int argc, const char* argv[]) {
    if (argc < 4) {
        printf("Usage: ./pictureRotate.out input.jpg angle output.jpg\n");
        return 0;
    }
    auto inputPatch = argv[1];
    auto angle = ::atof(argv[2]);
    auto destPath = argv[3];
    int width, height, channel;
    auto inputImage = stbi_load(inputPatch, &width, &height, &channel, 4);
    uint8_t p[width * height];
    for (int i = 0; i < width * height; i += 1) {
        // std::cout << (int)inputImage[i*4] << ",";
        p[i] = inputImage[i * 4];
    }
    std::cout << std::endl;
    MNN_PRINT("size: %d, %d\n", width, height);
    Matrix trans;
    trans.setScale(1.0 / (width - 1), 1.0 / (height - 1));
    trans.postRotate(-angle, 0.5, 0.5);
    trans.postScale((width / 4), (height / 4));
    ImageProcess::Config config;
    config.filterType = BILINEAR;
    config.sourceFormat = GRAY;
    config.destFormat = GRAY;
    config.wrap = ZERO;
    std::shared_ptr<ImageProcess> pretreat(ImageProcess::create(config), ImageProcess::destroy);
    pretreat->setMatrix(trans);
    {
        std::shared_ptr<Tensor> wrapTensor(ImageProcess::createImageTensor<float>(width / 4, height / 4, 1, nullptr), MNN::Tensor::destroy);
        // std::vector<int> shape{1, 1, height/4, width/4};
        // std::shared_ptr<Tensor> wrapTensor(Tensor::create(shape, halide_type_of<float>(), nullptr, Tensor::CAFFE));
        pretreat->convert((uint8_t*)p, width / 4, height / 4, width / 4, wrapTensor.get());
        uint8_t pp[width * height / 4 / 4];
        for (int i = 0; i < width * height / 4 / 4; ++i) {
            pp[i] = wrapTensor->host<float>()[i];
            // std::cout << p[i] << ", ";
            if (wrapTensor->host<float>()[i] != 0)
                std::cout << wrapTensor->host<float>()[i] << ", ";
        }
        stbi_write_jpg(argv[3], width / 4, height / 4, 1, pp, 100);
    }
    stbi_image_free(inputImage);
    return 0;
}
Using search-and-replace to replace every '/4' with '' makes everything work again.
from mnn.
std::shared_ptr<Tensor> wrapTensor(Tensor::create(shape, halide_type_of<int8_t>(), nullptr, Tensor::CAFFE));
A tensor like that isn't supported as ImageProcess output. Either use float, or use uint8_t with Tensor::TENSORFLOW.
from mnn.
I see, so that's it.
I wonder whether this MNN::CV is faster than a hand-written single-threaded CPU implementation.
from mnn.