mace-models's People

Contributors

dustless, edvardhua, lee-bin, liyinhgqw, llhe, lu229, lydoc, nolanliou, tonymou, yejw5

mace-models's Issues

Computation time of the fast-style-transfer model

Roughly how long does the fast-style-transfer model take on the GPU to process a 480*640*3 image?

I tried deploying this model with TensorFlow Lite: one 480*640*3 style transfer on the GPU takes about 6 s, because TensorFlow Lite does not support computing instance norm on the GPU, which makes it very slow. Does MACE have this problem?
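For reference, instance norm itself reduces to plain elementwise arithmetic (per-sample, per-channel mean and variance over the spatial axes), so it can be expressed without a dedicated op. A minimal TensorFlow sketch, with illustrative shapes:

import tensorflow as tf

def instance_norm(x, gamma, beta, eps=1e-5):
    # x: [N, H, W, C]; gamma, beta: [C]. Statistics are taken over H and W only.
    mean, var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
    return gamma * (x - mean) * tf.math.rsqrt(var + eps) + beta

x = tf.random.normal([1, 480, 640, 3])
out = instance_norm(x, gamma=tf.ones([3]), beta=tf.zeros([3]))
print(out.shape)  # (1, 480, 640, 3)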

Support object detection models

Hi MACE team, about object detection models:
(1) ssd-mobilenetv1 only supports a Caffe model; do you currently support a TensorFlow model?
(2) Do you support other, more advanced object detection models, such as SSD with a more complicated backbone (e.g. ResNet-50), or even Faster R-CNN models?

Only input and output placeholders in the .yml?

While training my own model I added a keep_prob placeholder, and generating the .pb works fine, but running:
python tools/python/convert.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
produces:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'is_training' with dtype bool [[Node: is_training = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

How can I solve this?
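One common workaround (a minimal, self-contained sketch, not the original poster's model: the stand-in network, paths and node names are illustrative) is to rebuild the inference graph with the training flag as a constant instead of a placeholder before freezing, so the exported .pb no longer contains the boolean placeholder and convert.py has nothing extra to feed. If the flag only gates dropout or batch-norm behaviour, freezing it as False does not change inference results.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()

inputs = tf.compat.v1.placeholder(tf.float32, [1, 28, 28, 1], name="input")
# Constant instead of a placeholder, so nothing has to be fed at conversion time.
is_training = tf.constant(False, dtype=tf.bool, name="is_training")

# Tiny stand-in network; replace with your own model-building code.
x = tf.compat.v1.layers.conv2d(inputs, 8, 3, activation=tf.nn.relu)
x = tf.compat.v1.layers.dropout(x, rate=0.5, training=is_training)
x = tf.compat.v1.layers.flatten(x)
logits = tf.compat.v1.layers.dense(x, 10, name="output")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # In practice, restore your trained weights here with tf.compat.v1.train.Saver.
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["output/BiasAdd"])
    with tf.io.gfile.GFile("model_frozen.pb", "wb") as f:
        f.write(frozen.SerializeToString())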

DeepLab Test

Hi, nice work!

I want to run my own DeepLabv3+ model on the MACE framework, so I would like to know what model changes were made for MACE.

I saw your description of the deeplabv3plus model:

  • input_tensors: sub_7
  • output_tensors: ResizeBilinear_2
  • input_shapes: 1,513,513,3
  • output_shapes: 1,65,65,21

I understand the original DeepLab v3+ is as below:

  • input_tensors: ImageTensor
  • output_tensors: SemanticPredictions
  • input_shapes: 1,513,513,3
  • output_shapes: 1,513,513,21

What did you change for the MACE framework? Can you tell me about that, or share your training code?

If you used "transform_graph", please share the command.
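One plausible way to obtain such a subgraph (a hedged sketch, not necessarily what the model provider actually did) is TensorFlow's Graph Transform Tool driven from Python: strip the uint8 preprocessing in front of sub_7 and the ArgMax/resize postprocessing behind ResizeBilinear_2, so the remaining graph contains only plain float ops. File names and the transform list below are illustrative.

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph  # TF 1.x API

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("deeplabv3plus_frozen.pb", "rb") as f:  # path illustrative
    graph_def.ParseFromString(f.read())

transformed = TransformGraph(
    graph_def,
    ["sub_7"],              # new graph input
    ["ResizeBilinear_2"],   # new graph output
    ["strip_unused_nodes", "fold_constants", "fold_batch_norms"])

with tf.io.gfile.GFile("deeplab_v3_plus_mace.pb", "wb") as f:
    f.write(transformed.SerializeToString())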

run on PC

Hello. I wonder whether the converted models can be run on a PC. If so, are there any examples for deployment?

Cropping the three outputs of the YOLOv3 model

Hello, I would like to ask how the three outputs of YOLOv3 were cropped out. I want to use the same method in TensorFlow to extract the main branch of mobilenet-ssd; do you have an implemented model for that?
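A hedged sketch of one way to crop a frozen TensorFlow graph down to chosen output nodes; the node names are the ones listed in this repo's yolo-v3.yml, the file paths are illustrative, and the same approach applies to an SSD graph:

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("yolo_v3.pb", "rb") as f:  # path illustrative
    graph_def.ParseFromString(f.read())

# Keep only the nodes needed to compute the three detection heads.
dest_nodes = ["conv2d_59/BiasAdd", "conv2d_67/BiasAdd", "conv2d_75/BiasAdd"]
sub_graph = tf.compat.v1.graph_util.extract_sub_graph(graph_def, dest_nodes)

with tf.io.gfile.GFile("yolo_v3_cropped.pb", "wb") as f:
    f.write(sub_graph.SerializeToString())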

Support for tf.keras models (models from micro-models)

I was exploring some of the models given in this repository, but I can't seem to convert the model in the micro-models/keras folder. As stated in the README, I am using TensorFlow >= 2.0.0.
I tried various approaches and trained a small model in Colab, but there seem to be issues either way.

When I directly use the mnist.yml from the folder, I get this error:

  File "tools/python/convert.py", line 282, in <module>
    convert(conf, flags.output, flags.enable_micro)
  File "tools/python/convert.py", line 89, in convert
    net_def_with_Data = convert_net(net_name, net_conf, enable_micro)
  File "tools/python/convert.py", line 226, in convert_net
    option, conf["model_file_path"])
  File "/home/hitech/aware/mace/tools/python/transform/keras_converter.py", line 205, in __init__
    compile=False)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 184, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 178, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 109, in deserialize
    printable_module_name='layer')
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 373, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 398, in from_config
    custom_objects=custom_objects)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 109, in deserialize
    printable_module_name='layer')
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 375, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py", line 206, in from_config
    layer = tf.keras.layers.deserialize(config.pop('layer'))
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 109, in deserialize
    printable_module_name='layer')
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 375, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 655, in from_config
    return cls(**config)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py", line 599, in __init__
    **kwargs)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py", line 125, in __init__
    **kwargs)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 294, in __init__
    generic_utils.validate_kwargs(kwargs, allowed_kwargs)
  File "/home/hitech/anaconda3/envs/mace/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 792, in validate_kwargs
    raise TypeError(error_message, kwarg)
TypeError: ('Keyword argument not understood:', 'groups')

The Colab notebook has the same code as the code given in the micro-models/keras folder: https://colab.research.google.com/drive/1_GhO4805M_aSk1kIOVDFlVA-b1nGusBi?usp=sharing

When I use the model from the above link, I get the following error:

File "tools/python/convert.py", line 282, in <module>
    convert(conf, flags.output, flags.enable_micro)
File "tools/python/convert.py", line 89, in convert
    net_def_with_Data = convert_net(net_name, net_conf, enable_micro)
File "tools/python/convert.py", line 238, in convert_net
    output_graph_def, quantize_activation_info = mace_transformer.run()
File "/home/hitech/aware/mace/tools/python/transform/transformer.py", line 166, in run
    changed = transformer()
File "/home/hitech/aware/mace/tools/python/transform/transformer.py", line 1623, in sort_by_execution
    "output_tensor %s not existed in model" % output_node)
File "/home/hitech/aware/mace/tools/python/utils/util.py", line 76, in mace_check
    for line in traceback.format_stack():
ERROR: /home/hitech/aware/mace/tools/python/transform/transformer.py:1623: output_tensor output not existed in model

I verified my model using Netron, and everything seems fine.
My yml file:

library_name: mnist_madhu
target_abis: [arm64-v8a]
model_graph_format: file
model_data_format: file
models:
  mnist:
    platform: keras
    model_file_path: /home/hitech/aware/mace/examples/android-custom-demo-MACE/models/keras/mnist_madhu/mnist_layers_named.h5
    model_sha256_checksum: d64999fc571fe5725fb87427dfbcf8e757010883039b06728f5929e7088f447d
    subgraphs:
      - input_tensors:
          - input
        input_shapes:
          - 1,28,28
        input_ranges:
          - 0,1
        output_tensors:
          - output
        output_shapes:
          - 1,10
    runtime: cpu+gpu
    limit_opencl_kernel_time: 0
    nnlib_graph_mode: 0
    obfuscate: 0
    winograd: 0

Can anyone confirm whether Keras models are converting properly on their systems? If yes, please share your environment, and please let me know what I am doing wrong.

Any suggestions are appreciated.
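For what it's worth, the "Keyword argument not understood: 'groups'" error is typical of a TensorFlow version mismatch: Keras Conv layers only gained a groups argument in TF 2.3, so an .h5 saved with TF >= 2.3 cannot be deserialized by an older TF. The second error means the converter cannot find a tensor named output in the loaded model, so it is worth printing the model's real input/output names and making the yml match them. A small diagnostic sketch (file path illustrative; a quantization-aware-trained model may additionally need to be loaded inside tfmot.quantization.keras.quantize_scope()):

import tensorflow as tf

model = tf.keras.models.load_model("mnist_layers_named.h5", compile=False)  # path illustrative
model.summary()

# These are the names that the yml's input_tensors / output_tensors must match.
print("inputs: ", [t.name for t in model.inputs])
print("outputs:", [t.name for t in model.outputs])
print("layers: ", [layer.name for layer in model.layers])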

sre_16, Exception: Unexpected fc input ndim.

Hi guys, I got an error when I tried to convert the sre16 ONNX model to MACE format:
Exception: Unexpected fc input ndim.

The command I used is:
python tools/converter.py convert --config=../mace-models/kaldi-models/nnet3/sre_16.yml
The configuration file sre_16.yml is exactly the same as the one provided by this repo.

Thank you for your time!

The full traceback:

Transform model to one that can better run on device
onnx model IR version:  5
constains ops domain:  ai.kaldi.dnn version: 7
Traceback (most recent call last):
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter.py", line 414, in <module>
    main(unused_args=[sys.argv[0]] + unparsed)
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter.py", line 227, in main
    output_graph_def = converter.run()
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter_tool/onnx_converter.py", line 451, in run
    self.convert_ops(graph_def)
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter_tool/onnx_converter.py", line 527, in convert_ops
    self._op_converters[node.op_type](node)
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter_tool/onnx_converter.py", line 1065, in convert_gemm
    "Unexpected fc input ndim.")
  File "/datasdc_3421/cgh/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/convert_util.py", line 20, in mace_check
    raise Exception(msg)
Exception: Unexpected fc input ndim.
Traceback (most recent call last):
  File "tools/converter.py", line 1346, in <module>
    flags.func(flags)
  File "tools/converter.py", line 853, in convert_func
    convert_model(configs, flags.cl_mem_type)
  File "tools/converter.py", line 782, in convert_model
    ",".join(model_config.get(YAMLKeyword.graph_optimize_options, [])))
  File "/datasdc_3421/cgh/mace/tools/sh_commands.py", line 549, in gen_model_code
    _fg=True)
  File "/root/.pyenv/versions/3.6.3/lib/python3.6/site-packages/sh.py", line 1413, in __call__
    raise exc
sh.ErrorReturnCode_1: 

  RAN: /root/.pyenv/versions/3.6.3/bin/python bazel-bin/mace/python/tools/converter -u --platform=onnx --model_file=/datasdc_3421/cgh/kaldi-onnx/sre16_optimized.onnx --weight_file= --model_checksum=d5f04ec3c7b0fcfdebb491154b3edc2d28a3c7f3c7895816efec96d5a08743ad --weight_checksum= --input_node=input --input_data_types=float32 --input_data_formats=NHWC --output_node=tdnn6.affine --output_data_types=float32 --output_data_formats=NHWC --check_node= --runtime=cpu --template=mace/python/tools --model_tag=sre16 --input_shape=1, 100, 23 --input_range= --output_shape=1, 100, 512 --check_shape= --dsp_mode=0 --embed_model_data=False --winograd=0 --quantize=0 --quantize_range_file= --change_concat_ranges=0 --obfuscate=0 --output_dir=mace/codegen/models/sre16 --model_graph_format=file --data_type=fp32_fp32 --graph_optimize_options= --cl_mem_type=image
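A hedged diagnostic sketch for narrowing this down: the exception comes from the converter's convert_gemm check on the fully-connected input rank, so it can help to load the exported ONNX file and print the (inferred) shapes feeding each Gemm node. Shape inference may leave tensors produced by the ai.kaldi.dnn ops unresolved, but initializer shapes are always available. The path is illustrative:

import onnx
from onnx import shape_inference

model = onnx.load("sre16_optimized.onnx")  # path illustrative
inferred = shape_inference.infer_shapes(model)

# Collect every shape we can find: graph inputs/outputs, inferred value_info,
# and the weight/bias initializers.
shapes = {}
for vi in list(inferred.graph.input) + list(inferred.graph.output) + list(inferred.graph.value_info):
    dims = [d.dim_value if d.HasField("dim_value") else "?"
            for d in vi.type.tensor_type.shape.dim]
    shapes[vi.name] = dims
for init in inferred.graph.initializer:
    shapes[init.name] = list(init.dims)

for node in inferred.graph.node:
    if node.op_type == "Gemm":
        print(node.name or node.output[0],
              [(name, shapes.get(name)) for name in node.input])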

changing the input size of yolov3

Hi, I want to change the input size of yolov3 to 320x224 (h, w). After modifying yolo-v3.yml to:

        input_shapes:
          # - 1,416,416,3
          - 1,320,224,3 
        output_shapes:
          # - 1,13,13,255
          # - 1,26,26,255
          # - 1,52,52,255
          - 1,10,7,255
          - 1,20,14,255
          - 1,40,28,255

converting the model raises these errors:

python tools/converter.py convert --config=../mace-models/yolo-v3/yolo-v3.yml

....

Traceback (most recent call last):
  File "/miniconda3/envs/mace/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 426, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 26 and 20. Shapes are [1,26,26] and [1,20,14]. for 'concatenate_1/concat' (op: 'ConcatV2') with input shapes: [1,26,26,256], [1,20,14,512], [] and with computed input tensors: input[2] = <3>.

How can I solve these errors? Thank you.
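A hedged observation: the failure happens inside TensorFlow's own shape inference while importing the frozen graph, and the [1,26,26,256] side of the concat suggests the 416x416 feature-map sizes are still folded into the .pb as constants (for example the size input of the upsample ops), so changing input_shapes in the yml alone cannot resize the network; the .pb would most likely need to be re-exported at 320x224. A small diagnostic sketch that scans the graph for such hard-coded size constants (path illustrative):

import tensorflow as tf
from tensorflow.python.framework import tensor_util

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("yolo_v3.pb", "rb") as f:  # path illustrative
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op in ("ResizeNearestNeighbor", "ResizeBilinear"):
        print("resize op:", node.name, "size comes from:", node.input[1])
    elif node.op == "Const":
        try:
            value = tensor_util.MakeNdarray(node.attr["value"].tensor)
        except Exception:
            continue
        if value.ndim == 1 and value.size == 2:  # candidate (H, W) size constants
            print("2-element const:", node.name, value)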

detection_output.cc is missing

I wonder where I can find detection_output.cc, or any other example showing how to use the converted mobilenet-ssd model?

Problems when trying to convert inception-v1

When I try to convert inception-v1 from the TensorFlow model zoo (https://github.com/tensorflow/models/tree/master/research/slim), errors occur as follows:
Traceback (most recent call last):
  File "/home/xcw/software/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter.py", line 319, in <module>
    main(unused_args=[sys.argv[0]] + unparsed)
  File "/home/xcw/software/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/converter.py", line 194, in main
    memory_optimizer.optimize_gpu_memory(output_graph_def)
  File "/home/xcw/software/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/memory_optimizer.py", line 300, in optimize_gpu_memory
    mem_optimizer.optimize()
  File "/home/xcw/software/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/memory_optimizer.py", line 148, in optimize
    op.output_shape[i].dims)
  File "/home/xcw/software/mace/bazel-bin/mace/python/tools/converter.runfiles/mace/mace/python/tools/memory_optimizer.py", line 243, in get_op_mem_block
    raise Exception('output shape dim size is not 2 or 4.')
Exception: output shape dim size is not 2 or 4.
Traceback (most recent call last):
  File "tools/converter.py", line 1730, in <module>
    flags.func(flags)
  File "tools/converter.py", line 820, in convert_func
    convert_model(configs)
  File "tools/converter.py", line 745, in convert_model
    ",".join(model_config.get(YAMLKeyword.graph_optimize_options, [])))
  File "/home/xcw/software/mace/tools/sh_commands.py", line 532, in gen_model_code
    _fg=True)
  File "/home/xcw/anaconda2/lib/python2.7/site-packages/sh.py", line 1413, in __call__
    raise exc
sh.ErrorReturnCode_1:

  RAN: /home/xcw/anaconda2/bin/python bazel-bin/mace/python/tools/converter -u --platform=tensorflow --model_file=/home/xcw/datasets/tf_models/google_inception/frozen_inception_v1.pb --weight_file= --model_checksum=d75481a8bcaedafdf217bb2bf9ee5a3da750b4b0fd7a76b571fdf818e0cd4709 --weight_checksum= --input_node=input --output_node=InceptionV1/Logits/Conv2d_0c_1x1/convolution --runtime=gpu --template=mace/python/tools --model_tag=mobilenet_v1 --input_shape=1,224,224,3 --dsp_mode=0 --embed_model_data=False --winograd=0 --quantize=0 --quantize_range_file= --obfuscate=0 --output_dir=mace/codegen/models/mobilenet_v1 --model_graph_format=file --data_type=fp16_fp32 --graph_optimize_options=

  STDOUT:

  STDERR:

By inserting a print at memory_optimizer.py:242, I found that the problem op is Eltwise and its shape is [64].

Does anybody know how to solve this problem?
Thanks!
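A hedged diagnostic sketch: the memory optimizer raises exactly when an output shape is not 2-D or 4-D, so one thing to try is choosing an output node whose tensor actually is 2-D or 4-D rather than the raw convolution. The snippet below lists candidate logits/prediction nodes and their static shapes so a suitable one can be put into the deployment config (path illustrative):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inception_v1.pb", "rb") as f:  # path illustrative
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
    for op in graph.get_operations():
        if "Logits" in op.name or "Predictions" in op.name:
            for out in op.outputs:
                print(op.name, op.type, out.shape)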

deploy Yolov3 with mace

Hi MACE team, I have a yolov3 model trained on my own dataset (9 classes). I understand I need to first convert the YOLO model to TF format, and then convert the TF model to MACE model format. My questions are:
(1) Do you prefer a particular darknet -> TF conversion tool? I tried multiple tools, but they generate significantly different .pb model structures. Kindly suggest the [official] tool you use to convert a darknet YOLO model to TF model format.
(2) To deploy yolov3 with MACE, besides the yml file, is there anything else I need to configure? Could you provide a simple case/demo where yolov3 is deployed?

configure output for yolov3 model

Hi MACE team, I am trying to deploy a yolov3 model with MACE. As indicated in the yml file, the output tensors are configured as follows:

output_tensors:
- conv2d_59/BiasAdd:0
- conv2d_67/BiasAdd:0
- conv2d_75/BiasAdd:0

However, in the Android tutorial "mace/examples/android/macelibrary/src/main/cpp/image_classify.cc":

struct ModelInfo {
  std::string input_name;
  std::string output_name;
  std::vector<int64_t> input_shape;
  std::vector<int64_t> output_shape;
};

The output_name is defined as a single string, so how should I set it? Use one of the three layer names, or all of them? I tried conv2d_59/BiasAdd:0, conv2d_67/BiasAdd:0 and conv2d_75/BiasAdd:0 individually, but the model always crashes at run time:

A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xc8680000 in tid 16879 (jniThread), pid 16852 (iaomi.mace.demo)

Please kindly advise on this issue.

Build source code on Mac, build error

ERROR: Config value android is not defined in any .rc file
Where is the value android defined?
I used export ANDROID_NDK_HOME=/Users/chenjiao04/Documents/android-ndk-r15c,
but it doesn't work.

FPS of yolo v3

1. First of all, thanks to the MACE team for developing such a great framework.
2. Has anyone tested the yolo-v3 model from the model zoo on Android? If so, roughly what FPS did it reach? I want to convert a version of yolo-v3 for a Snapdragon 845 board and would like to know the FPS under the MACE framework.
3. It looks like the ops supported by MACE do not include LeakyRelu. Could your team provide a custom LeakyRelu?

Any advice would be appreciated.
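On point 3, a hedged workaround sketch: if a runtime has no native LeakyRelu op, the activation can be written with ordinary elementwise ops in the model definition before freezing, since LeakyReLU(x) = max(alpha * x, x) for 0 < alpha < 1; that lowers to just a Mul and a Maximum:

import tensorflow as tf

def leaky_relu_as_eltwise(x, alpha=0.1):
    # Emits only Mul and Maximum nodes into the graph instead of a LeakyRelu op.
    return tf.maximum(alpha * x, x)

x = tf.constant([[-1.0, 0.5, 2.0]])
print(leaky_relu_as_eltwise(x))  # [[-0.1  0.5  2. ]]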
