Comments (7)
The freeze_all function is for transfer learning during training.
I haven't seen any mention of frozen graphs in TensorFlow 2.0, so they are probably deprecated now.
The official guide recommends the SavedModel format instead:
https://www.tensorflow.org/alpha/guide/saved_model
which is implemented in export_tfserving.py.
A SavedModel is in .pb format, but it comes with extra files for the variables and parameters.
from yolov3-tf2.
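As a rough illustration of the SavedModel layout mentioned above, here is a minimal sketch of exporting and reloading with the TF 2.x API. The `Toy` module is entirely made up for the example; export_tfserving.py does the equivalent for the real YOLOv3 network.

```python
import tempfile
import tensorflow as tf


class Toy(tf.Module):
    """Tiny stand-in for a real model, just to show the export flow."""

    def __init__(self):
        self.w = tf.Variable(tf.ones([4, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)


module = Toy()
export_dir = tempfile.mkdtemp()

# Writes saved_model.pb plus a variables/ folder -- the ".pb with
# extra files" layout described in the comment above.
tf.saved_model.save(module, export_dir)

# The exported directory can be reloaded and called directly.
loaded = tf.saved_model.load(export_dir)
out = loaded(tf.ones([1, 4]))
```

The same directory is what tensorflow/serving consumes, one numeric version subdirectory per model version.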
I see this function in the utils.py file:

def freeze_all(model, frozen=True):
    model.trainable = not frozen
    if isinstance(model, tf.keras.Model):
        for l in model.layers:
            freeze_all(l, frozen)

Is this code for a frozen model? And how do I freeze a model?
from yolov3-tf2.
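For what it's worth, `freeze_all` simply walks the model recursively and clears the `trainable` flag on it and every nested sub-model, which is how a backbone is frozen for transfer learning. A runnable sketch on a toy nested model (the layer sizes are made up for the example):

```python
import tensorflow as tf


def freeze_all(model, frozen=True):
    # Mark this model/layer as (non-)trainable, then recurse into
    # nested sub-models so their layers are frozen too.
    model.trainable = not frozen
    if isinstance(model, tf.keras.Model):
        for l in model.layers:
            freeze_all(l, frozen)


# "inner" plays the role of a backbone nested inside a larger model.
inner = tf.keras.Sequential([tf.keras.layers.Dense(4)])
outer = tf.keras.Sequential([inner, tf.keras.layers.Dense(2)])

freeze_all(inner)  # freeze only the nested "backbone"

print(inner.trainable)              # False
print(outer.layers[-1].trainable)   # True: the head still trains
```

This freezes weights during training; it is unrelated to the old TF1 "frozen graph" .pb export, which is the source of the naming confusion in this thread.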
Hello @zzh8829, thanks for your help. I tried to deploy the serving model using the TensorRT Inference Server, but I got this error:
2019-06-04 02:58:55.785173: W external/org_tensorflow/tensorflow/core/kernels/partitioned_function_ops.cc:197] Grappler optimization failed. Error: Op type not registered 'CombinedNonMaxSuppression' in binary running on 1984ec4fe5aa. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-06-04 02:58:55.806209: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at partitioned_function_ops.cc:118 : Not found: Op type not registered 'CombinedNonMaxSuppression' in binary running on 1984ec4fe5aa. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Do you know why?
from yolov3-tf2.
That might be related to an outdated tensorflow/serving;
try a newer version and see if that works:
docker pull tensorflow/serving:nightly-gpu
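For context, a typical way to serve the exported SavedModel with that image looks like the following. The path and model name here are placeholders I made up, not values from this thread:

```shell
# Serve the SavedModel exported by export_tfserving.py.
# /path/to/serving is a placeholder; it must contain a numeric
# version subdirectory (e.g. /path/to/serving/1/saved_model.pb).
docker run --rm -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/serving,target=/models/yolov3 \
  -e MODEL_NAME=yolov3 \
  tensorflow/serving:nightly-gpu
```

Port 8500 is the gRPC endpoint the Python client below talks to; 8501 is the REST endpoint.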
from yolov3-tf2.
Thanks @andydion, but I use the TensorRT Inference Server instead of TF Serving,
so I don't know how to fix this.
from yolov3-tf2.
Hello @andydion, I wrote a client, but it doesn't work,
so I need some help.
import time
from absl import app, flags, logging
from absl.flags import FLAGS
import cv2
import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2


def main(_argv):
    host = '0.0.0.0'
    port = '8671'
    tf_serving = '0.0.0.0:9000'
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'reader'
    request.model_spec.signature_name = 'serving_default'
    img = cv2.imread("1.jpg")
    img = cv2.resize(img, (416, 416))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.astype('float32')
    tensor = tf.contrib.util.make_tensor_proto(img, shape=[1] + list(img.shape))
    request.inputs['input_1'].CopyFrom(tensor)
    resp = stub.Predict(request, 30.0)
    print("resp: ", resp)


if __name__ == '__main__':
    try:
        app.run(main)
    except SystemExit:
        pass
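One note on the snippet above: `tf.contrib` was removed in TF 2.x, so `tf.contrib.util.make_tensor_proto` fails there (the replacement is `tf.make_tensor_proto`). The preprocessing itself can be checked without a server; here is a NumPy-only sketch of the same resize-and-batch logic, with a synthetic image standing in for `1.jpg` (if I recall correctly, the repo's own pipeline also divides by 255, which the client above omits):

```python
import numpy as np

# Synthetic stand-in for cv2.imread("1.jpg"): an HxWx3 uint8 BGR image.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Nearest-neighbour resize to 416x416 in pure NumPy
# (the client uses cv2.resize for this step).
rows = np.linspace(0, img.shape[0] - 1, 416).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 416).astype(int)
resized = img[rows][:, cols]

# BGR -> RGB, float32, and a leading batch dimension, as in the client.
batch = resized[..., ::-1].astype('float32')[np.newaxis]
print(batch.shape, batch.dtype)  # (1, 416, 416, 3) float32
```

The resulting `(1, 416, 416, 3)` float32 array is what gets wrapped into the `PredictRequest` tensor under the `input_1` key.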
from yolov3-tf2.
Hi, I think this is an issue with TensorRT compatibility; I recommend using TensorFlow Serving instead.
from yolov3-tf2.
Related Issues (20)
- Converting a custom yolov3 model
- Increasing number of channels
- Train the model from random weights, 100 epochs DO NOT work HOT 2
- [Feature Request] Implementing YOLO v4 / v5 / .. ?
- Get negative value by calling model(dataset)
- anchors HOT 1
- model.fit() and eager_tf generates different training results HOT 2
- RTX 2080ti batch size 2..
- Invalid argument: Received a label value of 67 which is outside the valid range of [0, 1) HOT 1
- Error in colab notebook
- box caculation
- yolov3 tiny - evaluation on pretrained weights gives lower accuracy than expected HOT 1
- No detection when using GPU, but CPU works
- "Windows fatal exception: access violation" when run export_tflite.py
- Yolo loss binary_crossentropy version
- How to create tfrecord for coco dataset. HOT 1
- Windows - ERROR: No matching distribution found for tensorflow-gpu==2.1.0rc1 HOT 1
- Recognition rate
- oriented bounding boxes
- Detection box returns nan when running detect.py