lucassheng / avatar-net
Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration
Home Page: https://lucassheng.github.io/avatar-net/
I tried to run this program, but it failed:
Finish loading the model [AvatarNet] configuration
Traceback (most recent call last):
File "evaluate_style_transfer.py", line 163, in
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "evaluate_style_transfer.py", line 121, in main
checkpoint_dir, slim.get_model_variables(), ignore_missing_vars=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/framework/python/ops/variables.py", line 571, in assign_from_checkpoint_fn
reader = pywrap_tensorflow.NewCheckpointReader(model_path)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 110, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern), status)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got None
Could you give me some suggestions?
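The `TypeError: Expected binary or unicode string, got None` above typically means `tf.train.latest_checkpoint()` returned `None` because the checkpoint directory is missing its small `checkpoint` index file. A framework-free sketch of the kind of guard that would surface the real problem earlier (the parsing here is a simplified assumption, not TF's actual loader):

```python
import os

def resolve_checkpoint_path(checkpoint_dir):
    """Roughly mimic tf.train.latest_checkpoint(): read the small
    `checkpoint` index file and return the model path it references,
    or None when the index file is missing."""
    index_file = os.path.join(checkpoint_dir, "checkpoint")
    if not os.path.isfile(index_file):
        return None
    with open(index_file) as f:
        for line in f:
            if line.startswith("model_checkpoint_path:"):
                # The value is a quoted string, e.g.
                #   model_checkpoint_path: "model.ckpt-120000"
                name = line.split(":", 1)[1].strip().strip('"')
                return os.path.join(checkpoint_dir, name)
    return None
```

Checking the result for `None` before calling `assign_from_checkpoint_fn` turns the cryptic `TypeError` into an actionable "checkpoint index missing" error.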
I downloaded the trained Avatar-Net model and then ran evaluate_style_transfer.sh, but it failed.
My TensorFlow version is 1.8.
Error log:
/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Finish loading the model [AvatarNet] configuration
Traceback (most recent call last):
File "evaluate_style_transfer.py", line 163, in
tf.app.run()
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "evaluate_style_transfer.py", line 112, in main
inter_weight=FLAGS.inter_weight)
File "/ai/zhyx/docker/avatar-net-master/models/avatar_net.py", line 94, in transfer_styles
style, self.network_name)
File "/ai/zhyx/docker/avatar-net-master/models/losses.py", line 85, in extract_image_features
inputs, spatial_squeeze=False, is_training=False, reuse=reuse)
File "/ai/zhyx/docker/avatar-net-master/models/vgg.py", line 226, in vgg_19
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 2060, in repeat
outputs = layer(outputs, *args, **kwargs)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1027, in convolution
outputs = layer.apply(inputs)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 503, in apply
return self.call(inputs, *args, **kwargs)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 443, in call
self.build(input_shapes[0])
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 137, in build
dtype=self.dtype)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 383, in add_variable
trainable=trainable and self.trainable)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 360, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1561, in layer_variable_getter
return _model_variable_getter(getter, *args, **kwargs)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1553, in _model_variable_getter
custom_getter=getter, use_resource=use_resource)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 261, in model_variable
use_resource=use_resource)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 216, in variable
use_resource=use_resource)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
use_resource=use_resource)
File "/root/anaconda3/envs/tf17_py36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 682, in _get_single_variable
"VarScope?" % name)
ValueError: Variable vgg_19/conv1/conv1_1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
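This `ValueError` follows from TensorFlow's variable-scope reuse rules: `tf.get_variable()` with `reuse=True` only looks up existing variables and raises when the name is absent, while `reuse=tf.AUTO_REUSE` creates the variable on first use and reuses it afterwards. A toy, framework-free model of those semantics (the `AUTO_REUSE` sentinel and class below are illustrative, not TF's real implementation):

```python
AUTO_REUSE = object()  # sentinel standing in for tf.AUTO_REUSE

class VariableStore:
    """Toy model of tf.get_variable()'s reuse behaviour."""
    def __init__(self):
        self._vars = {}

    def get_variable(self, name, reuse=None):
        if name in self._vars:
            # Existing variable: only reachable when reuse is allowed.
            if reuse in (True, AUTO_REUSE):
                return self._vars[name]
            raise ValueError("Variable %s already exists" % name)
        # Name does not exist yet: reuse=True has nothing to reuse.
        if reuse is True:
            raise ValueError(
                "Variable %s does not exist, or was not created with "
                "tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE "
                "in VarScope?" % name)
        var = object()  # stand-in for a real tf.Variable
        self._vars[name] = var
        return var
```

In the traceback above, the VGG network is entered with `reuse=True` before its variables were ever created; a fix reported for errors of this shape is to pass `reuse=tf.AUTO_REUSE` (or make one `reuse=False` call first).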
I have some questions about Avatar-Net.
Hi, I was looking into the evaluation script you provided. The README mentions that in AvatarNet.transfer_styles(self, inputs, styles, inter_weight, intra_weights), the styles argument can take a list of style images. However, it is instantiated in the model as a placeholder in https://github.com/LucasSheng/avatar-net/blob/master/evaluate_style_transfer.py#L95.
Since the code is written to do style transfer with only one style image, that case works. However, when I pass multiple images it fails with many errors, even though the code takes care of listifying the style images.
Is there any usable code for interpolation/mixing multiple styles?
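For interpolating several styles, a plausible approach (an assumption based on the intra_weights argument, not code taken from this repo) is to blend the per-style feature maps with normalized weights before decoding:

```python
import numpy as np

def blend_style_features(style_features, intra_weights):
    """Weighted blend of per-style feature maps.

    style_features: list of arrays, all with the same shape (H, W, C).
    intra_weights:  one non-negative weight per style; normalized to sum to 1.
    """
    w = np.asarray(intra_weights, dtype=np.float64)
    w = w / w.sum()
    blended = np.zeros_like(style_features[0], dtype=np.float64)
    for weight, feat in zip(w, style_features):
        blended += weight * feat
    return blended
```

Passing weights like [1.0, 0.0] would reproduce the single-style result for the first style, while intermediate weights interpolate between the styles.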
For speech audio signals, voice conversion is becoming more and more popular. I wonder whether this zero-shot style transfer could be applied to voice conversion, for example converting a source speaker's voice (sv) to a target speaker's voice (tv): extract the style (prosody, stress, accent, and so on) from sv and the content (timbre and characters) from tv, then mix the style and content.
I am really looking forward to your reply, thank you.
I am trying to train this network, but it seems I need a file called 'dataset_meta_data.txt'.
What is the purpose of this file, and how can I obtain it?
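The repo does not ship dataset_meta_data.txt; judging by the name, it is most likely a listing of the training images. The exact format below is a guess, not taken from the project, so check it against how the training script actually parses the file. A sketch that writes one image path per line:

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def write_dataset_meta(image_dir, out_path):
    """Write one image path per line (assumed format; verify against
    the training script's parser before relying on it)."""
    paths = []
    for root, _dirs, files in os.walk(image_dir):
        for name in sorted(files):
            if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
                paths.append(os.path.join(root, name))
    with open(out_path, "w") as f:
        f.write("\n".join(paths))
    return len(paths)
```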
Hi ,
I was wondering how you measured the total time for style transfer. I tried running it on a 512×512 image and got an execution time of 2.1 s, instead of the 0.28 s reported in the paper.
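One common cause of such a gap (an assumption, since the benchmark code isn't shown): the first session run includes graph construction, memory allocation, and cuDNN autotuning, so a fair measurement discards warm-up runs and averages subsequent ones. A framework-free sketch of that protocol:

```python
import time

def benchmark(fn, warmup=3, iters=10):
    """Time `fn` after discarding warm-up calls; returns mean seconds per call."""
    for _ in range(warmup):
        fn()  # one-time setup costs land here, outside the measurement
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```

For GPU code you also need to synchronize the device before stopping the clock (e.g. fetch the output tensor back to the host); otherwise you only measure kernel launch time.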
Thanks for sharing this. I wanted to try running it on local GPU on Windows. Was able to get it to work with several tweaks. Posting in case anyone else wants to try.
windows fork: https://github.com/noido/avatar-net
edit details: https://github.com/noido/avatar-net/blob/master/readme_windows_tweaks.txt
The most notable obstacle was that the pretrained model download (Google Drive) linked in the repo was missing a checkpoint file to specify model_checkpoint_path. That caused a tensorflow function to return None instead of the correct model path, which caused a cascade of wonderful error messages down the line.
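If you hit the same problem, the missing index is a small text file named `checkpoint` sitting next to the weight files; a sketch that recreates it (the default name model.ckpt-120000 matches the Google Drive download mentioned elsewhere in this thread; adjust it if yours differs):

```python
import os

def write_checkpoint_index(model_dir, ckpt_name="model.ckpt-120000"):
    """Recreate the `checkpoint` index file that tf.train.latest_checkpoint()
    reads to locate the actual weight files."""
    path = os.path.join(model_dir, "checkpoint")
    with open(path, "w") as f:
        f.write('model_checkpoint_path: "%s"\n' % ckpt_name)
        f.write('all_model_checkpoint_paths: "%s"\n' % ckpt_name)
    return path
```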
I cannot run your implementation. I suspect the problem is the weights. Can you save them in Saver V1 format?
I implemented your algorithm in C# for Windows users. The code is located here: https://github.com/ColorfulSoft/Demos/tree/master/Style%20Transfer/2018.%20AvatarNet
Hi guys,
Just wondering where I should put the checkpoint files of your model. Also, where should other TF-slim models go?
Cheers
Hmm, nice work, really.
But it's the first time I'm not sure about the real inputs.
Am I right that the real image size that gets stylized in the network (the style part) is about 512, and that the inputs are just resized by bicubic layers before and after? Or am I wrong?
This is a quick question; I will test the model at 512 and 4k image sizes, and maybe that will show the answer.
Second question.
I froze the graph, and when I try to run inference on the GPU my kernel dies, though CPU inference works.
Installing an old TF is not a good option, because TF needs matching CUDA DLLs and I have the newest CUDA and NVIDIA drivers.
(Inference is in TensorFlow 2.x with an interactive session.)
I tried to run evaluate_style_transfer.py but it failed.
Error message:
style_image_features = losses.extract_image_features(style, self.network_name) in avatar_net.py
ValueError: Variable vgg_19/conv1/conv1_1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
We've got an error while stopping in post-mortem: <class 'KeyboardInterrupt'>
I have downloaded "model.ckpt-120000" from your Google Drive, as well as vgg_19.ckpt,
and set "checkpoint_path" in AvatarNet_config.yml to the path of vgg_19.ckpt,
but checkpoint_path never seems to be used anywhere in this project.
Any suggestions?