Attention-aware Multi-stroke Style Transfer

This is the official TensorFlow implementation of our paper:

Attention-aware Multi-stroke Style Transfer, CVPR 2019. [Project] [arXiv]

Yuan Yao, Jianqiang Ren, Xuansong Xie, Weidong Liu, Yong-Jin Liu, Jun Wang

Overview

This project provides an arbitrary style transfer method that achieves both faithful style transfer and visual consistency between the content and stylized images. The key idea is to employ a self-attention mechanism, a multi-scale style swap, and a flexible stroke pattern fusion strategy to smoothly and adaptively apply suitable stroke patterns to different regions. In this manner, the synthesized images are more visually pleasing and are generated in a single feed-forward pass.
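To give a rough feel for the style-swap component, here is a minimal single-scale NumPy sketch (with hypothetical C x H x W feature shapes and a naive nearest-patch search, not the authors' TensorFlow implementation): each content feature patch is replaced by the style feature patch that best matches it under normalized cross-correlation (cosine similarity).

import numpy as np

def style_swap(content_feat, style_feat, patch=3, stride=1):
    """Replace each content feature patch with its best-matching
    style feature patch. Inputs are C x H x W arrays. A naive O(n^2)
    sketch; efficient implementations use (transposed) convolutions."""
    C, Hc, Wc = content_feat.shape
    _, Hs, Ws = style_feat.shape

    # Collect style patches plus L2-normalized copies for matching.
    style_patches = []
    for i in range(0, Hs - patch + 1, stride):
        for j in range(0, Ws - patch + 1, stride):
            style_patches.append(style_feat[:, i:i+patch, j:j+patch])
    normed = [p / (np.linalg.norm(p) + 1e-8) for p in style_patches]

    out = np.zeros_like(content_feat)
    count = np.zeros((Hc, Wc))  # overlap counts for averaging
    for i in range(0, Hc - patch + 1, stride):
        for j in range(0, Wc - patch + 1, stride):
            q = content_feat[:, i:i+patch, j:j+patch]
            qn = q / (np.linalg.norm(q) + 1e-8)
            # Pick the style patch with the highest cosine similarity.
            scores = [np.sum(qn * pn) for pn in normed]
            out[:, i:i+patch, j:j+patch] += style_patches[int(np.argmax(scores))]
            count[i:i+patch, j:j+patch] += 1
    return out / np.maximum(count, 1)

In the full method this swap runs at multiple feature scales, and the per-scale results are fused with weights derived from the self-attention map, which is what lets coarse and fine stroke patterns land on different regions.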

Examples

Prerequisites

  • Python (version 2.7)
  • TensorFlow (>=1.4)
  • NumPy
  • Matplotlib
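Assuming pip is available, the Python dependencies can be installed along these lines (the exact TensorFlow package differs for GPU builds, e.g. tensorflow-gpu in the 1.x series):

$ pip install 'tensorflow>=1.4,<2.0' numpy matplotlib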

Download

  • The MSCOCO dataset, used to train the proposed self-attention autoencoder.
  • A pre-trained VGG-19 model.

Usage

Test

Make sure a sub-folder named test_result exists under the images folder, then run

$ python test.py --model tf_model/aams.pb \
                 --content images/content/lenna_cropped.jpg \
                 --style images/style/candy.jpg \
                 --inter_weight 1.0

Both the stylized image and the attention map will be generated in test_result.

Our model was trained with TensorFlow 1.4.
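If you prefer to drive the frozen graph from your own script instead of test.py, the standard TensorFlow 1.x pattern looks like the sketch below. The tensor names ('content:0', 'style:0', 'stylized_output:0') are placeholders, not the model's actual names; inspect test.py or graph.get_operations() for the real ones.

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Load the frozen graph definition.
with tf.gfile.GFile('tf_model/aams.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# NOTE: hypothetical tensor names -- check the actual graph.
content_t = graph.get_tensor_by_name('content:0')
style_t = graph.get_tensor_by_name('style:0')
output_t = graph.get_tensor_by_name('stylized_output:0')

with tf.Session(graph=graph) as sess:
    content = np.random.rand(1, 512, 512, 3).astype(np.float32)  # stand-in inputs
    style = np.random.rand(1, 512, 512, 3).astype(np.float32)
    stylized = sess.run(output_t, feed_dict={content_t: content, style_t: style})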

Train

Download the MSCOCO dataset and filter out images with unsuitable formats (grayscale, etc.) by running the command below (a sketch of such a filter is given at the end of this section):

$ python filter_training_images.py --dataset datasets/COCO_Datasets/val2014

then

$ python train.py --dataset datasets/COCO_Datasets/val2014
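For reference, the filtering step essentially discards anything that is not a readable 3-channel RGB image. A minimal Pillow sketch of such a filter (not necessarily identical to filter_training_images.py):

import os
from PIL import Image

def filter_images(dataset_dir):
    """Delete images that are not readable 3-channel RGB files."""
    for name in os.listdir(dataset_dir):
        path = os.path.join(dataset_dir, name)
        keep = False
        try:
            with Image.open(path) as img:
                keep = (img.mode == 'RGB')  # drop grayscale, CMYK, palette, ...
        except IOError:  # unreadable or corrupt file
            keep = False
        if not keep:
            os.remove(path)

filter_images('datasets/COCO_Datasets/val2014')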

Citation

If our work is useful for your research, please consider citing:

@inproceedings{yao2019attention,
    title={Attention-aware Multi-stroke Style Transfer},
    author={Yao, Yuan and Ren, Jianqiang and Xie, Xuansong and Liu, Weidong and Liu, Yong-Jin and Wang, Jun},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2019}
}

License

© Alibaba, 2019. For academic and non-commercial use only.

Acknowledgement

We express our gratitude to the arbitrary style transfer works Style-swap, WCT and Avatar-Net, as we benefited a lot from both their papers and code.

Contact

If you have any questions or suggestions about this paper, feel free to contact Yuan Yao or Jianqiang Ren.
