
Input Convex Neural Networks (ICNNs)

This repository is by Brandon Amos, Lei Xu, and J. Zico Kolter and contains the TensorFlow source code to reproduce the experiments in our ICML 2017 paper Input Convex Neural Networks.

If you find this repository helpful in your publications, please consider citing our paper.

@InProceedings{amos2017icnn,
  title = {Input Convex Neural Networks},
  author = {Brandon Amos and Lei Xu and J. Zico Kolter},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {146--155},
  year = {2017},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}

Setup and Dependencies

  • Python/numpy
  • TensorFlow (we used r10)
  • OpenAI Gym + Mujoco (for the RL experiments)

Libraries

lib
└── bundle_entropy.py - Optimize a function over the [0,1] box with the bundle
                        entropy method. (Development is in progress and we are
                        still fixing some numerical issues here. A toy example of
                        the box-constrained problem this solves follows below.)
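
To make the interface concrete, here is a minimal baseline for the same kind of problem that bundle_entropy.py targets: minimizing a function over the [0,1] box. It uses plain projected gradient descent, not the bundle entropy method, and the objective, step size, and iteration count are illustrative assumptions.

import numpy as np

def projected_gradient_descent(grad_f, y0, lr=0.1, n_iter=100):
    """Minimize f over the [0,1] box given its gradient grad_f.

    A simple baseline sketch, NOT the bundle entropy method."""
    y = np.clip(y0, 0.0, 1.0)
    for _ in range(n_iter):
        y = np.clip(y - lr * grad_f(y), 0.0, 1.0)  # gradient step, then project
    return y

# Example: minimize 0.5 * ||y - target||^2 over [0,1]^3.
target = np.array([0.2, 1.4, -0.3])
y_star = projected_gradient_descent(lambda y: y - target, np.full(3, 0.5))
print(y_star)  # approximately [0.2, 1.0, 0.0]: out-of-box coordinates get clipped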

Synthetic Classification

This image shows FICNN (top) and PICNN (bottom) classification of synthetic non-convex decision boundaries.

synthetic-cls
├── icnn.py - Main script.
├── legend.py - Create a figure of just the legend.
├── make-tile.sh - Make the tile of images.
└── run.sh - Run all experiments on 4 GPUs.

Multi-Label Classification

(These are currently slightly inconsistent with our paper; we plan to synchronize the paper and code.)

multi-label-cls
├── bibsonomy.py - Loads the Bibsonomy datasets.
├── ebundle-vs-gd.py - Compare ebundle and gradient descent.
├── ff.py - Train a feed-forward net baseline.
├── icnn_ebundle.py - Train an ICNN with the bundle entropy method.
├── icnn.back.py - Train an ICNN with gradient descent and back differentiation.
└── icnn.plot.py - Plot the results from any multi-label cls experiment.

Image Completion

This image shows test set completions on the Olivetti faces dataset over the first few training iterations of a PICNN, trained with the bundle entropy method using 5 inner iterations.

completion
├── icnn.back.py - Train an ICNN with gradient descent and back differentiation.
├── icnn_ebundle.py - Train an ICNN with the bundle entropy method.
├── icnn.plot.py - Plot the results from any image completion experiment.
└── olivetti.py - Loads the Olivetti faces dataset.

Reinforcement Learning

Training

From the RL directory, run a single experiment with:

python src/main.py --model ICNN --env InvertedPendulum-v1 --outdir output \
  --total 100000 --train 100 --test 1 --tfseed 0 --npseed 0 --gymseed 0
  • Use --model to select a model from [DDPG, NAF, ICNN]. (A sketch of sweeping
    over models and seeds follows this list.)
  • Use --env to select a task from the OpenAI Gym MuJoCo environments.
  • View all of the parameters with python src/main.py -h.
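
As referenced above, here is a sketch of sweeping over models and seeds using only the flags shown in the single-experiment command; the output-directory layout and the choice of seeds are assumptions.

import itertools
import subprocess

# Run each model with a few seeds on one task, mirroring the command above.
for model, seed in itertools.product(['DDPG', 'NAF', 'ICNN'], [0, 1, 2]):
    outdir = 'output/{}-seed{}'.format(model, seed)  # hypothetical layout
    subprocess.check_call([
        'python', 'src/main.py',
        '--model', model,
        '--env', 'InvertedPendulum-v1',
        '--outdir', outdir,
        '--total', '100000', '--train', '100', '--test', '1',
        '--tfseed', str(seed), '--npseed', str(seed), '--gymseed', str(seed),
    ])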

Output

The TensorBoard summary is on by default. Use --summary False to turn it off. The TensorBoard summary includes (1) average Q value, (2) loss function, and (3) average reward for each training minibatch.

The testing total rewards are logged to log.txt. Each line is [training_timesteps] [testing_episode_total_reward].
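
A minimal sketch for inspecting these logs, assuming log.txt is written under the directory passed to --outdir and follows the two-column format above:

import numpy as np
import matplotlib.pyplot as plt

# Each line of log.txt: <training_timesteps> <testing_episode_total_reward>
data = np.loadtxt('output/log.txt')  # 'output' matches the --outdir used above
timesteps, rewards = data[:, 0], data[:, 1]

plt.plot(timesteps, rewards)
plt.xlabel('training timesteps')
plt.ylabel('test episode total reward')
plt.savefig('test-rewards.png')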

Settings

To reproduce our experiments, run the scripts in the RL directory.

Acknowledgments

The DDPG portions of our RL code are from Simon Ramstedt's SimonRamstedt/ddpg repository.

Licensing

Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the Apache 2.0 License. Portions from the following third party sources have been modified and are included in this repository. These portions are noted in the source files and are copyright their respective authors with the licenses listed.

Project               License
SimonRamstedt/ddpg    MIT


icnn's Issues

Help needed with Proposition 1 of your paper

Hi,
I have trouble understanding the proof of Proposition 1 in your paper (https://arxiv.org/pdf/1609.07152.pdf). Could you provide the intermediate steps showing why a fully connected ICNN (defined in equation (2) of your paper) is convex? In particular, why can W^(y)_i have negative values?
For example, when setting all W^(z)_i = 0, I would expect the network not to be convex in general.

I would appreciate any help. Thanks.
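
For reference, a sketch of the standard composition argument (not an official reply from the authors), written against the fully connected recursion the question refers to:

    % Fully connected ICNN: W_i^{(z)} \ge 0 elementwise, g_i convex and non-decreasing.
    z_{i+1} = g_i\!\left( W_i^{(z)} z_i + W_i^{(y)} y + b_i \right),
    \qquad f(y; \theta) = z_k .

By induction on i: the term W_i^{(y)} y + b_i is affine in y, and affine functions are convex regardless of the sign of the weights; W_i^{(z)} z_i is a nonnegative combination of functions that are, by the induction hypothesis, convex in y; and composing their sum with the convex, non-decreasing g_i preserves convexity, so z_{i+1} is convex in y. This is why only the W_i^{(z)} weights carry a sign constraint. Setting all W_i^{(z)} = 0 makes each layer g_i applied to an affine function of y, which is still convex.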

Data Files

Where can I get the CSV files for the Bibtex data?

Could you give us the complete code?

Dear authors, I am a Chinese student. I admire your research and have benefited a lot from reading your paper. I want to learn more about your technique and apply it to wind turbine control. I am still in the learning phase, but the code cannot run because some files are missing: ''No such file or directory: 'output.random-search/bestParams.json''

Could you update the repository and provide the complete code? We guarantee that we will not redistribute it, that we will cite your work in our paper, and that we will share our results with you. Thank you very much.

code is not understandable

The code defines the f function as:

def f_ficnn(self, x, y, reuse=False):
    fc = tflearn.fully_connected
    xy = tf.concat((x, y), 1)

    prevZ = None
    for i, sz in enumerate([200, 200, 1]):
        z_add = []

        # Path directly from the (x, y) input; these weights are unconstrained.
        with tf.variable_scope('z_x{}'.format(i)) as s:
            z_x = fc(xy, sz, reuse=reuse, scope=s, bias=True)
            z_add.append(z_x)

        # Path from the previous layer; for convexity these weights must be
        # kept nonnegative.
        if prevZ is not None:
            with tf.variable_scope('z_z{}_proj'.format(i)) as s:
                z_z = fc(prevZ, sz, reuse=reuse, scope=s, bias=False)
                z_add.append(z_z)

        # ReLU on the hidden layers; the final scalar layer stays linear.
        if sz != 1:
            z = tf.nn.relu(tf.add_n(z_add))
        else:
            z = tf.add_n(z_add)
        prevZ = z

    return tf.contrib.layers.flatten(z)

It has two inputs, x and y. I think there should be only one input, with y being the output. Do I misunderstand something?
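
For context (a sketch, not an official reply): in these models f(x, y) is a scalar energy that is convex in y; x is the conditioning input, and a prediction is obtained by minimizing f over y, which is the role of the gradient-descent (icnn.back.py) and bundle entropy (icnn_ebundle.py) variants elsewhere in the repository. Below is a minimal unrolled projected-gradient sketch against the f_ficnn signature above; the helper name, step size, and iteration count are assumptions.

# Assumes the same `import tensorflow as tf` as the snippet above.
def argmin_y(self, x, y_init, n_iter=30, lr=0.1):
    """Hypothetical sketch: approximate argmin_y f(x, y) over the [0,1] box
    by unrolled projected gradient descent on the convex energy."""
    y = y_init
    for _ in range(n_iter):
        E = self.f_ficnn(x, y, reuse=True)             # scalar energy, convex in y
        grad = tf.gradients(E, y)[0]                   # dE/dy through the graph
        y = tf.clip_by_value(y - lr * grad, 0.0, 1.0)  # step, then project onto [0,1]
    return y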

tensorflow has no attribute 'mul'

tf.mul is now deprecated, and needs to be replaced with tf.multiply.
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.

Traceback (most recent call last):
  File "icnn.back.py", line 421, in <module>
    main()
  File "icnn.back.py", line 104, in main
    model = Model(inputSz, outputSz, sess, args.nGdIter)
  File "icnn.back.py", line 131, in __init__
    E0_ = self.f(self.x_, self.y0_)
  File "icnn.back.py", line 330, in f
    z_yu = conv(tf.mul(y_red, yu_u), nFilter, kSz, strides=strides,
AttributeError: module 'tensorflow' has no attribute 'mul'

Edit: Here is a list of deprecated functions and their updated variants for tf >= 1.0 users (a compatibility-shim sketch follows the list):

  • tf.merge_all_summaries => tf.summary.merge_all
  • tf.histogram_summary => tf.summary.histogram
  • tf.scalar_summary => tf.summary.scalar
  • tf.mul => tf.multiply
  • tf.train.SummaryWriter => tf.summary.FileWriter
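
A minimal sketch of a compatibility shim built from the list above, assuming TensorFlow 1.x and that you would rather alias the old names than edit every call site; this is a stopgap, not a fix from the authors, and it must run before the first deprecated call (e.g. at the top of icnn.back.py).

import tensorflow as tf

# Alias the removed r0.x names to their tf >= 1.0 replacements.
if not hasattr(tf, 'mul'):
    tf.mul = tf.multiply
if not hasattr(tf, 'merge_all_summaries'):
    tf.merge_all_summaries = tf.summary.merge_all
if not hasattr(tf, 'histogram_summary'):
    tf.histogram_summary = tf.summary.histogram
if not hasattr(tf, 'scalar_summary'):
    tf.scalar_summary = tf.summary.scalar
if not hasattr(tf.train, 'SummaryWriter'):
    tf.train.SummaryWriter = tf.summary.FileWriter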
