
fast-weights's Introduction

Hi, I'm Goku Mohandas

I create platforms that enable people to solve problems.

🔥 We're among the top MLOps repositories on GitHub

Connect with me via Twitter or LinkedIn, or subscribe to Made With ML for monthly updates on new content!


fast-weights's People

Contributors

gokumohandas


fast-weights's Issues

License?

Would you mind adding a license to this repo?

Possible bug on how to initialise A

In model.py:

self.A = tf.add(tf.scalar_mul(self.l, self.A),
                tf.scalar_mul(self.e, tf.batch_matmul(
                    tf.transpose(self.h_s, [0, 2, 1]), self.h_s)))

Shouldn't it be:

self.e * A + self.l *

Thanks!
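For context, the fast-weights update from the Ba et al. paper is A(t) = λ·A(t−1) + η·h(t)h(t)ᵀ, where λ is the decay rate and η is the fast learning rate; the question above comes down to which of the two flags (self.l, self.e) holds which hyperparameter. A minimal NumPy sketch of the paper's update, with illustrative names and the paper's default values (not the repo's actual code):

```python
import numpy as np

def fast_weights_update(A, h, lam=0.95, eta=0.5):
    """One step of the fast-weights update from Ba et al. (2016):
        A(t) = lam * A(t-1) + eta * h(t) h(t)^T
    A has shape [batch, hidden, hidden]; h has shape [batch, 1, hidden].
    lam is the decay rate, eta the fast learning rate (paper defaults).
    """
    # Outer product h^T h per sample: [batch, hidden, hidden]
    outer = np.transpose(h, (0, 2, 1)) @ h
    return lam * A + eta * outer

# Tiny smoke test: starting from A = 0, one update leaves eta * h^T h.
A = np.zeros((2, 3, 3))
h = np.ones((2, 1, 3))
A = fast_weights_update(A, h)
```

Under this reading, the decay multiplies A and the learning rate multiplies the outer product, so the answer hinges on whether FLAGS.l and FLAGS.e were assigned the intended values in train.py.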

Argument count from model.step doesn't match dictionary unpack in train

In train.py, line 113, the call to model.step() returns five values, but the caller unpacks only four, which produces the ValueError below (under Python 2.7).

Traceback (most recent call last):
  File "train.py", line 279, in <module>
    train(FLAGS)
  File "train.py", line 117, in train
    FLAGS.l, FLAGS.e, forward_only=False)
ValueError: too many values to unpack

Patched in pull request #1
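The failure mode can be reproduced in isolation; the names below are illustrative, not the real model.step signature:

```python
# A function that returns five values, unpacked into four targets,
# raises ValueError at the unpacking site -- the same error as above.
def step():
    return 1, 2, 3, 4, 5

try:
    a, b, c, d = step()
except ValueError as err:
    print(err)  # "too many values to unpack ..."
```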

Layer normalization not done correctly

In the RNN-LN-FW model, the mean and standard deviation are computed incorrectly: they are reduced over the batch axis (axis 0), as they would be for batch normalization:

mu = tf.reduce_mean(self.h_s, reduction_indices=0) # each sample
sigma = tf.sqrt(tf.reduce_mean(tf.square(self.h_s - mu), reduction_indices=0))

self.h_s has shape [batch_size, 1, num_hidden]

Whereas in the RNN-LN model they are calculated correctly as:

mean, var = tf.nn.moments(inputs, [1], keep_dims=True)

inputs has shape [batch_size, num_hidden]
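The difference is just the reduction axis: layer normalization takes each sample's statistics over its hidden units (the last axis), whereas reducing over axis 0 pools statistics across the batch. A NumPy sketch of the distinction, using the shapes quoted above (illustrative, not the repo's code):

```python
import numpy as np

batch_size, num_hidden = 4, 8
h_s = np.random.randn(batch_size, 1, num_hidden)

# Buggy version: reduces over axis 0 (the batch), giving
# batch-norm-style statistics of shape [1, num_hidden].
mu_bad = h_s.mean(axis=0)

# Layer norm: reduce over the hidden dimension, per sample,
# giving statistics of shape [batch_size, 1, 1].
mu = h_s.mean(axis=-1, keepdims=True)
sigma = h_s.std(axis=-1, keepdims=True)
h_ln = (h_s - mu) / (sigma + 1e-8)

# After layer norm, each sample's hidden vector has ~zero mean,
# regardless of the other samples in the batch.
```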
