
prophet's Introduction

PROPHET - Neural Network Library


A simple neural net implementation written in Rust with a focus on cache-efficiency and sequential performance.

Currently only supports supervised learning with fully connected layers.
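A fully connected layer computes `y = f(W·x + b)` for input `x`, weight matrix `W`, bias `b`, and activation `f`. The following is a minimal dependency-free sketch of that forward pass, for illustration only; it is not prophet's actual API.

```rust
// Minimal sketch of a fully connected (dense) layer forward pass:
// y = tanh(W · x + b). Illustrative only — not prophet's actual API.

/// Compute the output of one dense layer with a tanh activation.
fn dense_forward(weights: &[Vec<f32>], bias: &[f32], input: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .zip(bias)
        .map(|(row, b)| {
            // Weighted sum of the inputs plus the bias neuron's contribution.
            let sum: f32 = row.iter().zip(input).map(|(w, x)| w * x).sum();
            (sum + b).tanh()
        })
        .collect()
}

fn main() {
    // A 2-input, 3-output layer with hand-picked weights.
    let weights = vec![
        vec![0.5, -0.5],
        vec![1.0, 1.0],
        vec![0.0, 0.25],
    ];
    let bias = vec![0.1, 0.0, -0.1];
    let output = dense_forward(&weights, &bias, &[1.0, 2.0]);
    assert_eq!(output.len(), 3);
    println!("{:?}", output);
}
```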

How to use

The preferred way to obtain prophet is via cargo or GitHub.

Compile prophet with

cargo build

Run the test suite with

cargo test --release

Note: It is recommended to use --release for testing, since compiler optimizations speed up prophet's tests dramatically.

For additional information while running some long tests use

cargo test --release --verbose -- --nocapture

Run performance test with

cargo bench --features benches

Planned Features

  • Convolutional layers: foundations have already been laid out!
  • GPGPU support via Vulkano
  • Even more flexible learning methods

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Release Notes (YYYY/MM/DD)

0.4.2 (2017/10/13)

  • Relicensed the library under the dual-license model where the user can choose between the MIT license and the Apache License, Version 2.0.
  • Improved performance of learning algorithms by up to 27%*. (*Tested on my local machine.)
  • Updated ndarray from 0.10.10 to 0.10.11 and itertools from 0.6.5 to 0.7.0.
  • Relaxed dependency version constraints for rand, num, log and ndarray.
  • Usability: Added a HOW TO USE section to the README.
  • Dev
    • Added some unit tests for NeuralNet components for improved stability and maintainability.

0.4.1 (2017/08/27)

  • Fixed a long-standing non-deterministic bug.
  • Reverted NeuralLayer::random back to weak_rng instead of ChaChaRng - weak_rng is much faster, and ChaChaRng's cryptographic safety is not needed for weight initialization.
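The weak_rng of the rand crate of that era was an xorshift generator: fast and statistically decent, but not cryptographically secure, which is exactly the trade-off weight initialization wants. A minimal illustrative xorshift (not rand's exact implementation) looks like this:

```rust
// Why a "weak" RNG suffices for weight initialization: xorshift (the
// algorithm family behind rand's old weak_rng) is fast and statistically
// decent, but offers no cryptographic guarantees — which weight
// initialization never needs. Illustrative sketch, not rand's exact code.

/// A minimal 64-bit xorshift generator.
struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    fn new(seed: u64) -> Self {
        // A zero state would stay zero forever.
        assert!(seed != 0);
        XorShift64 { state: seed }
    }

    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    /// Uniform f32 in [-1, 1), a typical range for initial weights.
    fn next_weight(&mut self) -> f32 {
        (self.next_u64() >> 40) as f32 / (1u64 << 24) as f32 * 2.0 - 1.0
    }
}

fn main() {
    let mut rng = XorShift64::new(0xdead_beef);
    let weights: Vec<f32> = (0..6).map(|_| rng.next_weight()).collect();
    assert!(weights.iter().all(|w| (-1.0..1.0).contains(w)));
    println!("{:?}", weights);
}
```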

0.4.0 (2017/08/09)

  • Updated ndarray dependency version from 0.9 to 0.10.
  • Updated serde dependency version from 0.9 to 1.0.
  • Enabled the serde feature by default.
  • NeuralLayer::random now uses ChaChaRng internally instead of weak_rng.
  • Devel:
    • Travis CI now uses the new trusty environment.
    • Travis CI now uploads code coverage to coveralls and codecov.io.
    • Travis CI no longer requires sudo.

prophet's People

Contributors

robbepop

Forkers

lamaboy2018

prophet's Issues

Write unittests for ...

Write the remaining unit tests for the new (and old) components of this library.

Missing unit tests are:

  • utils.rs:
    • LearnRate
    • LearnMomentum
  • nn.rs:
    • NeuralNet
  • topology_v4.rs:
    • FullyConnectedLayer
    • ActivationLayer
    • AnyLayer
    • LayerSize
    • Topology
    • InitializingTopology
  • layer:
    • ActivationLayer
    • FullyConnectedLayer
    • ContainerLayer
    • AnyLayer
  • utils module:
    • buffer_base module
    • matrix_base module
  • trainer:
    • utils module:
      • MeanSquaredError
    • mentor module:
      • Context
      • Mentor
      • InitializingMentor

Rayon based parallel execution

Implement parallel execution based on rayon via the parallel wrappers provided by ndarray.

This is the initial step towards full parallel execution.
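The goal can be pictured as computing each output row of a matrix-vector product in parallel. In the real implementation rayon (via ndarray's parallel wrappers) would handle the work splitting with a work-stealing pool; the dependency-free sketch below uses scoped threads merely to stand in for that:

```rust
use std::thread;

// Sketch of the goal: compute each output row of a matrix–vector product
// in parallel. rayon (via ndarray's parallel wrappers) would handle the
// work splitting; here plain scoped threads stand in for it.

fn par_matvec(matrix: &[Vec<f32>], input: &[f32]) -> Vec<f32> {
    let mut output = vec![0.0f32; matrix.len()];
    thread::scope(|scope| {
        for (row, out) in matrix.iter().zip(output.iter_mut()) {
            // One thread per row; rayon would instead reuse a thread pool.
            scope.spawn(move || {
                *out = row.iter().zip(input).map(|(w, x)| w * x).sum();
            });
        }
    });
    output
}

fn main() {
    let matrix = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let result = par_matvec(&matrix, &[1.0, 1.0]);
    assert_eq!(result, vec![3.0, 7.0]);
    println!("{:?}", result);
}
```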

Refactor layer sizes to be of equal sizes

The concrete layers of the neural network were designed to be very memory-efficient, which resulted in a design with asymmetric layer sizes (because of the way bias neurons are represented). This has led to several issues and implementation difficulties, preventing the use of more efficiently implemented lower-level algorithms provided by ndarray and other libraries.
This refactoring of layer sizes also helps with implementing the new layer and topology types, paving the way for convolutional layers in the end.
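The two bias representations in question can be sketched as follows: the "asymmetric" form keeps a separate bias vector, while the "symmetric" form appends a constant-1 entry to the input so the bias becomes an ordinary weight column and uniform matrix routines can handle everything. This is an illustrative sketch, not prophet's actual code:

```rust
// Two equivalent bias representations. The "asymmetric" form keeps a
// separate bias vector; the "symmetric" form appends a constant-1 entry
// to the input so the bias becomes an ordinary weight column, letting
// uniform matrix routines (e.g. from ndarray) handle everything.

fn forward_asymmetric(weights: &[Vec<f32>], bias: &[f32], input: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .zip(bias)
        .map(|(row, b)| row.iter().zip(input).map(|(w, x)| w * x).sum::<f32>() + b)
        .collect()
}

fn forward_symmetric(aug_weights: &[Vec<f32>], input: &[f32]) -> Vec<f32> {
    // Augment the input with the constant bias neuron.
    let mut aug = input.to_vec();
    aug.push(1.0);
    aug_weights
        .iter()
        .map(|row| row.iter().zip(&aug).map(|(w, x)| w * x).sum())
        .collect()
}

fn main() {
    let weights = vec![vec![0.5, -1.0], vec![2.0, 0.25]];
    let bias = vec![0.1, -0.2];
    // The same layer, with the bias folded into an extra weight column.
    let aug = vec![vec![0.5, -1.0, 0.1], vec![2.0, 0.25, -0.2]];
    let input = [1.0, 2.0];
    assert_eq!(
        forward_asymmetric(&weights, &bias, &input),
        forward_symmetric(&aug, &input)
    );
}
```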

Redesign topology builder

The current topology builder is not flexible enough for building up complex neural networks involving different layer types such as convolutional layers.

The new topology build infrastructure should enforce a mirroring between abstract topology layers and concrete neural network layers.

Some thought has already been given to this matter; the results so far are:

  • There could be a need for some kind of InputLayer in the topology structure to represent the initial layer.
  • Fully connected layers could be renamed to DenseLayer, which is shorter.
  • There is a need for a generic ActivationLayer that is only responsible for applying the activation function. For identity activation functions this has the effect of simply omitting the activation layer.
  • At first, DenseLayer and ActivationLayer suffice to replace the current system. However, this structure makes it possible to later add layer types for convolutional computation, such as a 2-dimensional PoolingLayer and ConvolutionLayer.
  • For interoperability between convolutional layers (2D) and normal layers (1D), there needs to be some kind of conversion between the 2-dimensional and 1-dimensional layers.
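The ideas above could be sketched as a builder where abstract layers mirror concrete ones and DenseLayer and ActivationLayer are separate building blocks. All names here are hypothetical, not prophet's actual API:

```rust
// Hypothetical sketch of a more flexible topology builder along the lines
// discussed above: DenseLayer and ActivationLayer as separate building
// blocks, with identity activations elided. Names are illustrative only.

#[derive(Debug, PartialEq)]
enum Layer {
    Dense { inputs: usize, outputs: usize },
    Activation(&'static str),
}

struct TopologyBuilder {
    last_size: usize,
    layers: Vec<Layer>,
}

impl TopologyBuilder {
    fn input(size: usize) -> Self {
        TopologyBuilder { last_size: size, layers: Vec::new() }
    }

    fn dense(mut self, outputs: usize) -> Self {
        self.layers.push(Layer::Dense { inputs: self.last_size, outputs });
        self.last_size = outputs;
        self
    }

    fn activation(mut self, kind: &'static str) -> Self {
        // An identity activation layer can simply be left out.
        if kind != "identity" {
            self.layers.push(Layer::Activation(kind));
        }
        self
    }

    fn build(self) -> Vec<Layer> {
        self.layers
    }
}

fn main() {
    let topology = TopologyBuilder::input(2)
        .dense(3).activation("tanh")
        .dense(1).activation("identity") // elided
        .build();
    assert_eq!(topology.len(), 3);
    println!("{:?}", topology);
}
```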

Make 0.5 master, and 0.4 a branch for better visibility.

Right now the project looks dead on GitHub, since there hasn't been any activity on master for ~1 year - for most parts of the code even ~2 years.

That makes it hard to attract more developers (or even to explain to your colleagues that the project is not totally abandoned ...).

I think it would make sense to make next the new master for better visibility.

Less panics, more Results

Currently prophet handles too many (potential) errors with panics instead of proper Rust-style error handling via the Result return type and proper error kinds.
This leads to less testable code, prevents better error-reporting strategies, and should be addressed.

The next branch already implements (decent) error kinds and a Rust-style Error struct with the typical trait impls. The next step is to migrate the rest of the code base away from asserts to this format.
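The migration boils down to turning each assert into a typed error return. A minimal sketch, with an illustrative error-kind name:

```rust
// Sketch of the assert → Result migration: a size check that previously
// panicked now returns a typed error. The error-kind name is illustrative,
// not necessarily what the next branch defines.

#[derive(Debug, PartialEq)]
enum ErrorKind {
    UnmatchingLayerSizes { expected: usize, actual: usize },
}

fn check_input_size(expected: usize, input: &[f32]) -> Result<(), ErrorKind> {
    // Previously: assert_eq!(input.len(), expected);
    if input.len() != expected {
        return Err(ErrorKind::UnmatchingLayerSizes {
            expected,
            actual: input.len(),
        });
    }
    Ok(())
}

fn main() {
    assert!(check_input_size(2, &[1.0, 2.0]).is_ok());
    // The caller now gets a value it can inspect, log, or propagate with `?`.
    assert!(check_input_size(3, &[1.0]).is_err());
}
```

Unlike a panic, the returned error composes with `?` and can be asserted on in unit tests, which directly serves the testability goal mentioned above.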

Create utility abstractions to better distribute invariants

Create utility abstractions that help to provide guarantees for invariants throughout the execution of the program. Testability should also be significantly improved by partitioning the program's complexity into smaller units.

This includes but is not limited to following utility components:

  • WeightsMatrix & DeltaWeightsMatrix: Abstraction built on top of Array2<f32> to more semantically provide an interface for weights matrices mainly used by FullyConnectedLayer.
  • SignalBuffer: Abstraction to model signals instead of using raw Array1<f32>.
  • ErrorSignalBuffer: Same as with SignalBuffer but specialized for gradients in gradient descent computation.

Certain layer kinds:

  • FullyConnectedLayer: Abstracts mechanics of fully connected (dense) layers.
  • ActivationLayer: A layer for activation application.
  • ContainerLayer: A layer that contains other layers in a strictly sequential order.
  • Layer, including variants for FullyConnectedLayer and ActivationLayer

Core functionality & utilities:

  • NeuralNet: Implements the concrete interface for neural networks and contains layers that describe its computation.

Also model some traits that each stand uniquely for an important operation and behaviour shared by those abstractions:

  • ProcessInputSignal: For internal (layer-based) feed forward operations.
  • HasOutputSignal: For layers with output signals. In theory every layer requires this.
  • CalculateOutputErrorSignal: For layers that work with gradient descent learning.
  • HasErrorSignal: See above ...
  • PropagateErrorSignal: See above ...
  • ApplyErrorSignalCorrection: See above ...
  • SizedLayer: Interface for working with input and output size of layers.
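As a hedged sketch of how such abstractions might fit together: a newtype like SignalBuffer wraps a raw buffer so invariants live in one place, and a trait like SizedLayer captures one shared operation. The names mirror the issue text, but the bodies are illustrative only:

```rust
// Illustrative sketch of the utility abstractions above: a SignalBuffer
// newtype over a raw buffer, and a SizedLayer trait capturing one shared
// operation. Names follow the issue text; bodies are hypothetical.

struct SignalBuffer(Vec<f32>);

impl SignalBuffer {
    fn zeros(len: usize) -> Self {
        SignalBuffer(vec![0.0; len])
    }
    fn len(&self) -> usize {
        self.0.len()
    }
}

/// Layers expose their input and output dimensions through one interface.
trait SizedLayer {
    fn inputs(&self) -> usize;
    fn outputs(&self) -> usize;
}

struct FullyConnectedLayer {
    weights: Vec<Vec<f32>>, // one row per output neuron
    outputs: SignalBuffer,
}

impl SizedLayer for FullyConnectedLayer {
    fn inputs(&self) -> usize {
        self.weights.first().map_or(0, Vec::len)
    }
    fn outputs(&self) -> usize {
        self.outputs.len()
    }
}

fn main() {
    let layer = FullyConnectedLayer {
        weights: vec![vec![0.0; 4]; 3],
        outputs: SignalBuffer::zeros(3),
    };
    assert_eq!((layer.inputs(), layer.outputs()), (4, 3));
}
```

Because callers only see the trait and the newtype's methods, invariants such as "the output buffer length matches the number of weight rows" can be enforced at construction time and unit-tested in isolation.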
