
NNet Repository

Introduction

The .nnet file format for fully connected ReLU networks was originally created in 2016 to define aircraft collision avoidance neural networks in a human-readable text document. It has since been incorporated into the Reluplex repository and used to define benchmark neural networks. The format is a simple text-based representation of feed-forward, fully connected, ReLU-activated neural networks. It is not affiliated with Neuroph or other frameworks that produce files with the .nnet extension.

This repository contains documentation for the .nnet format as well as useful functions for working with the networks. The nnet folder contains example neural network files. The converters folder contains functions to convert .nnet files to TensorFlow, ONNX, and Keras formats and vice versa. The python, julia, and cpp folders contain Python, Julia, and C++ functions for reading and evaluating .nnet networks. The examples folder provides Python examples that use the available functions.

This repository is set up as a Python package. To run the examples, make sure that the folder in which this repository resides (the parent directory of NNet) is added to the PYTHONPATH environment variable.

File format of .nnet

The file begins with header lines, followed by information about the network architecture, normalization information, and finally the model parameters. Line by line:

1: Header text. This can be any number of lines, so long as each begins with "//"
2: Four values: number of layers, number of inputs, number of outputs, and maximum layer size
3: A sequence of values describing the network layer sizes. Begins with the input size, then the size of the first layer, second layer, and so on through the output layer size
4: A flag that is no longer used and can be ignored
5: Minimum values of inputs (used to keep inputs within expected range)
6: Maximum values of inputs (used to keep inputs within expected range)
7: Mean values of inputs and one value for all outputs (used for normalization)
8: Range values of inputs and one value for all outputs (used for normalization)
9+: Begin defining the weight matrix for the first layer, followed by the bias vector. The weights and biases for the second layer follow after, until the weights and biases for the output layer are defined.
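For concreteness, a file for a hypothetical network with two inputs, one hidden layer of three ReLU units, and one output could look like the following (every value below is invented purely for illustration):

```text
// Example .nnet file (illustrative values only)
2,2,1,3,
2,3,1,
0,
-1.0,-1.0,
1.0,1.0,
0.0,0.0,0.0,
1.0,1.0,1.0,
0.1,0.2,
0.3,0.4,
0.5,0.6,
0.0,
0.0,
0.0,
1.0,-1.0,1.0,
0.0,
```

Here the second line says there are 2 layers (one hidden, one output), 2 inputs, 1 output, and a maximum layer size of 3. The eight weight/bias lines at the end are the 3x2 hidden weight matrix (one line per neuron), the 3 hidden biases, the 1x3 output weight matrix, and the single output bias.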

The minimum/maximum input values are used to define the range of input values seen during training, which can be used to ensure input values remain in the training range. Each input has its own value.

The mean/range values are those used to normalize the network training data before training. Normalization subtracts the mean and divides by the range, giving a distribution with zero mean and unit range. New inputs to the network should be normalized the same way, so there is a mean/range value for every input to the network. There is also one additional mean/range value shared by all network outputs. Raw network outputs can be re-scaled by multiplying by the output range and adding the output mean.
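The normalization just described can be sketched in plain Python. All names and values below are illustrative, not part of the repository's API; the last entry of means/ranges is the shared output value:

```python
# Illustrative normalization constants (not from a real network)
input_mins = [-1.0, -1.0]
input_maxes = [1.0, 1.0]
means = [0.1, -0.2, 5.0]    # one value per input, plus one shared output value
ranges = [2.0, 2.0, 10.0]   # same layout as means

def normalize_inputs(x):
    """Clip each input to the training range, then subtract the mean and divide by the range."""
    out = []
    for i, xi in enumerate(x):
        xi = min(max(xi, input_mins[i]), input_maxes[i])
        out.append((xi - means[i]) / ranges[i])
    return out

def denormalize_outputs(y):
    """Undo output normalization: multiply by the output range, add the output mean."""
    return [yi * ranges[-1] + means[-1] for yi in y]
```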

Writing .nnet files

In the utils folder, writeNNet.py contains a Python method for writing neural network data to a .nnet file. The main method, writeNNet, requires a list of weights, a list of biases, minimum input values, maximum input values, means of inputs/output, ranges of inputs/output, and the filename to which the network should be written.
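The expected layout can be illustrated with a simplified writer. This is a stand-in sketch, not the repository's actual writeNNet implementation; it assumes weights is a list of NumPy arrays stored one row per neuron (so weights[i] has shape layer_size[i+1] x layer_size[i]):

```python
import numpy as np

def write_nnet_sketch(weights, biases, input_mins, input_maxes, means, ranges, file_name):
    """Simplified sketch of the .nnet layout (not the repo's writeNNet)."""
    # Layer sizes: columns of the first matrix, then rows of each matrix
    layer_sizes = [weights[0].shape[1]] + [w.shape[0] for w in weights]
    with open(file_name, "w") as f:
        f.write("// Neural network written by write_nnet_sketch\n")
        f.write("%d,%d,%d,%d,\n" % (len(weights), layer_sizes[0],
                                    layer_sizes[-1], max(layer_sizes)))
        f.write(",".join(str(s) for s in layer_sizes) + ",\n")
        f.write("0,\n")  # unused flag
        for values in (input_mins, input_maxes, means, ranges):
            f.write(",".join("%.5e" % v for v in values) + ",\n")
        for w, b in zip(weights, biases):
            for row in w:            # one line per neuron's incoming weights
                f.write(",".join("%.5e" % v for v in row) + ",\n")
            for v in b:              # one line per bias value
                f.write("%.5e,\n" % v)
```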

Loading and evaluating .nnet files

There are three folders with C++, Julia, and Python implementations. Each contains an nnet.* file with functions for loading a network from a .nnet file and evaluating a set of inputs with the loaded model. Examples in each folder demonstrate how the functions can be used.
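A minimal, self-contained sketch of a loader and evaluator is shown below. The repository's own readers in the python, julia, and cpp folders are more complete; the names here are illustrative, normalization is skipped, and weights are assumed to be stored one row per neuron:

```python
import numpy as np

def load_nnet_sketch(file_name):
    """Simplified .nnet loader: returns (weights, biases); ignores normalization lines."""
    with open(file_name) as f:
        line = f.readline()
        while line.startswith("//"):            # skip header comment lines
            line = f.readline()
        num_layers = int(line.split(",")[0])
        layer_sizes = [int(s) for s in f.readline().split(",") if s.strip()]
        f.readline()                             # unused flag
        for _ in range(4):                       # mins, maxes, means, ranges (ignored here)
            f.readline()
        weights, biases = [], []
        for i in range(num_layers):
            rows, cols = layer_sizes[i + 1], layer_sizes[i]
            w = np.array([[float(v) for v in f.readline().split(",")[:cols]]
                          for _ in range(rows)])
            b = np.array([float(f.readline().split(",")[0]) for _ in range(rows)])
            weights.append(w)
            biases.append(b)
    return weights, biases

def evaluate_sketch(weights, biases, x):
    """ReLU on hidden layers, linear output layer, as the format describes."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, w @ x + b)
    return weights[-1] @ x + biases[-1]
```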

License

This code is licensed under the MIT license. See LICENSE for details.

Contributors

dependabot[bot], kjulian3


Issues

Node operation type Gemm not supported

Hi,

This is my Pytorch model exported to ONNX.

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(4, 4, bias=True)
        self.fc2 = nn.Linear(4, 4, bias=True)
        self.fc3 = nn.Linear(4, 3, bias=True)

    def forward(self, X):
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = self.fc3(X)
        return X

However, the converter returns the following error:

Node operation type Gemm not supported

Thanks a lot for your help!

Conversion Keras model to NNET: bug in writeNNet?

I'm trying to convert a .model file created using Keras into .nnet.
I used keras2nnet.py with my own .model file, and I get the following error:

Traceback (most recent call last):
File "keras2nnet.py", line 33, in
writeNNet(weights,biases,inputMins,inputMaxes,means,ranges,nnetFile)
File "/home/nathanael/NNet/utils/writeNNet.py", line 76, in writeNNet
f2.write("%.5e," % w[i][j]) #Five digits written. More can be used, but that requires more space.
TypeError: only size-1 arrays can be converted to Python scalars

Thanks for your help!

Length of Means and Ranges

Hi Kyle,

I have been running into issues with these 2 lines:

f2.write(','.join(str(means[i]) for i in range(inputSize+1)) + ',\n') #Means for normalizations
f2.write(','.join(str(ranges[i]) for i in range(inputSize+1)) + ',\n') #Ranges for normalizations

I believe ranges and means should have the same length as inputSize (without the +1) or am I missing something?

Additionally, this comment:

# Mean and range values for normalizing the inputs and outputs. All outputs are normalized with the same value

I believe should only refer to input and not outputs. I believe only inputs would be normalized.

Thank you!

NNet class vs writeNNet format class

I think the list/array of weights created by the NNet class constructor has a different shape than the one expected in writeNNet function. In any case, when I try to write the net from an NNet object to a file using writeNNet, I get an error.

You are welcome to have a look at test/TestNnetExtensions.py in my fork repository. Note that I wrote a new constructor (that gets attributes as arguments -- seems more convenient, certainly for my purposes) and turned the original constructor (from a file) into a class method; but I did not touch the code of that method. I considered changing the shape in NNet class, making it more consistent, but that would also affect the evaluation methods, hence complicating things in other ways.

nnet output layer

It seems to me that in the evaluation methods, you do not assume that there is an output layer, consisting of ReLU neurons -- is this on purpose?

Branch "split" in my repository contains the file python/nnet_extensions.py that implements a new functionality (splitting a network into two by "cutting" it after a given layer). I added to the NNet class methods that evaluate the network with no normalization, and with a ReLU output layer, and this kind of evaluation works well with the new functionality. But if the evaluation was made on purpose with no output layer, I will have to change my implementation of the split.

Thanks!

Unable to read model

Getting the following while trying to load a model:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
weights, biases = readNNet(r'C:\Users\Saikat.Sarkar\Documents\FakeImageDetection-master\FakeImageDetection-master\nnet\CNN2.nnet')
File "", line 22, in readNNet
line = f.readline()
File "C:\Program Files\Python36\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3333: character maps to

The model works well in Java

Trying to implement this project in python:
https://github.com/afsalashyana/FakeImageDetection
