neuralnetworks's People

Contributors

bryant1410, ivan-vasilev, linkerlin, saurabhsv, vojkog, zdgeorgiev

neuralnetworks's Issues

How to run the project in Eclipse?

I have downloaded the project into Eclipse, but how do I run it? Do I have to convert it to a Maven project? Can someone give me the detailed steps?

UniqueList purpose?

Ivan,
What is the purpose of the UniqueList class? If you need a Set that also preserves insertion order, you could use LinkedHashSet; otherwise, wouldn't a plain HashSet do?
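
For reference, a quick standalone illustration of the difference between the two JDK sets (plain java.util, unrelated to this library):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class SetOrderDemo {
        public static void main(String[] args) {
            Set<String> plain = new HashSet<>(Arrays.asList("c", "a", "b"));
            Set<String> ordered = new LinkedHashSet<>(Arrays.asList("c", "a", "b"));
            System.out.println(plain);   // iteration order is unspecified
            System.out.println(ordered); // [c, a, b] -- insertion order is preserved
        }
    }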

Failed test RBMTest.testContrastiveDivergence during build

When running a Gradle build (on Ubuntu 12.10, 32-bit), I get the following message:
com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.calculation.neuronfunctions.AparapiSigmoid$AparapiSigmoidFunction: CPU request can't be honored not CPU device

com.github.neuralnetworks.test.RBMTest > testContrastiveDivergence FAILED
java.lang.AssertionError: args should not be null
at com.amd.aparapi.KernelRunner.executeOpenCL(KernelRunner.java:1305)
at com.amd.aparapi.KernelRunner.execute(KernelRunner.java:1612)
at com.amd.aparapi.Kernel.execute(Kernel.java:1774)
at com.amd.aparapi.Kernel.execute(Kernel.java:1705)
at com.amd.aparapi.Kernel.execute(Kernel.java:1690)

Strange behavior when calculating Layers (probably Aparapi related)

Hi!
First of all, I have no experience with GPU processing and my personal computer doesn't even have a GPU.
I'm using your package inside a cognitive architecture project, and at some point it involves calculating some inputs in a loop. There I call a self-made method that reuses some lines from "propagateForward".

The method is:
public void calculate(Matrix input) {
    Set<Layer> calculatedLayers = new UniqueList<Layer>();
    calculatedLayers.add(mlp.getInputLayer());
    activations.addValues(mlp.getInputLayer(), input);
    mlp.getLayerCalculator().calculate(mlp, mlp.getOutputLayer(), calculatedLayers, activations);
}
And I call it with "TrainingInputProvider.getNextInput().getInput()" as the parameter.

The problem is that after the first iteration (which seems to run without any issue) this "calculate" method throws an error:

Exception in thread "Thread-7" java.lang.UnsatisfiedLinkError: com.amd.aparapi.KernelRunner.runKernelJNI(JLcom/amd/aparapi/Range;ZI)I

and then the thread is gone.

I feel that I'm doing something wrong, as I think it should either work throughout the whole loop or not work at all.

Can you help me with this?

Can the examples run without opencl.so?

Okay, I have downloaded the examples, but I keep getting the following messages in Eclipse:

Check your environment. Failed to load aparapi native library aparapi_x86_64 or possibly failed to locate opencl native library (opencl.dll/opencl.so). Ensure that both are in your PATH (windows) or in LD_LIBRARY_PATH (linux).

Now I have set aparapi_x86_64.so in my library path, but I have not downloaded OpenCL. Do I need OpenCL to run the examples? Is it absolutely necessary?

Execution mode GPU failed: OpenCL execution seems to have failed (runKernelJNI returned -51)

Hi @ivan-vasilev !

I've been trying to compare your package with TensorFlow on a CPU backend. It seems that my puny GPUs can't handle the MNIST example (I have both an on-board Intel one and an AMD Radeon one). Running the MNIST example in your package with a CPU backend works without a problem, but when I require that Aparapi use the GPU backend I get the following warning (fallBackToNextDevice):

WARNING: Execution mode GPU failed for AparapiBackpropReLU, modes=[AUTO], current = GPU: OpenCL execution seems to have failed (runKernelJNI returned -51)
com.aparapi.internal.exception.AparapiException: OpenCL execution seems to have failed (runKernelJNI returned -51)
	at com.aparapi.internal.kernel.KernelRunner.executeOpenCL(KernelRunner.java:1058)
	at com.aparapi.internal.kernel.KernelRunner.executeInternalInner(KernelRunner.java:1519)
	at com.aparapi.internal.kernel.KernelRunner.executeInternalOuter(KernelRunner.java:1180)
	at com.aparapi.internal.kernel.KernelRunner.execute(KernelRunner.java:1170)
	at com.aparapi.Kernel.execute(Kernel.java:2439)
	at com.aparapi.Kernel.execute(Kernel.java:2396)
	at com.aparapi.Kernel.execute(Kernel.java:2371)
	at com.github.neuralnetworks.util.KernelExecutionStrategy$GPUKernelExecution.execute(KernelExecutionStrategy.java:42)
	at com.github.neuralnetworks.calculation.neuronfunctions.AparapiFullyConnected.calculate(AparapiFullyConnected.java:151)
	at com.github.neuralnetworks.training.backpropagation.BackPropagationConnectionCalculatorImpl.calculate(BackPropagationConnectionCalculatorImpl.java:73)
	at com.github.neuralnetworks.calculation.LayerCalculatorBase.calculate(LayerCalculatorBase.java:44)
	at com.github.neuralnetworks.training.backpropagation.BackPropagationLayerCalculatorImpl.backpropagate(BackPropagationLayerCalculatorImpl.java:33)
	at com.github.neuralnetworks.training.backpropagation.BackPropagationTrainer.learnInput(BackPropagationTrainer.java:78)
	at com.github.neuralnetworks.training.OneStepTrainer.train(OneStepTrainer.java:44)
	at ml.sharony.ann.tf.examples.sentiment.BenchmarkTFCPU.test(BenchmarkTFCPU.java:134)

What does runKernelJNI returned -51 mean?

You can find the source code I'm running on my fork of your repo, under the benchmark-tf-cpu branch.

OpenCL problem

Hi,
What does one do with this? I'm trying to run some tests. I started with /neuralnetworks/nn-samples/src/test/java/com/github/neuralnetworks/samples/test/CifarTest.java (test1), and this is what I get:

Caused by: java.lang.IllegalArgumentException: Could not found resource cl/exboclkernels.cl in resource path
	at com.github.neuralnetworks.calculation.operations.opencl.OCL.CopyLibrary(OCL.java:155)
	at com.github.neuralnetworks.calculation.operations.opencl.OCL.loadNativeCodeFromJar(OCL.java:107)
	at com.github.neuralnetworks.calculation.operations.opencl.OCL.loadNativeCodeFromJar(OCL.java:59)
	at com.github.neuralnetworks.calculation.operations.opencl.OCL.<init>(OCL.java:38)
	at com.github.neuralnetworks.calculation.operations.opencl.OpenCLCore.<init>(OpenCLCore.java:35)
	at com.github.neuralnetworks.calculation.operations.opencl.OpenCLCore.<clinit>(OpenCLCore.java:15)

I run Aparapi examples on my computer and they work, so OpenCL is working.
Has anybody run into this?

With regards, Logi

DBN with softmax layer on top

Hi everyone,

I am trying to build a DBN with two layers: the first one should be trained with CD and the last one with BP. The last one should act like a softmax layer, so it can be used for classification.

This is the code I wrote so far:

public class MyTest {

public static void test() {
    Environment.getInstance().setUseDataSharedMemory(false);
    Environment.getInstance().setUseWeightsSharedMemory(false);

    //setup net
    DBN dbn = NNFactory.dbn(new int[] {4,2,2}, false);
    dbn.setLayerCalculator(NNFactory.lcWeightedSum(dbn, null));

    //get train and test datasets
    MyInputProvider trainSet = new MyInputProvider("0.train.data");
    MyInputProvider testSet = new MyInputProvider("0.test.data");

    //weights init
    NNRandomInitializer random = new NNRandomInitializer(new MersenneTwisterRandomInitializer());

    //setup trainer
    RBM firstRBM = dbn.getFirstNeuralNetwork();
    RBM secondRBM = dbn.getLastNeuralNetwork();
    secondRBM.setLayerCalculator(NNFactory.lcSoftRelu(secondRBM,null));
    AparapiCDTrainer firstTrainer = TrainerFactory.cdSigmoidBinaryTrainer(firstRBM, null, null, null, random, 0.5f, 0f, 0f, 0f, 1, 1, 5, true);
    BackPropagationTrainer secondTrainer = TrainerFactory.backPropagation(secondRBM, null, null, null, null, 0.5f, 0f, 0f, 0f, 1, 1,1, 5);
    //with random: null pointer exception
    //BackPropagationTrainer secondTrainer = TrainerFactory.backPropagation(secondRBM, null, null, null, random, 0.5f, 0f, 0f, 0f, 1, 1, 5, true);

    Map<NeuralNetwork, OneStepTrainer<?>> layerTrainers = new HashMap<>();
    layerTrainers.put(firstRBM, firstTrainer);
    layerTrainers.put(secondRBM, secondTrainer);

    DBNTrainer trainer = TrainerFactory.dbnTrainer(dbn,layerTrainers,trainSet,testSet,new MultipleNeuronsOutputError());

    //run training
    trainer.train();
    trainer.test();

    System.out.println(trainer.getOutputError().getTotalNetworkError());
}

}

Is this the right way?

Simple forward example?

How is it possible to create a simple NN, for example a pretrained perceptron with a given number of layers and all weights given, and execute it forward?
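
For reference, here is a minimal forward-pass sketch pieced together from snippets elsewhere on this page (NNFactory.mlpSigmoid from the XOR issue and the calculate() method quoted in the layer-calculation issue). The activations holder and the output accessor are assumptions, and loading pretrained weights into the connections is not shown, so treat this as an illustration rather than the library's documented API:

    // Sketch only; class names follow the snippets quoted in other issues here.
    NeuralNetworkImpl mlp = NNFactory.mlpSigmoid(new int[] {2, 3, 1}, false);

    // ...assign the pretrained weights to mlp's connections here (not shown)...

    public Matrix forward(Matrix input) {
        Set<Layer> calculatedLayers = new UniqueList<Layer>();
        calculatedLayers.add(mlp.getInputLayer());
        activations.addValues(mlp.getInputLayer(), input); // 'activations' is the values holder used in the calculate() snippet above (assumed)
        mlp.getLayerCalculator().calculate(mlp, mlp.getOutputLayer(), calculatedLayers, activations);
        return activations.get(mlp.getOutputLayer());      // assumed accessor for the output layer's values
    }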

How can I train my net with deep learning?

I read your amazing explanation at https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks.
I need your help. I have an ARFF file, which I can convert to any other format such as Excel or CSV. It contains binary features (0, 1) and two label classes (0, 1), and I want to use deep learning, especially MLP and DBN.
Can you tell me in detail how I can do that? Is there a jar file to add to my project, and sample code if available?
Thanks
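
For reference, based on the XOR and MultipleNeuronsOutputError examples elsewhere on this page, the general shape would be: parse the CSV rows into two float arrays (features and one-hot labels), wrap them in a SimpleInputProvider, and hand that to TrainerFactory.backPropagation. A rough sketch under those assumptions (the feature/label values, network sizes, and trainer parameters below are placeholders, not a documented recipe):

    // Placeholder data standing in for rows parsed from the CSV/ARFF file.
    float[][] features = { {0f, 1f, 1f, 0f}, {1f, 0f, 0f, 1f} };   // binary features, one row per sample
    float[][] labels   = { {1f, 0f},         {0f, 1f} };           // one output neuron per class

    NeuralNetworkImpl mlp = NNFactory.mlpSigmoid(new int[] {4, 10, 2}, true);
    SimpleInputProvider train = new SimpleInputProvider(features, labels);
    SimpleInputProvider test = new SimpleInputProvider(features, labels);

    // Parameters mirror the order used in the other examples on this page:
    // learning rate, momentum, l1, l2, dropout, training batch size, test batch size, epochs.
    BackPropagationTrainer<?> bpt = TrainerFactory.backPropagation(mlp, train, test,
            new MultipleNeuronsOutputError(),
            new NNRandomInitializer(new MersenneTwisterRandomInitializer(-0.01f, 0.01f)),
            0.1f, 0.7f, 0f, 0f, 0f, 1, 1, 100);

    bpt.train();
    bpt.test();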

Failed to load aparapi

I downloaded the latest code and imported the project as a Maven project into Eclipse. When I run the JUnit test for AETest on my MacBook Pro (OS X 10.9.2, i5, Intel Iris 1024 MB), I get the following error:

TRAINING testAEBackpropagation...
Check your environment. Failed to load aparapi native library aparapi_x86_64 or possibly failed to locate opencl native library (opencl.dll/opencl.so). Ensure that both are in your PATH (windows) or in LD_LIBRARY_PATH (linux).
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.calculation.neuronfunctions.AparapiSigmoid$AparapiSigmoidFunction: CPU request can't be honored not CPU device
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.calculation.neuronfunctions.AparapiSigmoid$AparapiSigmoidFunction: CPU request can't be honored not CPU device
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device
Apr 29, 2014 9:08:31 PM com.amd.aparapi.KernelRunner warnFallBackAndExecute
WARNING: Reverting to Java Thread Pool (JTP) for class com.github.neuralnetworks.training.backpropagation.BackPropagationSigmoid$AparapiBackpropSigmoid: CPU request can't be honored not CPU device

I cannot figure out this problem. Has anyone experienced the same problem? I really need some help here. Thanks!

Failed test FFNNTest.testParallelNetworks during build

When running a Gradle build (on Ubuntu 12.10, 32-bit), I get the following message:

com.github.neuralnetworks.test.FFNNTest > testParallelNetworks FAILED

java.lang.AssertionError: expected:<1.32> but was:<1.2000000476837158>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:494)
at org.junit.Assert.assertEquals(Assert.java:592)
at com.github.neuralnetworks.test.FFNNTest.testParallelNetworks(FFNNTest.java:365)

RBM test error is 100% in MnistTest

Why do I get a 100% error using the RBM test in MnistTest.java?

TESTING testRBM...
2.275 s total time
0.2275 s per minibatch of 10 mini batches
10000/10000 samples (1.0, 100.0%) error

Saving/Loading networks

As far as I can tell, it's not implemented, so I'm going to have to write some methods for it. So the question is, what would be the best-practice way of doing so with this architecture?
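
One possible starting point, assuming the network classes implement java.io.Serializable (an assumption, not something verified here), is plain Java object serialization of the whole network graph:

    import java.io.*;

    public class NetworkIO {
        // Saves the entire network object graph (weights included) to a file.
        // Only works if NeuralNetworkImpl and everything it references are Serializable (unverified assumption).
        public static void save(NeuralNetworkImpl nn, String path) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
                out.writeObject(nn);
            }
        }

        public static NeuralNetworkImpl load(String path) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
                return (NeuralNetworkImpl) in.readObject();
            }
        }
    }

If the classes turn out not to be Serializable, the fallback would be to extract and store the weight arrays yourself.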

Differences between Java 1.7 and Java 1.8 neural net libraries

Hi Ivan,

  1. Is there any difference between the Java 1.7 version of the NN library and the current one?
  2. Can you please explain how to map the output layer of an autoencoder to labels? You mentioned that the common practice is to extend the autoencoder with another fully connected layer (which matches the number of units in the input layer). That means we know the output layer neuron values. How do we translate them to labels?
  3. I am trying the Java 1.7 version of the library with some other data sets related to spam detection, but the accuracies are lower than what I get from NB or a linear-SVM / logistic regression library. The training error itself is around 26%.

XOR test fails

I'm trying to learn to use this package. I've played with it for a couple of days. I think the following NN should work:

public static float XOR_INPUT[][] = {{0.0f, 0.0f}, {1.0f, 0.0f}, {0.0f, 1.0f}, {1.0f, 1.0f}};
public static float XOR_TARGET[][] = {{0.0f}, {1.0f}, {1.0f}, {0.0f}};

public static void main(String[] args) {

    // execution mode
    Environment.getInstance().setExecutionMode(EXECUTION_MODE.SEQ);

    // create the network
    NeuralNetworkImpl mlp = NNFactory.mlpSigmoid(new int[]{2, 3, 1}, false);

    // training and testing data providers
    SimpleInputProvider trainInputProvider = new SimpleInputProvider(XOR_INPUT, XOR_TARGET);
    SimpleInputProvider testInputProvider = new SimpleInputProvider(XOR_INPUT, XOR_TARGET);

    OutputError outputError = new XorOutputError();

    // trainer
    BackPropagationTrainer<?> bpt = TrainerFactory.backPropagation(mlp, 
            trainInputProvider, 
            testInputProvider, 
            outputError,
            new NNRandomInitializer(new MersenneTwisterRandomInitializer(-0.01f, 0.01f), 0.5f) 
            , 0.02f // learning rate
            , 0.7f // momentum
            , 0f // l1weightDecay
            , 0f // l2 weightDecay
            , 0f // dropout rate
            , 1 // training batch size
            , 1 // test batch size
            , 200);       // epochs

    // log data
    bpt.addEventListener(new LogTrainingListener(Thread.currentThread().getStackTrace()[1].getMethodName(), true, true));

    // early stopping
    bpt.addEventListener(new EarlyStoppingListener(testInputProvider, 100, 0.015f));

    // train
    bpt.train();

    // test
    bpt.test();
}

But I get a 50% error! Changing the number of hidden neurons doesn't change anything.

What am I missing?

Thanks in advance.

Dwight

OS Differences

I wrote the code on my MacBook and it was working well, though not very fast. So I switched to my Windows desktop PC with a GPU, but the code just wouldn't run. I'm getting

"Jan 31, 2017 4:32:05 PM com.amd.aparapi.internal.kernel.KernelRunner executeOpenCL
WARNUNG: ### CL exec seems to have failed. Trying to revert to Java ###"

every time I run the code, but minor changes will make the code work again. For example, this code:

    int closest = -1;
    // some loops and raytracing later...
    if (closest > -1) {
        this.image[id] = 23;
    }

will produce an error, but just

    this.image[id] = 23;

without the conditional statement works great. Please help me, I'm confused!

Regards Julius

Sum Product Networks?

Ivan,
Congratulations, it was about time for someone to start a project like this! I have been investigating Aparapi for implementing Sum Product Networks (SPNs), as SPNs seem to exhibit much better properties than RBMs or DBNs (you would not know it if you only listened to Prof. Hinton's classes :)

Questions:

  1. Do you plan on implementing SPNs in addition to RBMs and DBNs?
  2. Do you plan on switching to Java 8 soon? It seems that Java 8 lambdas + Aparapi are very well suited to the latest AMD Kaveri (and upcoming Berlin) APUs.

Regards,
Hristo Stoyanov

MnistTest.testLeNetSmall fails with 90% error rate

MnistTest.testLeNetSmall is failing with a 90% error rate. testLeNetTiny and testLeNetTiny2 run fine, with around 10% and 4% error respectively.

9020/10000 samples (0.902, 90.200005%) error

java.lang.AssertionError:
Expected :0.0
Actual :0.9020000100135803

neuralnetworks is 2000 times slower using GPU than Theano using CPU

At first I couldn't get GPU runtimes to be any faster than CPU runtimes with neuralnetworks. Eventually I got the GPU to run faster, but only by making huge networks that take forever to complete. For example, I modified the testLenetSmall function to use this network:

NeuralNetworkImpl nn = NNFactory.convNN(new int[][] { { 28, 28, 1 }, { 5, 5, 120, 1 }, { 2, 2 }, { 5, 5, 120, 1 }, { 2, 2 },  { 3, 3, 120, 1 }, { 2, 2 }, {2048}, {2048}, {10} }, true);

Basically I added a third convolutional layer, bumped up the number of filters in all conv layers to 120 (from 20 and 50), quadrupled the neurons in the final hidden layer, and added another hidden layer with 2048 neurons. The GPU-enabled version runs about 2.4 times faster, but it's still dog slow, taking something like 12-14 seconds per batch (the batch size is 1), so training on the entire dataset of 60,000 images would take 8.3 to 9.7 days, i.e. roughly 10 days per epoch on the GPU. Meanwhile, I built a comparable network in Lasagne/Theano and it takes around 420 seconds per epoch on the CPU (in a VM at that), which is about 2000 times faster.

Train method fails in MultipleNeuronsOutputError.getTotalErrorSamples

I have the following code (located at the end of the issue) to create, train, and test an NN, but it fails in MultipleNeuronsOutputError here:

    for (OutputTargetTuple t : tuples) {
        if (!outputToTarget.get(t.outputPos).equals(t.targetPos)) {
            errorSamples++;
        }
    }

If I add outputToTarget.get(t.outputPos) != null && to the if statement, it finishes successfully, but with zero error samples and thus no error value.
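
That is, the version that runs to completion looks like this, but it then reports zero samples and no error value:

    for (OutputTargetTuple t : tuples) {
        if (outputToTarget.get(t.outputPos) != null && !outputToTarget.get(t.outputPos).equals(t.targetPos)) {
            errorSamples++;
        }
    }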

I've checked to make sure the data is read in correctly, and it seems to be fine. It trains just fine; the problem is that it fails on test.

Also switching to the GPU makes the training of a single epoch take forever. I've never actually seen it finish.

        Environment.getInstance().setExecutionMode(EXECUTION_MODE.SEQ);

        // create multi layer perceptron with one hidden layer and bias
        Environment.getInstance().setUseWeightsSharedMemory(false);
        Environment.getInstance().setUseDataSharedMemory(false);
        NeuralNetworkImpl mlp = NNFactory.mlpSigmoid(new int[]{40, 75, 75, 75, 10}, true);

        // create training and testing input providers
        FileReader reader;
        System.out.println("Try read data");
        List<float[][]> data = new ArrayList<float[][]>();
        try {
            reader = new FileReader("C:\\Users\\jself\\Data\\training_data.data2");
            data = GetDataFromFile(reader);
        } catch (FileNotFoundException e) {

        }
        System.out.println("Create input provider and trainer");
        SimpleInputProvider input = new SimpleInputProvider(data.get(0), data.get(1));
        // create backpropagation trainer for the network
        BackPropagationTrainer<?> bpt = TrainerFactory.backPropagation(mlp, input, input, new MultipleNeuronsOutputError(), new NNRandomInitializer(new MersenneTwisterRandomInitializer(-0.01f, 0.01f)), 0.1f, 0.7f, 0f, 0f, 0f, 1, 1, 1);

        // add logging
        bpt.addEventListener(new LogTrainingListener(Thread.currentThread().getStackTrace()[1].getMethodName()));

        // early stopping
        //bpt.addEventListener(new EarlyStoppingListener(testingInput, 10, 0.1f));

        System.out.println("Start training");
        // train
        bpt.train();

        System.out.println("Start testing");
        // test
        bpt.test();

Should all tests pass?

I ran all tests from within Eclipse, which found all classes with the @Test annotation and ran them. Most of them passed, but two failed:

  1. com.github.neuralnetworks.samples.test.IrisTest.testAE()

The last assertion failed: java.lang.AssertionError: expected:<0.0> but was:<0.9866667>

  2. com.github.neuralnetworks.test.AETest.testAEBackpropagation()

The last assertion also failed: java.lang.AssertionError: expected:<0.0> but was:<0.1>

Is this normal? Are these tests expected to pass?

Index out of range: 0 -> 89+0 to 0

Hi. First of all this is a great library that you have created.

I set up the Mnist test example and everything runs fine.

When I try to create my own version with images as files, using the Mnist example as a starting point, I convert the PNG file into a byte array and use the name of the file as the label, but I get this error:

Exception in thread "main" java.lang.IllegalArgumentException: Index out of range: 0 -> 89+0 to 0
at com.github.neuralnetworks.util.Tensor.lambda$getIndex$14(Tensor.java:150)
at com.github.neuralnetworks.util.Tensor$$Lambda$25/1853205005.applyAsInt(Unknown Source)
at java.util.stream.IntPipeline$3$1.accept(Unknown Source)
at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Unknown Source)
at java.util.Spliterator$OfInt.forEachRemaining(Unknown Source)
at java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.util.stream.IntPipeline.reduce(Unknown Source)
at java.util.stream.IntPipeline.sum(Unknown Source)
at com.github.neuralnetworks.util.Tensor.getIndex(Tensor.java:154)
at com.github.neuralnetworks.util.Tensor.set(Tensor.java:93)
at com.github.neuralnetworks.util.Matrix.set(Matrix.java:102)
at main.LetterTargetMultiNeuronOutputConverter.convert(LetterTargetMultiNeuronOutputConverter.java:23)
at main.LetterInputProvider.getNextUnmodifiedInput(LetterInputProvider.java:122)
at com.github.neuralnetworks.training.TrainingInputProviderImpl.getNextInput(TrainingInputProviderImpl.java:19)
at com.github.neuralnetworks.training.OneStepTrainer.train(OneStepTrainer.java:42)
at main.my_test1.my_cnn(my_test1.java:51)
at main.my_test1.main(my_test1.java:27)

Can you please explain why this would happen? I'm new to NNs and am not sure how to adapt the Mnist example to work with PNG files instead of the Mnist IDX files.
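
On the PNG side specifically, here is a minimal sketch for turning a grayscale PNG into a normalized float array using only standard Java imaging APIs; the 28x28 assumption simply mirrors MNIST, and wiring the result into a TrainingInputProvider is not shown:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class PngToInput {
        // Reads a grayscale PNG and returns its pixel intensities normalized to [0, 1].
        // Assumes the image is already 28x28 like MNIST; resize it beforehand if it is not.
        public static float[] toFloatArray(File pngFile) throws IOException {
            BufferedImage img = ImageIO.read(pngFile);
            int w = img.getWidth();
            int h = img.getHeight();
            float[] pixels = new float[w * h];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int gray = img.getRGB(x, y) & 0xFF;   // blue channel; equals the gray level for grayscale images
                    pixels[y * w + x] = gray / 255f;      // normalize to [0, 1]
                }
            }
            return pixels;
        }
    }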
