
Comments (13)

SheldonCurtiss commented on August 23, 2024

I've checked out https://github.com/uber-research/deep-neuroevolution
but found it impossible to get running.

I'm also currently investigating https://github.com/Xilinx/DarwiNN, but it's fully in Python, so I expect this repo would likely be faster.


sebbeutler commented on August 23, 2024

@SheldonCurtiss This version of neuroevolution is currently in development. It is based on this paper: http://nn.cs.utexas.edu/downloads/papers/stanley.cec02.pdf

I have already implemented basic versions of it in Python and C#, but with pretty bad performance.
So I'm reimplementing it in C, plus a second version on GPU (this repo), with better optimizations and full options.
As of right now I'm nowhere near completion; there is still a lot to do.

If you want, check out SharpNeat, a C# implementation that runs pretty well.
There is also neat-python, but I've never tried that one.
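
For reference, a minimal neat-python run looks roughly like this (a sketch based on its documented API; the config filename and the toy fitness function are placeholders, not anything from this thread):

```python
import neat

def eval_genomes(genomes, config):
    # neat-python calls this once per generation; it must set genome.fitness.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # Placeholder task: reward networks whose output for [1.0, 0.0] is near 1.
        output = net.activate([1.0, 0.0])
        genome.fitness = 1.0 - abs(1.0 - output[0])

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'config-feedforward')  # a standard neat-python config file
population = neat.Population(config)
winner = population.run(eval_genomes, 50)  # evolve for up to 50 generations
```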

What are you looking for in neuroevolution of neural networks?
Do you want to use it in a project, or implement one yourself?

There are other implementations of the NEAT paper on GPU if you want more resources.


SheldonCurtiss commented on August 23, 2024

Do you plan to share the Python portion?
I'm currently using https://github.com/ju-leon/NEATer, which doesn't use the GPU but is quite decent.

'There are other implementations of the NEAT paper on GPU if you want more resources.'
The two I linked are the only ones I'm aware of; I'd love suggestions, because I've dug a ton.
Also, it doesn't make sense to me why you'd build your own GPU implementation if well-supported ones already existed.


SheldonCurtiss commented on August 23, 2024

Unless I'm mistaken, this is the only repo that even claims to have a functioning NEAT implementation utilizing the GPU; I saw a few others whose authors gave up on their projects.
I've reached the point of reading through the CUDA documentation myself, because as you know, neuroevolution speed is rough.

I'd kill for literally anything as a starting point, given I've never done anything similar to this.
I plan to attempt to run your project, but based on my very limited understanding, I'm pretty sure I won't be able to do much without the Python component.

The blog post on the Uber one claimed around a 4x+ performance increase using CUDA, since it also freed up the CPU. Do you have any rough estimate of how fast your implementation might be?

"What are you looking for in neuroevolution of neural networks ?
Use it in a project? Implement one?"

Trying to use it in a project, I've exhausted many approaches and neuroevolution seems to be promising thus far. That being said without significant speed increases I don't think any large problems are feasible.
That being said I'm investigating pretty much every possible approach, I.e https://github.com/deepmind/dnc


sebbeutler commented on August 23, 2024

You should find pretty much all the good implementations of it on this website: http://eplex.cs.ucf.edu/neat_software/

"Also, it doesn't make sense to me why you'd build your own GPU implementation if well-supported ones already existed."

Honestly I don't know; I've never tried one. I have used TensorFlow-NEAT, but it's not that efficient because most of it is still Python.

Also, I will implement Python bindings once everything is running correctly.
The first goal is to reproduce the NEAT algorithm, then transition to a HyperNEAT version, which is more efficient and can scale to any amount of data (close to a CNN), whereas default NEAT takes a lot of time to process large numbers of inputs.
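
To make the HyperNEAT idea concrete: instead of evolving every connection weight directly, you evolve a small network (a CPPN) and query it with the coordinates of pairs of substrate neurons to get the weight between them. A minimal sketch of that query step (the `cppn` callable stands in for an already-evolved network; this is not this repo's API):

```python
import itertools

def substrate_weights(cppn, input_coords, output_coords, threshold=0.2):
    """Query an evolved CPPN for every input->output connection weight.

    cppn(x1, y1, x2, y2) -> float is assumed to be an already-evolved
    network; the coordinates are neuron positions on the substrate.
    """
    weights = {}
    for (x1, y1), (x2, y2) in itertools.product(input_coords, output_coords):
        w = cppn(x1, y1, x2, y2)
        if abs(w) > threshold:  # weak connections are pruned entirely
            weights[((x1, y1), (x2, y2))] = w
    return weights
```

Because the weights are generated from geometry, the same evolved CPPN can populate a 10x10 or a 100x100 input grid, which is why HyperNEAT scales to large input counts where direct NEAT encoding struggles.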

I can't estimate how fast it will be, but with a good GPU I think you could easily get more speed than with a CPU version.

That said, there is still a lot to do to make it work, so it will take some time.

If you want to share details about your project, I could tell you whether you can apply a neuroevolution approach to it or whether there is a better way.


sebbeutler commented on August 23, 2024

NEATer is not that great; it is missing some optimization strategies.


SheldonCurtiss commented on August 23, 2024

I'm gonna be honest, I'm not too familiar with the optimizers.
On my todo list I have: research OpenAIESOptimizer, SNESOptimizer, GAOptimizer.

How significant would you say these optimizers are, as a percentage?

Also, I'm not super familiar with all the different versions.

It appears NEATer has the capability ES-HyperNEAT does, in terms of being able to 'determine the proper density and position of hidden neurons entirely on its own'.

As for HyperNEAT, I haven't read anything telling me the speed differences or the real-world advantages of it as opposed to NEAT. "HyperNEAT outperforms NEAT on a simple instance of the problem but that its advantage disappears when the problem is complicated" was the finding of one research paper.

This list doesn't have any CUDA-capable frameworks, so I still think NEATer is currently the "fastest implementation with a Python interface layer that wasn't abandoned years ago".

So it's hard to tell exactly, and I have no idea of the speed cost of the OpenAIESOptimizer, but somewhere around 1-4x? 🤔

"I can't estimate how fast it will be but with a good GPU I think you could easily get more speed than a CPU version of it."
I assume that means it doesn't function at all? Fug. Uber seemed like they were doing something interesting for their utilization and claimed 2-4x Which I can totally believe.


sebbeutler commented on August 23, 2024

"On my todo list I have: research OpenAIESOptimizer, SNESOptimizer, GAOptimizer"

Sorry, but I don't know much about those optimizers; I'll take a look at them.

"As for HyperNEAT, I haven't read anything telling me the speed differences or the real-world advantages of it as opposed to NEAT. 'HyperNEAT outperforms NEAT on a simple instance of the problem but that its advantage disappears when the problem is complicated' was the finding of one research paper."

HyperNEAT is more suited than NEAT for problems with a large number of inputs/outputs, and for variable input sizes, for example. It will also perform better on tasks that require a geometric representation.

There is MultiNEAT, which is written in C++ and has Python bindings.

There is also PyTorch NEAT, which should run on the GPU, but I think only the neural network execution runs on the GPU and the evolution is still Python.
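
That matches my understanding of how these hybrids usually work: the whole population's forward passes are stacked into one batched tensor op, so the GPU handles network evaluation while evolution stays on the CPU. A hedged sketch of the idea in plain PyTorch (not PyTorch NEAT's actual API, and it assumes every genome has been decoded to the same fixed topology, which plain NEAT does not guarantee):

```python
import torch

def evaluate_population(weights, inputs):
    """Evaluate one dense layer for every genome in the population at once.

    weights: (pop_size, out_dim, in_dim) tensor, one weight matrix per genome.
    inputs:  (batch, in_dim) evaluation inputs shared by all genomes.
    """
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    w = weights.to(device)
    x = inputs.to(device)
    # A single batched contraction evaluates all genomes on all inputs.
    out = torch.einsum('poi,bi->pbo', w, x)   # (pop_size, batch, out_dim)
    return torch.tanh(out).cpu()
```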

"I assume that means it doesn't function at all yet?"

No. For now I'm finishing the C version (neat_c); then I will implement the GPU part, and finally the Python bindings.

As for the Uber implementation, it seems to be using TensorFlow to evaluate the NNs in parallel while the rest is still Python, but I can't really tell, as I'm not familiar with their optimizer yet. I'll do a benchmark comparison when the time comes.


SheldonCurtiss commented on August 23, 2024

Hm, MultiNEAT looks ancient.

PyTorch NEAT seems to have been created by Uber? Unless you're talking about literal features in PyTorch???
After seeing that other project (by Uber), I think I'd rather not touch anything with Uber's name on it lmao.

Sigh, I just want speed to train my AI.

A problem I've run into with both my neuroevolution projects is an inability to gain traction: say I train for many generations and then randomly swap my gym environment to another configuration; all species end up dying, and it never gets progressively better.
While I could train on a non-randomized environment, I was hoping to train on smaller portions of my data in a randomized way.
Is there a term for this?


SheldonCurtiss commented on August 23, 2024

What would you say is currently the fastest implementation? With NEATer I'm bottlenecked on a single core (Core 0).


sebbeutler commented on August 23, 2024

"While I could train on a non-randomized environment, I was hoping to train on smaller portions of my data in a randomized way. Is there a term for this?"

Yes, it's called batching. You need to be careful about the quality of the data you use, and it needs to match your fitness function.
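
At the fitness-function level it can be as simple as scoring each genome on a fresh random subset every generation. A sketch with hypothetical helpers (`run_episode` and `configs` stand in for your gym setup):

```python
import random

def batched_fitness(genome, configs, batch_size=5):
    """Score a genome on a random minibatch of environment configurations.

    configs is the full pool of environment variants; sampling a fresh
    subset each call keeps evaluation cheap while still exposing the
    population to the randomized settings over many generations.
    """
    batch = random.sample(configs, batch_size)
    # run_episode(genome, cfg) -> float is assumed to return episode reward.
    return sum(run_episode(genome, cfg) for cfg in batch) / batch_size
```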

"What would you say is currently the fastest implementation? With NEATer I'm bottlenecked on a single core (Core 0)."

If you are planning to train your AI on a single CPU core, of course it will take time.
I can't tell you which one is the fastest: I've only used SharpNEAT, which is pretty fast, plus other implementations that were not very performant, which is why I'm trying to make a faster one.
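
On the single-core bottleneck: if you do end up trying neat-python, it ships a ParallelEvaluator that spreads genome evaluation across worker processes, which is usually the easy win before reaching for a GPU. Roughly (the fitness body is a placeholder):

```python
import multiprocessing
import neat

def eval_genome(genome, config):
    # Runs in a worker process; must return this genome's fitness.
    net = neat.nn.FeedForwardNetwork.create(genome, config)
    return some_fitness(net)  # placeholder for your actual evaluation

if __name__ == '__main__':
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         'config-feedforward')
    population = neat.Population(config)
    evaluator = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
    winner = population.run(evaluator.evaluate, 100)  # fitness on all cores
```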


SheldonCurtiss commented on August 23, 2024

I swapped to MultiNEAT. It seems significantly better; it uses all my cores fully.

I have my environment set up so I can try out all 3 (NEAT, HyperNEAT, and ES-HyperNEAT).

I'm still struggling with 'traction', or as you said, batching.
I'm also confused by HyperNEAT: I think I understand most of it, but I'm unsure how to increase the number of input neurons in the HyperNEAT version?

Zzz, I'm trying though. MultiNEAT definitely seems better; that being said, there are a billion params and my problem is complex.


sebbeutler commented on August 23, 2024

You should find here all the information you need about HyperNEAT.

As for the batching, you need to apply transformations to your training set.
For example, if you have a dataset of 1000 elements, split it into batches of 200 elements chosen randomly and train your agents on the 5 batches created. You can also increase the size of your dataset by applying transformations to the data: if your dataset is composed of images, take existing images, add some noise or blur, and add them to the dataset. Basically, the bigger the dataset, the better the agent will adapt to new situations, but with a tiny dataset your training is much faster; if the dataset is too big, the neural network might take forever to train.
You can also add strategies, for example: if your agent gets the correct output for ~90% of the dataset, then for 10 generations create a batch from the ~10% of the data where the agent doesn't score well, and train on it for multiple generations to increase the agent's strength.
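
That last strategy could look something like this (all names hypothetical; `score` is whatever per-example accuracy your fitness already computes):

```python
import random

def next_training_batch(dataset, best_agent, score, generation,
                        batch_size=200, focus_every=10, focus_len=10):
    """Mostly random batches, with periodic phases on hard examples only.

    After every `focus_every` ordinary generations, spend `focus_len`
    generations on the examples the current best agent still gets wrong.
    """
    cycle = focus_every + focus_len
    if generation % cycle >= focus_every:
        hard = [ex for ex in dataset if score(best_agent, ex) < 0.9]
        if hard:
            return random.sample(hard, min(batch_size, len(hard)))
    return random.sample(dataset, min(batch_size, len(dataset)))
```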

