
Comments (8)

diviyank commented on May 14, 2024

Hi,
Thanks for the feedback. This behavior is certainly strange. Are you using a computing cluster? Are you changing the $CUDA_VISIBLE_DEVICES environment variable?

We will investigate this issue. In the meantime, you can always manually override the SETTINGS variable:

import cdt
cdt.SETTINGS.GPU=1
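
As a minimal sketch (not part of CDT itself, only an illustrative workaround), you could derive the setting from what PyTorch reports, in case the automatic detection misbehaves:

import torch
import cdt

# Illustrative workaround only: bypass the automatic GPU detection and
# set the number of GPUs from what PyTorch itself reports.
if torch.cuda.is_available():
    cdt.SETTINGS.GPU = torch.cuda.device_count()
else:
    cdt.SETTINGS.GPU = 0  # fall back to CPU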

We will get back to you if we have more details on the automatic detection issues.
Best,
Diviyan


ArnoVel commented on May 14, 2024

Hello again,

I do not use a cluster, only Windows 10 and CUDA 10.0 (I believe) with a single GPU.
Every Python package was installed using pip (which defaults to Python 3.6's pip on Windows).

By the way, what is the expected training speed for GNN? I find it considerably slower than computer-vision GANs, although increasing the sample size doesn't affect the speed much.

Thanks for all the work,
Arno V.


diviyank commented on May 14, 2024

It seems that the GPUtil package is not fully compatible with Windows. Is the behaviour correct if you manually set the cdt.SETTINGS.GPU variable?

The GNN, and the CGNN to an even greater extent, are quite slow: they retrain a neural network for each new configuration, and 8 times per configuration to average the results (the nruns parameter). The criterion used is the MMD (quadratic complexity w.r.t. the batch size). However, training a single configuration should be quite fast, as the MMD and the neural architecture are optimized (50+ it/s for data with 20 variables and 500 points).
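
For context, here is a minimal sketch of a Gaussian-kernel MMD estimate in PyTorch that makes the quadratic cost in the batch size explicit; it illustrates the statistic only and is not the exact implementation used in CDT:

import torch

def mmd_rbf(x, y, bandwidth=1.0):
    # Biased estimate of the squared MMD with an RBF kernel. All pairwise
    # distances are computed, hence the O(n^2) cost in the batch size n.
    k_xx = torch.exp(-torch.cdist(x, x) ** 2 / (2 * bandwidth ** 2))
    k_yy = torch.exp(-torch.cdist(y, y) ** 2 / (2 * bandwidth ** 2))
    k_xy = torch.exp(-torch.cdist(x, y) ** 2 / (2 * bandwidth ** 2))
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()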

Which version of the CDT are you using? One version had unoptimized torch.utils.data.DataLoader objects, which crippled performance.

Best,
Diviyan


ArnoVel commented on May 14, 2024

I'm using CDT 0.5.0. I am not yet able to test the cdt.SETTINGS.GPU=1 behavior, but will in the near future!
So far, after my tweaks (adding a layer + an additional mapping to augment the MMD kernel), the DataLoader takes 30% of the time of each epoch.
I've thought about just using torch.randperm instead, but the performance was the same.
However, I believe the MMD does not depend on the ordering of the sample, so never shuffling the sample might be an acceptable option.
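
For what it's worth, here is a rough sketch of what I mean by dropping the DataLoader in favour of direct tensor slicing (with torch.randperm when shuffling, or no shuffling at all); this is an illustration under those assumptions, not CDT's actual training loop:

import torch

def iterate_batches(data, batch_size, shuffle=True):
    # Yield mini-batches by slicing the tensor directly, avoiding the
    # per-batch overhead of a DataLoader.
    idx = torch.randperm(len(data)) if shuffle else torch.arange(len(data))
    for start in range(0, len(data), batch_size):
        yield data[idx[start:start + batch_size]]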

PS: I have a partial understanding of what p-hacking is, and inside GNN, the TTestCriterion.loop(AB, BA) seems like p-hacking to me (repeating experiments until significance is reached). I could obviously be wrong though.
PPS: I sent you an email (LRI address) asking for some ideas. Feel free to reply however you like!


ArnoVel commented on May 14, 2024

To report another occasion when this behavior occurred: after experiments using CUDA through cdt, I decided to shut down all CDT+CUDA-related kernels in Jupyter.

I then went on to run a Python script involving CDT and CUDA.
The script could not detect CUDA; however, setting SETTINGS.GPU=1 fixed it.
Without this line, I could not make it work with CUDA and had to reboot the PC.


diviyank commented on May 14, 2024

I do understand your concern about the TTestCriterion; however, we are not selecting samples to reach significance, we are only adding more runs. I do understand the confusion, though, and we will consider removing it.
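
To illustrate the distinction, here is a rough sketch of the general idea (not the actual TTestCriterion code; the run_ab/run_ba callables and the parameters are hypothetical): no run is ever discarded, we only keep adding independent runs until the paired comparison of the two directions is conclusive or a run budget is exhausted.

from scipy.stats import ttest_rel

def compare_directions(run_ab, run_ba, min_runs=4, max_runs=16, alpha=0.01):
    # Accumulate independent runs of both causal directions; stop once the
    # paired t-test on the accumulated scores is significant or the budget
    # of max_runs is spent. Every run is kept in the final scores.
    scores_ab, scores_ba = [], []
    for i in range(max_runs):
        scores_ab.append(run_ab())
        scores_ba.append(run_ba())
        if i + 1 >= min_runs:
            _, pvalue = ttest_rel(scores_ab, scores_ba)
            if pvalue < alpha:
                break
    return scores_ab, scores_ba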
I think you will have to add cdt.SETTINGS.GPU=1 to every script, because of the incompatibility of GPUtil with Windows...
I will check the (C)GNN's performance. What data size are you using, and at how many epochs per second is the model running?
Could you give me your hardware configuration?
Best,
Diviyan


ArnoVel commented on May 14, 2024

I will only be able to dive into the specifics of my lab's computer around early September 😅
For the data size, what I usually do (because of memory limits) is not allow more than 1000 samples for each pair.
Some pairs do reach 1000, and I generally set the batch size to either the full dataset or half of it. The batch size then depends on how many data points there are for a given pair.


diviyank commented on May 14, 2024

We removed the TTest criterion for more consistency in the code. The issue comes from the GPUtil library, which is not fully compatible with Windows. When using Windows, please set the GPU number manually:

cdt.SETTINGS.GPU=1

I will be closing this issue; don't hesitate to reopen it if the workaround doesn't work.
Best,
Diviyan

