Comments (6)
Hmm... it sounds like you could be running exactly the same setup as my main development machine, which is a little distressing. Do the examples from the CUDA SDK (from NVIDIA) work? What about the following in ghci?
ghci> :m +Foreign.CUDA.Driver
ghci> initialise []
ghci> props =<< device 0
from accelerate.
also, which version of ghc?
Good to know that accelerate at least "should" work on my platform! I
am running ghc-7.0.3.
I am able to run the example programs from the CUDA SDK (in
/Developer/GPU Computing/C/bin/darwin/release) without any major
problems. Some of the programs give an out-of-memory error, but this
is fixable by turning down my screen resolution.
ghci> :m +Foreign.CUDA.Driver
ghci> initialise []
ghci> props =<< device 0
Here's my output from those commands:
GHCi, version 7.0.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :m +Foreign.CUDA.Driver
Prelude Foreign.CUDA.Driver> initialise []
Loading package extensible-exceptions-0.1.1.2 ... linking ... done.
Loading package bytestring-0.9.1.10 ... linking ... done.
Loading package cuda-0.3.2.2 ... linking ... done.
Prelude Foreign.CUDA.Driver> props =<< device 0
DeviceProperties
  { deviceName = "GeForce GT 120"
  , computeCapability = 1.1
  , totalGlobalMem = 268107776
  , totalConstMem = 65536
  , sharedMemPerBlock = 16384
  , regsPerBlock = 8192
  , warpSize = 32
  , maxThreadsPerBlock = 512
  , maxBlockSize = (512,512,64)
  , maxGridSize = (65535,65535,1)
  , maxTextureDim1D = 8192
  , maxTextureDim2D = (65536,32768)
  , maxTextureDim3D = (2048,2048,2048)
  , clockRate = 1250000
  , multiProcessorCount = 4
  , memPitch = 2147483647
  , textureAlignment = 256
  , computeMode = Default
  , deviceOverlap = True
  , concurrentKernels = False
  , eccEnabled = False
  , kernelExecTimeoutEnabled = True
  , integrated = False
  , canMapHostMemory = True
  }
This seems to match what I get from running the SDK's deviceQuery program.
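For what it's worth, the reported figures can be sanity-checked with a couple of plain Haskell helpers. This is just a sketch for illustration; the helper names `bytesToMiB` and `meetsCapability` are hypothetical, not part of the cuda package:

```haskell
-- Hypothetical helpers for sanity-checking the DeviceProperties output above.
-- Plain Haskell; no dependency on the cuda package.

-- Convert a byte count (e.g. totalGlobalMem) to whole mebibytes.
bytesToMiB :: Integer -> Integer
bytesToMiB b = b `div` (1024 * 1024)

-- Check a (major, minor) compute capability against a required minimum,
-- using lexicographic tuple ordering.
meetsCapability :: (Int, Int) -> (Int, Int) -> Bool
meetsCapability required actual = actual >= required

main :: IO ()
main = do
  -- totalGlobalMem from the output above: 268107776 bytes, i.e. a 256 MB card
  print (bytesToMiB 268107776)        -- prints 255
  -- the GT 120 reports compute capability 1.1
  print (meetsCapability (1,1) (1,1)) -- prints True
  print (meetsCapability (2,0) (1,1)) -- prints False
```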
Any more ideas?
Thanks,
-Chris
Reply to this email directly or view it on GitHub:
https://github.com/mchakravarty/accelerate/issues/26#issuecomment-1778496
Sorry for the late reply, Chris. I should have mentioned earlier that I'm currently away and thus not working on Accelerate right now.
At any rate, it seems that the Haskell/CUDA binding layer is initialising the hardware okay, but that the reference to the execution context is being lost somewhere in Accelerate. If you want to jump into the code, I'm happy to point you in the right direction, but otherwise it sounds like it will be difficult to diagnose the problem. I'll have to add some sort of verbose logging mechanism to Accelerate to help pin down bug reports. Hmm...
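As a rough idea of what such a verbosity switch could look like (this is only a sketch using `Debug.Trace` from base, not Accelerate's actual mechanism, and `traceWhen` is a hypothetical name):

```haskell
import Debug.Trace (trace)

-- A verbosity-gated trace: emits the message to stderr when enabled,
-- and is the identity on the wrapped value either way.
traceWhen :: Bool -> String -> a -> a
traceWhen True  msg = trace msg
traceWhen False _   = id

main :: IO ()
main = do
  let x = traceWhen True  "initialising CUDA context"   (1 + 1 :: Int)
      y = traceWhen False "this message is suppressed"  (2 + 2 :: Int)
  print (x + y)  -- prints 6; the first trace message goes to stderr
```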
I was able to fix the problem by upgrading to the latest accelerate (0.9.0.0) from GitHub, using the cuda-0.4 drivers and toolkit, and (probably most importantly) restarting my machine.
Thanks for letting us know that this works now. It did seem strange given the similarity of our machines.