Running Deep Learning extension with CUDA 10.2?

User: "jacobcybulski"
New Altair Community Member
Updated by Jocelyn
I can see that the new version of the Deep Learning extension requires CUDA 10.0. However, the new TensorFlow, which I also use on my system, requires CUDA 10.1+ and also runs with the newest release, CUDA 10.2. The release notes for the extension suggest contacting RM for assistance. As it stands, the GPU/CPU switch in the preferences is complaining about my CUDA version. I imagine I may need to set up a multi-CUDA system on my Ubuntu 18.04? Or is there some easy tweak to run the extension with the newer version of CUDA?
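For reference, a rough sketch of checking which toolkits are present on Ubuntu (assuming the default /usr/local install locations):

    # List installed CUDA toolkits (default install location)
    ls -d /usr/local/cuda*
    # Toolkit version currently on PATH
    nvcc --version
    # Maximum CUDA version supported by the installed driver
    nvidia-smi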
    User: "jczogalla"
    New Altair Community Member
    Accepted Answer

    We currently rely on CUDA 10.0, so a multi-CUDA setup might be a possibility.
    We are also currently working on the next version, which would rely on 10.2, but the release date is not clear yet.

    Cheers
    Jan
    User: "pschlunder"
    New Altair Community Member
    Accepted Answer
    Updated by pschlunder
    You can find a version built against CUDA 10.2 and cuDNN 7.6 here:

    (The link is only valid until May 14th; if you need the extension and the link has expired, please point it out and we'll update it.)

    You can place the downloaded jar under your .RapidMiner/extensions folder. Once we release 0.9.4, it will automatically take precedence since it's a newer version.
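    As a rough sketch (the jar filename here is only illustrative; use whatever file you actually downloaded):

        # Copy the downloaded build into RapidMiner's local extensions folder
        mkdir -p ~/.RapidMiner/extensions
        cp ~/Downloads/rmx_deeplearning.jar ~/.RapidMiner/extensions/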

    Another option would be to also install 10.0 and set the CUDA environment variable to the 10.0 version for the environment you're using RapidMiner in.
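    As a rough sketch, assuming a side-by-side install under /usr/local/cuda-10.0 (the exact variable name the extension reads may differ; CUDA_HOME, PATH and LD_LIBRARY_PATH are the usual ones):

        # Point the shell that launches RapidMiner at the CUDA 10.0 install
        export CUDA_HOME=/usr/local/cuda-10.0
        export PATH="$CUDA_HOME/bin:$PATH"
        export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"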

    Hope this helps,
    Philipp


    User: "jacobcybulski"
    New Altair Community Member
    OP
    Accepted Answer
    @jczogalla I have got a workaround! When you export LD_LIBRARY_PATH and a PATH entry pointing to /usr/local/cuda within Rapid-Miner.sh, it miraculously becomes possible to switch from CPU to GPU, and the Deep Learning operators actually execute on the GPU!
    I tried setting these environment variables in /etc/profile and /etc/environment, but it made no difference. Perhaps there is some global setting for the JVM?
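    Roughly, the lines added near the top of Rapid-Miner.sh look like this (assuming the toolkit is reachable via the /usr/local/cuda symlink):

        # Make the CUDA binaries and libraries visible to the RapidMiner/JVM process
        export PATH="/usr/local/cuda/bin:$PATH"
        export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"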