Does ultraFluidX use NVLink?

Scott Mosier · New Altair Community Member
edited May 2022 in Community Q&A

I'm new to ultraFluidX and am running it on two identical GPUs connected via NVLink. However, nvidia-smi shows no traffic over the NVLink, PCIe traffic looks high, and the GPUs aren't maxed out. This suggests to me that PCIe bandwidth is the limiting factor and that NVLink isn't being utilized. Is this expected, or am I not launching my simulation correctly?

Thanks!


~$ nvidia-smi nvlink -gt d
GPU 0: NVIDIA RTX A6000 (UUID: GPU-cbe0f45d-c539-3abb-8877-545adbd4bf68)
         Link 0: Data Tx: 0 KiB
         Link 0: Data Rx: 0 KiB
         Link 1: Data Tx: 0 KiB
         Link 1: Data Rx: 0 KiB
         Link 2: Data Tx: 0 KiB
         Link 2: Data Rx: 0 KiB
         Link 3: Data Tx: 0 KiB
         Link 3: Data Rx: 0 KiB
GPU 1: NVIDIA RTX A6000 (UUID: GPU-6af0c1c8-d55d-6d8d-15ac-7cb0cd18be50)
         Link 0: Data Tx: 0 KiB
         Link 0: Data Rx: 0 KiB
         Link 1: Data Tx: 0 KiB
         Link 1: Data Rx: 0 KiB
         Link 2: Data Tx: 0 KiB
         Link 2: Data Rx: 0 KiB
         Link 3: Data Tx: 0 KiB
         Link 3: Data Rx: 0 KiB
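
For reference, a minimal CUDA sketch along the lines below can confirm whether the driver exposes direct peer-to-peer access between the two cards at all (the path that would normally ride NVLink). This is a generic diagnostic, not anything that ships with ultraFluidX, and the file name is arbitrary:

// p2p_check.cu -- generic check: can each GPU address the other's
// memory directly? Build with: nvcc -o p2p_check p2p_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("%d CUDA device(s) found\n", n);

    // Query peer access in both directions between every device pair.
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   a, b, canAccess ? "YES" : "no");
        }
    }
    return 0;
}

If this reports "no" in both directions, device-to-device data would have to be staged through host memory over PCIe, which would match the traffic pattern above.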

Answers

  • Scott Mosier · New Altair Community Member
    edited May 2022

    First, some context: I'm on Windows, running ultraFluidX under the Windows Subsystem for Linux (WSL), and it has worked fine this way so far. But I discovered that if I configure the Windows NVIDIA driver to "enable SLI", I start to see NVLink traffic, the simulation runs with maxed-out GPUs, and PCIe traffic drops sharply. However, the simulation then reliably fails after roughly 45 iterations. If I go back to "SLI disabled" in the NVIDIA driver, the exact same simulation works fine, but is apparently PCIe-limited again.
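
    To narrow down whether the failure after ~45 iterations comes from ultraFluidX itself or from the peer-to-peer path that the SLI setting enables, a standalone stress test of peer copies can help. The sketch below is a hypothetical repro attempt, assuming plain CUDA runtime P2P is the mechanism involved; the buffer size and iteration count are arbitrary:

    // p2p_stress.cu -- hypothetical repro: enable peer access between
    // GPU 0 and GPU 1, then bounce a buffer between them while watching
    // "nvidia-smi nvlink -gt d" in another terminal.
    // Build with: nvcc -o p2p_stress p2p_stress.cu
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define CHECK(call)                                              \
        do {                                                         \
            cudaError_t e = (call);                                  \
            if (e != cudaSuccess) {                                  \
                fprintf(stderr, "CUDA error: %s (line %d)\n",        \
                        cudaGetErrorString(e), __LINE__);            \
                exit(1);                                             \
            }                                                        \
        } while (0)

    int main() {
        const size_t bytes = 256ull << 20;  // 256 MiB per transfer (arbitrary)
        void *buf0, *buf1;

        CHECK(cudaSetDevice(0));
        CHECK(cudaDeviceEnablePeerAccess(1, 0));  // let GPU 0 access GPU 1
        CHECK(cudaMalloc(&buf0, bytes));

        CHECK(cudaSetDevice(1));
        CHECK(cudaDeviceEnablePeerAccess(0, 0));  // let GPU 1 access GPU 0
        CHECK(cudaMalloc(&buf1, bytes));

        // Bounce the buffer back and forth; a driver-level P2P problem
        // should surface here without ultraFluidX in the picture.
        for (int iter = 0; iter < 1000; ++iter) {
            CHECK(cudaMemcpyPeer(buf1, 1, buf0, 0, bytes));
            CHECK(cudaMemcpyPeer(buf0, 0, buf1, 1, bytes));
        }
        CHECK(cudaDeviceSynchronize());
        printf("completed 1000 round trips without error\n");
        return 0;
    }

    If this runs clean with SLI enabled, the crash is more likely inside ultraFluidX; if it also errors out after a while, that points at the WSL/driver P2P path rather than the solver.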