Hello,
I have begun building some "larger" models (>300 000 solid elements) for Radioss and am trying to figure out the most efficient way to reduce solver run time.
I found the PDF from the Radioss User Guide in the question Problems with Radioss in HPC Cluster useful in explaining scenarios for "-nt" and "-np", and that approach has worked fine for smaller models (<~50 000 solid elements). However, I have heard that taking advantage of the GPU can help reduce solver run time (see the PDF detailing Radioss speedup using NVIDIA GPUs posted to the question Solver Options). Personally, though, I find the documentation on enabling GPU usage in Radioss sparse, confusing, or simply non-existent...
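For context, here is how I have been launching the solver so far. The install path, version number, and model name below are just placeholders from my own setup, so they will differ on other machines:

"C:\Program Files\Altair\2022\hwsolvers\scripts\radioss.bat" MODEL_0000.rad -np 8 -nt 4

where (as I understand it) "-np" sets the number of MPI processes (domain decomposition) and "-nt" sets the number of OpenMP threads per process.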
From the question Running RADIOSS with GPU from ~5 years ago (which also links to the aforementioned PDF), it seems GPU execution could be enabled with a "-gpu" option, though only for certain graphics cards (and possibly only on Linux?). Then, in [Help] Radioss with Nvidia CUDA 10 from ~3 years ago, it is stated that GPU computation is no longer available/supported and that users should instead rely on the "-nt" and "-np" options.
Is GPU acceleration still unavailable for speeding up Radioss? If so, is using "-nt" and "-np" to put more of the CPU to work really the only way to reduce solver run time? If not, what other options are there?
Any additional information or advice on how best to reduce solve time, given my system setup below, would be much appreciated.
Thank you in advance for your time.
For reference, here are my system details:
> OS: Windows
> CPU: 2x AMD EPYC 7351 16-core processors (2 sockets, 32 physical cores, 64 logical processors)
> GPU: NVIDIA Quadro P2000
> RAM: 128 GB
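If it helps frame any advice: my understanding is that "-np" times "-nt" should not exceed the number of physical cores, so on this machine that would mean combinations like "-np 8 -nt 4" or "-np 2 -nt 16" (8 x 4 = 2 x 16 = 32 physical cores), leaving the 64 logical processors from SMT out of it. Please correct me if I have that wrong.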