Hi EDEM team and everyone,
I have just realized that the way the time step is selected for a simulation has a strong effect on calculation speed.
The case I tested uses the CUDA-based GPU solver with 10 million particles on an RTX A6000 GPU. The calculation speed is at least 4 times higher when I use a fixed time step (I fixed it at 20%) instead of "auto time step". The GPU CUDA utilization shown in Task Manager is about 60% for the "auto time step" case and almost 99% when the fixed time step is used.
I hope this helps!