
EDEM-Hyperstudy connector

User: "Matthias Vink"
Altair Community Member
Updated by Matthias Vink

Can the number of CPUs be specified as an input variable for HyperStudy when using the EDEM-HyperStudy connector? I want to use multi-execution, and I am noticing that not all of my CPU capacity is used even when executing up to 10 runs at once (in the setup I specified 3 cores to be used).

What settings do I need to change to ensure HyperStudy uses all available resources?

regards,

Matthias

    User: "Stefan Pantaleev_21979"
    Altair Employee
    Updated by Stefan Pantaleev_21979

    Hi Matthias,

    Unfortunately it is not possible to parameterize the number of CPU threads for the EDEM solver through HyperStudy at the moment. However, you should be able to use all threads if you specify enough threads per run in combination with enough runs. In most cases HyperStudy will start a new run whenever it can, to keep the number of parallel runs at the specified number. An important exception is optimization studies, where batches of jobs are run sequentially and the batch size is limited by the optimization algorithm settings. It is likely that this is what you are observing.

    If you are using GRSM method there are settings for the initial batch size and for all subsequent batches:

    [Image: GRSM settings for the initial batch size and for subsequent batches]

    If you are using a genetic algorithm there is a setting for the population size per iteration:

    [Image: genetic algorithm setting for the population size per iteration]

    Because optimization algorithms are sequential, these settings limit the maximum number of parallel simulations that can be run, even if a larger number is specified in the multi-execution setting. To use all your threads, you can increase the values of the above settings for the optimization algorithm, or increase the number of threads per run. Keep in mind, however, that each batch needs to complete before the next one is started, so if some runs are faster than others you still might not have full utilization at times.
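    As a rough illustration of that last point (a minimal sketch with made-up run times, not HyperStudy's actual scheduler), batch-synchronous execution means cores assigned to the faster runs sit idle while the slowest run in the batch finishes:

    ```python
    # Hypothetical example: in a batch-synchronous scheduler, the batch ends
    # only when its slowest run ends, so faster runs leave their cores idle.

    def batch_utilization(run_times, threads_per_run):
        """Fraction of allocated core-time spent computing during one batch."""
        wall_time = max(run_times)                 # batch finishes with the slowest run
        busy = sum(run_times) * threads_per_run    # core-seconds of useful work
        allocated = wall_time * threads_per_run * len(run_times)
        return busy / allocated

    # A batch of 4 parallel runs, 3 threads each, with uneven run times (minutes):
    print(batch_utilization([100, 90, 60, 50], threads_per_run=3))  # 0.75
    ```

    With perfectly even run times the ratio is 1.0; the more the run times spread out, the more core-time is wasted waiting for the batch to close.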

    If you would like complete utilization in all cases, you can first run a DoE with a large number of points and then either feed it to an optimization algorithm as an inclusion matrix in a second step, or fit a model to the DoE results and run the optimization on that model.

    Best regards,

    Stefan