How to Submit an EDEM Job Using the Standard Application Definition
====================================================================
Step 1: Log in to the Access job submission web portal and select the EDEM application icon.
The following image shows this step:
The following image shows the form loaded after selecting the application definition:
Step 2: Drag and drop the input file onto the Input file (.dem) form field, and provide the required supporting input files as a zip archive in the Include files form field.
| Field Name | Description |
| --- | --- |
| Version | EDEM application version used to run the job. |
| Job Name | Job name. Default is the input file name without its extension. |
| Select Parallelism Type | Explained below. |
What is the input form field “Select Parallelism Type”?
- SERIAL – Runs the job on a single CPU core. Ideal for a one-node, one-core job.
- SMP – Runs the job on more than one CPU core, up to the number of cores in a single compute node. Ideal for SMP-capable applications running on one node.
- MPI – Runs the job on CPU cores from more than one node using MPI communication. Ideal for multi-node jobs.
- Hybrid – Runs the job using a combination of SMP and MPI. Total cores = (number of MPI ranks, which equals the number of nodes) × (number of cores, usually all cores, from each node). A small sketch of this calculation follows this list.
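As a hedged illustration of the hybrid core-count formula above (the function and variable names are hypothetical, not part of the portal):

```python
# Illustrative only: total cores for a hybrid (MPI + SMP) run, as described above.
# total cores = MPI ranks (one per node) * cores used on each node.
def total_cores(num_nodes: int, cores_per_node: int) -> int:
    mpi_ranks = num_nodes              # one MPI rank per node
    return mpi_ranks * cores_per_node  # threads fill the cores within each node

# Example: 4 nodes with 32 cores each -> enter 128 in the "Number of Cores" field.
print(total_cores(4, 32))  # 128
```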
Drag and drop the input file (.dem) used to run the job/workload, along with a zip file containing the required include files.
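If you want to prepare that zip of include files up front, here is a minimal sketch using Python's standard zipfile module (the file names below are placeholders, not files required by EDEM):

```python
# Minimal sketch: bundle the supporting input files into a zip archive for the
# "Include files" field. The file names here are placeholders only.
import zipfile

include_files = ["geometry.stl", "particles.csv"]  # hypothetical supporting inputs

with zipfile.ZipFile("includes.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for path in include_files:
        zf.write(path)  # add each required file to the archive
```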
| Field Name | Description |
| --- | --- |
| Number of Cores | Total number of cores required for the job. |
| Total Memory Required (in-core) for Simulation (GB) | Total amount of memory required for the job. |
| Select the input file to be used with the simulation | Primary input file for the job. |
| Run Type | Throughput – if not all cores of a node are allocated to this job, the remaining cores can be allocated to other jobs. |
| Output Directory | Directory into which all job files are copied back. Default as per site-specific configuration. |
Step 3: Click the Submit button to submit the job.
More details on all the form fields are given below.
The following image shows the form loaded after selecting the application definition:
The following input form fields are visible by default:
| Field Name | Description | Default |
| --- | --- | --- |
| Version | Select the version of the application to use. | Configurable / usually the latest version |
| Job Name | Name of the job. Note: special characters and spaces are not allowed in the job name. | Input file name without extension |
| Select Parallelism Type | Select the parallelism type: 1. SERIAL: runs the job with a single core/CPU. 2. SMP: runs the job with the CPUs available within one node of the cluster; uses threads. 3. MPI: runs the job with CPUs available across multiple nodes; uses MPI processes. 4. HYBRID: runs the job with a combination of MPI processes and threads; generally faster and uses an optimal memory footprint. | SERIAL |
| Number of Cores | Select the total number of CPUs/cores to be used. | 1 |
| Total Memory Required (in-core) for Simulation (GB) | Select the total memory required for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real memory need increases the chance of the simulation completing successfully. Note: this is NOT the same as memory per core (a small worked example follows this table). | User-editable, with a reasonable default based on the selected number of cores |
| Input file | Select the input file (.dem) to be used with the simulation. Note: special characters and spaces are not allowed in the file name. |  |
| Run Type | Selecting Performance mode ensures that your job runs on an exclusive set of nodes and can be expected to finish faster; on a busy cluster this may lead to longer wait times. Selecting Throughput mode causes the scheduler to find the required resources even if they are scattered across several nodes in the cluster; this increases the possibility of your job starting sooner than in Performance mode, but your simulation may run longer than usual. | Throughput |
| Output directory | Specify the directory where the result files will be stored after the job is complete. | User-editable reasonable default: /stage/username/jobname_timestamp |
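Because the memory field asks for the total in-core memory rather than a per-core value, here is a small hedged sketch of the conversion, with purely illustrative numbers:

```python
# Hedged sketch: the form asks for TOTAL in-core memory, not memory per core.
# The numbers below are illustrative only.
cores = 16
memory_per_core_gb = 2.0                       # a per-core estimate you might have
total_memory_gb = cores * memory_per_core_gb   # value to enter in the form

print(f"Enter {total_memory_gb:.0f} GB as the total memory required")  # 32 GB
```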
The following image shows the form when the Show All option is enabled:
With Show All enabled, additional form fields become visible beyond the defaults. The following are these additional form fields:
| Field Name | Description | Default |
| --- | --- | --- |
| Queue | Select the queue to which the job must be submitted. |  |
| Number of GPUs | Select the total number of GPUs to be used. | 0 |
| Total Out-of-Core Memory Required (in GB) | Select the total out-of-core memory (local scratch disk space) required for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real out-of-core memory need increases the chance of the simulation completing successfully. Note: this is NOT the same as memory per core. | User-editable, with a reasonable default based on the selected number of cores |
| Use Single Precision | Use the executable compiled with single precision. Generally useful for larger models. Expect the job to be slower compared with this option disabled. | False |
| Include Files | Select the include files to be used with the simulation. Generally, the include files are added automatically, but this is only an assist feature; please ensure all the necessary include files are added and available before submitting the job. | --- |
| Run In Place | Check this box if you want to run your job in the same directory where your input file is located. This avoids expensive file copies but may impact the performance of the simulation. | User-selectable reasonable default: False |
| Write Out | Choose the write-out interval. | 1e-05 |
| Time Step | Select the time step. Set to 0 to enable auto time step. | 1.0e-06 |
| Runtime | Select the total run time of the job. | 1 |
| Cell Size | Select the grid cell size (R min). | 2 |
| Enable Dynamic | Select 0 if you do not want to enable the Dynamic Domain, or 1 if you do. | 0 |
| Dynamic Domain Check Interval | Set the dynamic domain check interval. | 0.1 s |
| Dynamic Domain Particle Displacement | Set the dynamic domain particle displacement. | 10% of R min |
| Schedule After | Select this option to schedule your job to run at a specific time in the future. Note that the job becomes eligible to run after the selected time; actual start times may differ based on the resources available on the cluster. | --- |
| Kill Job After | Specify the estimated time that your job is expected to run on the cluster. Providing a good estimate increases the chances of your job starting sooner. Note: the scheduler will kill the job if the actual wall time exceeds the estimated time specified. | --- |
| Dependency type | Specify the dependency type: afterok – start the current job only if the specified dependent job is successful; afterany – start the current job after the specified dependent job irrespective of its exit status (see the sketch after this table). | afterok |
| Dependent on Job | Specify the job ID of a previously submitted job after which the current submission is expected to run. | --- |
| Additional Options | Specify any valid command-line options that you need to pass to the application. They will be passed verbatim to the application. | --- |
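A rough, scheduler-agnostic sketch of the two dependency types described above (the helper function and the status strings are hypothetical, not part of the portal or of any particular scheduler API):

```python
# Hedged, scheduler-agnostic illustration of the afterok / afterany semantics.
# The status strings and this helper are hypothetical, for illustration only.
def dependent_job_eligible(dependency_type: str, depend_job_status: str) -> bool:
    """Return True if the current job may start, given the depend job's outcome."""
    if dependency_type == "afterok":
        # Start only if the dependent job finished successfully.
        return depend_job_status == "SUCCESS"
    if dependency_type == "afterany":
        # Start once the dependent job has finished, regardless of exit status.
        return depend_job_status in ("SUCCESS", "FAILED")
    raise ValueError(f"unknown dependency type: {dependency_type}")

print(dependent_job_eligible("afterok", "FAILED"))   # False
print(dependent_job_eligible("afterany", "FAILED"))  # True
```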