How to submit an AcuSolve job by using a Standard Application Definition
===========================================================================
- Step 1: Log in to the Access job submission web portal and select the AcuSolve application icon.
- The following image shows this step:
- The following image shows the form loaded after selecting the application definition:
- Step 2: Drag and drop the input file into the Input file (.inp or .zip) form field, and mesh.zip (which contains mesh.dir) into the Include files form field.
| Field Name | Description |
| --- | --- |
| Version | AcuSolve application version used to run the job. |
| Job Name | Job name. Defaults to the input file name without its extension. |
| Select Parallelism Type | Explained below. |
What is the input form field “Select Parallelism Type”?
- SERIAL – Run the job on a single CPU core. Ideal for a one-node, one-core job.
- SMP – Run the job on more than one CPU core, up to the number of CPU cores in one compute node. Ideal for SMP-capable applications running on a single node.
- MPI – Run the job on CPU cores from more than one node using MPI communication. Ideal for multi-node jobs.
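As a rough illustration of how these choices might translate into a scheduler request, the sketch below builds a PBS Professional-style select statement for each parallelism type. The function, the core counts, and the assumption that the portal generates a PBS select chunk behind the scenes are illustrative only; the actual request produced by the portal is site-specific.

```python
# Hypothetical sketch: mapping the parallelism selection to a PBS Professional
# resource request. The request the portal actually generates may differ.

def pbs_select(parallelism: str, total_cores: int, cores_per_node: int) -> str:
    """Return a PBS -l select chunk for the chosen parallelism type."""
    if parallelism == "SERIAL":
        # One node, one core.
        return "select=1:ncpus=1"
    if parallelism == "SMP":
        # All cores on a single node; capped at the cores available per node.
        return f"select=1:ncpus={min(total_cores, cores_per_node)}"
    if parallelism == "MPI":
        # Spread MPI ranks across as many nodes as needed.
        nodes = -(-total_cores // cores_per_node)  # ceiling division
        per_node = min(total_cores, cores_per_node)
        return f"select={nodes}:ncpus={per_node}:mpiprocs={per_node}"
    raise ValueError(f"unknown parallelism type: {parallelism}")

print(pbs_select("MPI", total_cores=32, cores_per_node=16))
# -> select=2:ncpus=16:mpiprocs=16
```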
- Drag and drop the .inp (input) file to run the job/workload:

| Field Name | Description |
| --- | --- |
| Number of Cores | Select the total number of cores required for the job. |
| Total Memory Required (in core) for Simulation (GB) | Total amount of memory required for the job (see the sketch after this table). |
| Total Out of Core Memory Required (in GB) | Total scratch disk space required for the job. |
| Input file (.inp or .zip) | Primary input file for the job. |
| Run Type | Performance – if not all cores of a node are allocated to this job, the remaining cores are not allocated to other jobs. Throughput – if not all cores of a node are allocated to this job, the remaining cores may be allocated to other jobs. |
| Output Directory | Directory into which all job files are copied back. Default is set per site-specific configuration. |
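The memory fields above typically end up as scheduler resource limits. The sketch below is only an illustration of how they could be folded into a PBS Professional-style chunk; `scratch` is a hypothetical, site-defined custom resource name used here for demonstration, and the request the portal actually submits depends on site configuration.

```python
# Hypothetical sketch: folding the memory form fields into a PBS-style request.
# "scratch" is an assumed site-defined custom resource, not a standard one.

def memory_request(ncpus: int, mem_gb: int, scratch_gb: int) -> str:
    """Build a single-chunk select statement carrying memory and scratch needs."""
    chunk = f"select=1:ncpus={ncpus}:mem={mem_gb}gb"
    if scratch_gb > 0:
        chunk += f":scratch={scratch_gb}gb"  # assumed custom resource name
    return chunk

print(memory_request(ncpus=8, mem_gb=32, scratch_gb=100))
# -> select=1:ncpus=8:mem=32gb:scratch=100gb
```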
- Step 3: Click the Submit button to submit the job.
The following form fields can be modified as per the job requirements.
The following image shows the form loaded after selecting the application definition:
- Following are the default visible input form fields:
| Field Name | Description | Default |
| --- | --- | --- |
| Queue | Select the queue to which the job must be submitted. | |
| Version | Select the version of the application to use. | Configurable / usually the latest version |
| Job Name | Name of the job. Note: Special characters and spaces are not allowed in the job name. | Input file name without extension |
| Select Parallelism Type | Select the parallelism type: 1. SERIAL: runs the job with a single core/CPU. 2. SMP: runs the job with CPUs available within one node of the cluster; uses threads. 3. MPI: runs the job with CPUs available across multiple nodes; uses MPI processes. | SERIAL |
| Number of Cores | Select the total number of CPUs/cores to be used. | 1 |
| Total Memory Required (in core) for Simulation (GB) | Select the total memory required for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real memory need increases the chance that the simulation completes successfully. Note: This is NOT the same as memory per core. | User editable, with a reasonable default value based on the selected number of cores. |
| Total Out of Core Memory Required (in GB) | Select the total out-of-core memory (local scratch disk space) required for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real out-of-core memory need increases the chance that the simulation completes successfully. Note: This is NOT the same as memory per core. | User editable, with a reasonable default value based on the selected number of cores. |
| Input file (.inp or .zip) | Select the input file to be used with the simulation, either as a .inp file or a .zip file (which must contain the .inp file). Note: Special characters and spaces are not allowed in the file name. | |
| Run Type | Selecting Performance mode ensures that your job runs on an exclusive set of nodes, so you should expect your job to finish faster; on a busy cluster, this may lead to higher wait times. Selecting Throughput mode causes the scheduler to find the required resources even if they are scattered across several nodes in the cluster, which may increase the possibility of your job starting sooner than in Performance mode, but your simulation may run longer than usual (see the sketch after this table). | Performance |
| Output directory | Specify the directory where the result files will be stored after the job completes. | User editable, reasonable default: /stage/username/jobname_timestamp |
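To make the Performance/Throughput trade-off concrete, the sketch below shows one plausible mapping of Run Type onto a PBS Professional placement directive (`place=excl` for exclusive nodes, `place=free` for shared packing). This is an assumption for illustration; the portal's actual behavior is site-configurable.

```python
# Hypothetical sketch: Run Type expressed as a PBS Professional placement
# directive. Performance keeps whole nodes exclusive; Throughput lets leftover
# cores be used by other jobs.

def placement(run_type: str) -> str:
    if run_type == "Performance":
        return "place=excl"   # whole nodes reserved for this job
    if run_type == "Throughput":
        return "place=free"   # remaining cores stay available to other jobs
    raise ValueError(f"unknown run type: {run_type}")

# Example qsub fragment, assembled for illustration only:
print(f"qsub -l select=2:ncpus=16:mpiprocs=16 -l {placement('Performance')} job.sh")
```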
- The following image shows the form when the Show all parameters option is enabled:
Apart from the default fields, additional form fields are visible here. Following are the additional form fields other than the default ones:
| Field Name | Description | Default |
| --- | --- | --- |
| Queue | Select the queue to which the job must be submitted. | |
| Number of threads per MPI | The selected thread count is used per MPI process; the total number of CPU cores selected above will not be exceeded. Use more threads to minimize the memory footprint. | User editable, reasonable default: 1 |
| Run In Place | Check this box if you want to run your job in the directory where your input file is located. This prevents expensive file copies but may impact the performance of the simulation. | User selectable, reasonable default: False |
| Include Files | Select the include files to be used with the simulation. Generally, include files are added automatically, but this is only an assist feature. Please ensure all the necessary include files are added and available before submitting the job. | --- |
| Schedule After | Select this option to schedule your job to run at a specific time in the future. Note that the job becomes eligible to run only after the selected time; actual start times may differ based on the resources available on the cluster (see the sketch after this table). | --- |
| Kill Job After | Specify the estimated time that your job is expected to run on the cluster. Providing a good estimate increases the chances of your job starting sooner. Note: The scheduler kills the job if the actual wall time exceeds the estimated time specified. | --- |
| Dependency type | Specify the dependency type: afterok starts the current job only if the specified depend job completes successfully; afterany starts the current job after the specified depend job regardless of its exit status. | afterok |
| Depend on Job | Specify the job ID of a previously submitted job after which the current submission is expected to run. | --- |
| Additional Options | Specify any valid command line options that you need to pass to the application. The options are passed verbatim to the application. | --- |
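For context on how Schedule After, Kill Job After, and the dependency fields relate to the underlying scheduler, the sketch below composes the equivalent PBS Professional qsub options (`-a` for deferred eligibility, `-l walltime=` for the kill limit, `-W depend=` for job dependencies). The function and all example values are illustrative assumptions; the portal assembles these options for you.

```python
# Hypothetical sketch: scheduler-side equivalents of the Schedule After,
# Kill Job After, and Depend on Job fields, expressed as PBS qsub options.
from typing import Optional, List

def extra_qsub_options(start_after: Optional[str],
                       walltime: Optional[str],
                       depend_type: Optional[str],
                       depend_job: Optional[str]) -> List[str]:
    opts: List[str] = []
    if start_after:                  # e.g. "202501311800" (CCYYMMDDhhmm)
        opts += ["-a", start_after]  # job becomes eligible only after this time
    if walltime:                     # e.g. "02:00:00"; exceeding it kills the job
        opts += ["-l", f"walltime={walltime}"]
    if depend_type and depend_job:   # afterok / afterany on a previous job id
        opts += ["-W", f"depend={depend_type}:{depend_job}"]
    return opts

print(" ".join(extra_qsub_options("202501311800", "02:00:00",
                                  "afterok", "1234.pbsserver")))
```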