How to submit a Feko job using the Standard Application Definition
============================================================================

 

Step 1: Log in to the Access job submission web portal and select the Feko application icon.

The following image shows this step:


The following image shows the form loaded after selecting the application definition:


 

Step 2: Drag and drop the input file onto the Input file (.cfx) form field.

 

Field Name

Description

Version

Feko application version used to run the job.

 

Model Decomposition Type / Optimization Type

Select the decomposition type. The default is None; serial and SMP jobs usually need no decomposition, while MPI jobs typically use DDM.

 

Select Parallelism Type

Explained in detail below.



What is the input form field “Select Parallelism Type”? This field is explained in the default form fields table later in this document.

 

 

 

Drag and drop the input file (usually with a .cfx extension) to run the job/workload.

Field Name

Description

Number of Cores

Select the total number of cores required for the job.

 

Total Memory Required (in core) for Simulation (GB)

Total amount of memory required for the job.

Total Out of Core Memory Required (in GB)

Total scratch disk space required for the job.

Job Type

Analysis or Optimization mode of execution.

Select CFX file to use

Primary input file for the job (.cfx is the extension of the primary input file).

Run Type

Throughput – If the job does not use all the cores on a node, the remaining cores can be allocated to other jobs.
Performance – If the job does not use all the cores on a node, the remaining cores are not allocated to other jobs.

Output Directory

Directory to which all job files are copied back after completion. The default is set by the site-specific configuration.
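The portal translates these selections into a batch scheduler resource request on your behalf. The following Python sketch is only a rough illustration of that mapping, assuming a PBS Professional-style select/place syntax underneath; the actual translation is performed by the site's application definition and may differ.

```python
def build_resource_request(cores, memory_gb, run_type):
    """Hypothetical illustration of how the Step 2 selections could map to a
    PBS Professional-style resource request. The real mapping is handled by
    the site's application definition and may differ."""
    select = f"select=1:ncpus={cores}:mem={memory_gb}gb"
    # Performance mode requests exclusive node placement; Throughput mode
    # leaves unused cores on the node available to other jobs.
    place = "place=excl" if run_type == "Performance" else "place=shared"
    return f"-l {select} -l {place}"

# Example: a 16-core Throughput job needing 32 GB of in-core memory.
print(build_resource_request(16, 32, "Throughput"))
# -l select=1:ncpus=16:mem=32gb -l place=shared
```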

 

 

Step 3: Click the Submit button to submit the job.

 

 

The following form fields can be modified as per the job requirements.

The following image shows the form loaded after selecting the application definition:


 

The following are the default visible input form fields:

Field Name

Description

Default

Version

Select the version of the application to use.

Configurable; usually the latest version.

Model Decomposition / Optimization Type

Select the model decomposition method / optimization type.

[NONE, LDM, DDM, MMO, FSO]

NONE

Select Parallelism Type

Select the parallelism type (see the sketch after this table):

1. SERIAL: Allows the job to be run with a single core/CPU.

2. SMP: Allows the job to be run with CPUs available within one node in a cluster. Uses threads.

3. MPI: Allows the job to be run with CPUs available across multiple nodes. Uses MPI processes.

4. HYBRID: Allows the job to use a combination of MPI processes and threads. Generally faster and uses an optimal memory footprint.

SERIAL

Number of Cores

Select the total number of CPUs/cores to be used.

1

Total Memory Required (in core) for Simulation (GB)

Select the total memory required for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real memory need increases the chance that the simulation completes successfully.

Note: This is NOT the same as memory per core.

User editable with reasonable default value based on the selected number of cores.

Total Out of Core Memory Required (in GB)

Select the total out-of-core memory required (local scratch disk space) for the simulation in GB. This information is generally available in the simulation output files. Selecting a value close to the real out-of-core memory need increases the chance that the simulation completes successfully.

Note: This is NOT the same as memory per core.

User editable with reasonable default value based on the selected number of cores.

Job Type

Select Job Type (Analysis / Optimization)

Analysis

Select CFX file to use

Select the input file to be used with the simulation.

Note: Special characters and spaces are not allowed in the file name.

 

Run Type

Selecting Performance mode ensures that your job runs on an exclusive set of nodes, so your job should finish faster. On a busy cluster, this may lead to longer wait times.

 

Selecting Throughput mode causes the scheduler to find the required resources even if they are scattered across several nodes in the cluster. This increases the possibility of your job starting sooner than in Performance mode, but your simulations may run longer than usual.

Default: Throughput

Output directory

Specify the directory where the result files will be stored after the job is complete.

User editable; reasonable default: /stage/username/jobname_timestamp
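As noted in the Select Parallelism Type row above, the chosen parallelism type determines how the selected cores are divided between MPI processes and threads. The Python sketch below is a simplified, hypothetical illustration of that split; the portal applies the equivalent rules internally.

```python
def split_cores(parallelism, total_cores, threads_per_mpi=1):
    """Return (mpi_processes, threads_per_process) for a parallelism type.
    Hypothetical helper for illustration only."""
    if parallelism == "SERIAL":
        return 1, 1                      # single core/CPU
    if parallelism == "SMP":
        return 1, total_cores            # one process, threads within one node
    if parallelism == "MPI":
        return total_cores, 1            # one MPI process per core
    if parallelism == "HYBRID":
        # MPI processes x threads per process must equal the selected cores.
        if total_cores % threads_per_mpi != 0:
            raise ValueError("cores must divide evenly by threads per MPI process")
        return total_cores // threads_per_mpi, threads_per_mpi
    raise ValueError(f"unknown parallelism type: {parallelism}")

# Example: a HYBRID job on 16 cores with 4 threads per MPI process.
print(split_cores("HYBRID", 16, threads_per_mpi=4))   # (4, 4)
```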

 

 

The following image shows the form when the Show All option is enabled.

 

 

 

Apart from the default fields, additional form fields become visible. The following are the additional form fields:

 

Field Name

Description

Default

Queue

Select the queue to which the job must be submitted.

 

Job Name

Name of the job

Note: Special characters and spaces are not allowed in the job name.

 

CFX file (input file) name without extension

Number of threads per MPI

Uses the selected thread count per MPI process; the total number of CPU cores selected above will not be exceeded. Use more threads to minimize the memory footprint.

User editable reasonable default: 1

Run In Place

Check this box if you want to run your job in the same directory as your input file. This avoids expensive file copies but may impact the performance of the simulation.

User selectable reasonable default: False

Include Files

Select the include files to be used with the simulation. Generally, the include files are added automatically, but this is only an assist feature. Please ensure all the necessary include files are added and available before submitting the job.

---

Schedule After

Select this option to schedule your job to run at a specific time in the future. Note that the job becomes eligible to run after the selected time. Actual start times may differ based on the resources available on the cluster.

---

Kill Job After

Specify the estimated time that your job is expected to run on the cluster. Providing a good estimate increases the chances of your job starting sooner.

Note: The scheduler will kill the job if the actual wall time exceeds the estimated time specified.

 

---

Dependency type

Specify the dependency type (see the sketch at the end of this document):

afterok: start the current job only if the job it depends on completes successfully.

afterany: start the current job after the job it depends on finishes, irrespective of its exit status.

Default: afterok

Dependent on Job

Specify the job ID of a previously submitted job after which the current submission is expected to run.

---

Additional Options

Specify any valid command line options that you need to pass to the application. The options will be passed verbatim to the application.

---
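As mentioned in the Dependency type row above, afterok and afterany control when a dependent job becomes eligible to run. The sketch below is purely an illustration, assuming a PBS-style dependency attribute, of how those choices could be expressed; the portal builds the equivalent request from the form fields for you.

```python
def dependency_option(dependency_type, depend_on_job_id):
    """Hypothetical illustration of a PBS-style job dependency attribute.
    afterok  -> run only if the depended-on job completed successfully.
    afterany -> run once the depended-on job finishes, regardless of status."""
    if dependency_type not in ("afterok", "afterany"):
        raise ValueError(f"unsupported dependency type: {dependency_type}")
    return f"-W depend={dependency_type}:{depend_on_job_id}"

# Example: make the current submission wait for job 12345 to succeed.
print(dependency_option("afterok", "12345"))   # -W depend=afterok:12345
```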