Feko 2026.0 includes Intel MPI library version 2021.16. This updated Intel MPI version resolves several previously reported issues that, in earlier releases, required additional user configuration.
After upgrading to Feko 2026.0 (or any version using Intel MPI 2021.16 or newer), no manual configuration is required.
Users who previously applied the workaround described below should remove the I_MPI_FABRICS environment variable from their setup.
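As a quick check, a small script along the following lines can flag whether the old workaround is still in place. The variable name comes from this article; the script itself is only an illustrative sketch:

```python
import os

# Detect a leftover I_MPI_FABRICS setting from the old workaround.
# With Intel MPI 2021.16 or newer this variable should not be set.
fabrics = os.environ.get("I_MPI_FABRICS")
if fabrics is not None:
    print(f"I_MPI_FABRICS is set to '{fabrics}'; "
          "remove it before running Feko 2026.0 or newer.")
else:
    print("I_MPI_FABRICS is not set; no action needed.")
```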
Known Issues with Feko 2022.0–2025.1 (Intel MPI 2021.2–2021.12)
Feko 2022.0 through Feko 2025.1 shipped with Intel MPI (IMPI) library versions 2021.2–2021.12. These versions introduced an advanced mechanism that automatically detects the available hardware and communication fabrics to optimize performance for each specific system configuration. Several system-specific issues were observed when using these versions.
The observed behaviour and recommended solutions are summarized below:
- Feko fails to start — MPI library errors referencing UCX and/or fabrics are displayed.
  - Cause: Outdated fabric-related drivers.
  - Solution: Update InfiniBand or other fabric drivers to the latest versions, as the affected IMPI releases rely on newer driver mechanisms.
- Feko hangs during execution — typically during LU factorization, though it may occur at other computation stages.
  - Observed on: Linux clusters with InfiniBand.
  - Workaround: Set the environment variable I_MPI_FABRICS=ofi (see the sketch after the Important note below).
- Feko runs out of memory — during the “Backward substitution for FEM coupling matrix” phase in multi-frequency simulations.
  - Observed on: Both Windows and Linux, depending on hardware configuration.
  - Workaround: Same as above; set I_MPI_FABRICS=ofi.
Important:
Do not set I_MPI_FABRICS=ofi globally or by default, as this may reduce the performance of parallel Feko simulations. When using Intel MPI 2021.16 or newer, do not set I_MPI_FABRICS at all — it is no longer required.
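For installations still on Feko 2022.0–2025.1, one way to respect this caution is to scope the variable to a single affected run rather than exporting it globally. Below is a minimal sketch; the launcher name runfeko, the model file, and the process count are assumptions to be adjusted for your installation:

```python
import os
import subprocess

# Copy the current environment and apply the workaround only to this
# invocation, so I_MPI_FABRICS is never exported globally or by default.
env = dict(os.environ)
env["I_MPI_FABRICS"] = "ofi"  # workaround for the hang/out-of-memory issues above

# Hypothetical parallel Feko launch; replace the command line with the
# one used at your site.
subprocess.run(["runfeko", "model.fek", "-np", "8"], env=env, check=True)
```

Because the modified environment is passed only to the child process, the variable disappears when the run finishes and does not affect other simulations on the same machine.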