Memory Error when solving with MLFMM
Hi,
I am trying to solve a sphere with MLFMM but I received the error below. I was able to solve the same model using the default solver (MoM) without getting any error.
'ERROR 32476: Not enough memory available for dynamic allocation'
I have attached a zip file which contains the input and output files. Any help would be much appreciated.
Thanks.
Answers
-
Hi Villcent
The MLFMM is intended for electrically large models. At the lower frequencies in your range, the model is sub-wavelength in size and solves very inefficiently with MLFMM.
As a rule of thumb, you can use the MLFMM when the model is around 5 wavelengths in the largest dimension.
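As a rough illustration of that rule of thumb, the electrical size is easy to check by hand. A minimal sketch (the 0.15 m dimension below is an example value, not taken from the attached model):

```python
# Rough check of a model's electrical size, to judge whether MLFMM pays off.
C0 = 299_792_458.0  # speed of light in free space, m/s

def size_in_wavelengths(largest_dim_m: float, freq_hz: float) -> float:
    """Largest model dimension expressed in free-space wavelengths."""
    return largest_dim_m * freq_hz / C0

# Illustrative values only: a 0.15 m object at two frequencies
print(size_in_wavelengths(0.15, 2e9))   # about 1 wavelength: better suited to MoM
print(size_in_wavelengths(0.15, 20e9))  # about 10 wavelengths: MLFMM territory
```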
Mel
0 -
Hi Mel
Thank you for your previous reply.
I have some memory issues as well when solving electrically large models. I am currently working on a task to calculate the RCS of a sphere at a few THz using MLFMM.
I have attached the model that I was working on. When I tried to run the simulation on a single computer node, the simulation was terminated with the following error.
“ERROR 3617: FEKO process was terminated by kill”
From the error file of the job, I found the following message. However, in the FEKO .out file, the reported memory requirement is far less than this limit.
=>> PBS: job killed: mem 133560896kb exceeded limit 125829120kb
Therefore, I tried to run the simulation on two computer nodes instead. But this time, I received a different error.
“ERROR 36772: Not enough memory available for dynamic allocation”
Is there any way to overcome this problem? Is it not suitable to use MLFMM in this case?
I have attached two different zip files which contain the output files from these two simulations. Any help would be much appreciated.
Thanks.
PS: the simulations were run on a computer cluster with PBSPro batch system.
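A note on the PBS limit in the message above: the scheduler kills any job that exceeds its requested memory, so it can help to request memory explicitly in the job script so the job lands on a node with enough RAM. A hypothetical PBSPro script (queue settings, core count, memory figure, and file name are placeholders, not taken from the attached files):

```shell
#!/bin/bash
# Hypothetical PBSPro job script for a parallel FEKO run.
# All resource values below are illustrative placeholders.
#PBS -N feko_mlfmm
#PBS -l select=1:ncpus=24:mem=180gb
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"
# Launcher invocation is illustrative; adjust the model name and core count.
runfeko model.cfx -np 24
```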
0 -
Hi Villcent
Some pointers:
- I noticed you are still using version 2018.0. Note that version 2018.2 introduced memory savings via MPI3 shared memory (MPI3-SHM).
- You can also change the preconditioner to SPAI (Solver settings, Advanced tab in CADFEKO, Preconditioner).
- For RCS, you can relax the meshing. Try a Custom size of lam0/6.5.
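For reference, lam0/6.5 works out to a concrete edge length once the frequency is fixed. A quick sketch (the 2.52 THz value is the frequency of the sphere model discussed later in this thread):

```python
# Convert a "lambda over N" meshing rule into a physical edge length.
C0 = 299_792_458.0  # speed of light in free space, m/s

def mesh_edge_length(freq_hz: float, fraction: float = 6.5) -> float:
    """Free-space wavelength divided by `fraction`, in metres."""
    return C0 / freq_hz / fraction

# lam0/6.5 at 2.52 THz, printed in micrometres
print(mesh_edge_length(2.52e12) * 1e6, "um")  # about 18.3 um
```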
Mel
0 -
Hi @mel
Is there a rule of thumb for how coarse the mesh size can be for RCS while still getting reliable results?
PS: I'm planning to further increase the radius of the sphere but this would increase the electrical size of the model significantly.
Thank you.
0 -
Hi @Villcent,
In your model the ratio of highest to lowest frequency is 500(!). This means the mesh size is a factor of 500 too small for the lowest frequency, and the efficiency of the MLFMM suffers greatly from overly small mesh elements. I would recommend:
- either divide the model in several models, so that the ratio is not bigger than 3. Here:
- 2 GHz- 6 GHz
- 6 GHz - 18 GHz
- 18 GHz - 54 GHz
- 54 GHz - 162 GHz
- 162 GHz - 486 GHz
- 486 GHz - 1 THz
- or (preferred) solve the model with MoM using HOBF (Higher Order Basis Functions)
- Also, I would recommend using adaptive frequency sampling instead of linearly spaced discrete frequencies. I'm sure this will lead to far fewer frequency points while producing the same RCS result.
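The band plan above can be generated mechanically. A small sketch that splits any sweep into sub-bands whose fmax/fmin ratio does not exceed 3:

```python
# Split a wide frequency sweep into sub-bands with fmax/fmin <= ratio,
# matching the 2 GHz - 1 THz band plan suggested above.
def split_bands(f_start: float, f_stop: float, ratio: float = 3.0):
    bands = []
    lo = f_start
    while lo < f_stop:
        hi = min(lo * ratio, f_stop)  # cap the last band at the sweep end
        bands.append((lo, hi))
        lo = hi
    return bands

for lo, hi in split_bands(2e9, 1e12):
    print(f"{lo / 1e9:g} GHz - {hi / 1e9:g} GHz")
```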
This is how I would try it:
0 -
-
Hi @Torben Voigt,
Thank you for your reply.
However, I am facing the same issue with another model (3rd post in this topic). I am trying to calculate the RCS of a sphere with a radius of a few mm at a single THz frequency.
The model attached (in the 3rd post) is a sphere with a 4 mm radius illuminated by a source at 2.52 THz. The suggestion given by Mel (in the 4th post), increasing the mesh size to lam0/6.5, works for that model.
Now I am planning to increase the radius of the sphere further, to 6-10 mm, but this will cause the memory error again. Therefore, I am wondering whether the mesh size can be increased further or whether there are other workarounds. I am concerned about the accuracy of the results when the mesh is too coarse.
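A back-of-envelope estimate of why the larger radius hits memory limits: for a fixed mesh edge length, the triangle count on a sphere grows with its surface area, i.e. with radius squared. A sketch using the 4,860,384-triangle figure quoted in the reply below:

```python
# For a fixed mesh edge length, surface triangle count scales with r^2.
def scaled_triangles(n_tri_ref: int, r_ref_mm: float, r_new_mm: float) -> int:
    """Estimate triangle count after scaling the sphere radius."""
    return round(n_tri_ref * (r_new_mm / r_ref_mm) ** 2)

# Going from a 4 mm to a 10 mm radius multiplies the triangle count by 6.25
print(scaled_triangles(4_860_384, 4.0, 10.0))  # 30377400, about 30.4 million
```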
Thank you.
0 -
Hi @Villcent,
The model Sphere_4mm_MLFMM.cfx has 4,860,384 mesh triangles, which means 7,290,576 unknowns in that case. Your simulation was terminated because it ran out of main memory: when you started the simulation, 120.696 GByte were available, but for an MLFMM simulation with 7,290,576 unknowns and 24 parallel cores you can expect the required memory to be around 250 - 350 GByte (with the given settings). What @mel wrote are basic steps to reduce the required memory (fewer mesh triangles, the SPAI preconditioner, fewer parallel cores). In your case, I doubt this will be sufficient.
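For anyone checking the triangle-to-unknown ratio above: with RWG basis functions, each mesh edge carries one unknown, and a closed triangle mesh has 3/2 as many edges as triangles, which is how 4,860,384 triangles lead to 7,290,576 unknowns:

```python
# Unknown count for RWG basis functions on a closed triangle mesh:
# one unknown per edge, and edges = 3/2 * triangles for a closed surface.
def rwg_unknowns(n_triangles: int) -> int:
    return n_triangles * 3 // 2

print(rwg_unknowns(4_860_384))  # 7290576, matching the figure in the post
```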
Regarding the mesh size, I would agree that lambda/6.5 shouldn't be a problem for RCS calculation, but I wouldn't go much coarser than that. At some point you will have to accept that your hardware isn't sufficient anymore. What users then do is switch to FEKO's asymptotic methods (RL-GO, PO (LE-PO), UTD). Please note that these are not full-wave methods anymore and they introduce approximations: both RL-GO and PO (LE-PO) do not compute currents in the shadow regions, and UTD is not possible for RCS since both the source and the field point are at infinite distance.
To see whether the asymptotic methods introduce a significant deviation, one would compare e.g. PO (LE-PO) with MLFMM at a frequency where the hardware is still sufficient. If the results agree well there, they will most likely agree even better at higher frequencies.
For now I would suggest trying the two attached models; in the latter I changed the mesh size to lambda/6.5 and chose SPAI as the preconditioner.
0