I'm having trouble with a refined mesh size of 0.1 mm
Hello! I was refining the mesh of another, more complex model than the one shown below, to get mesh-independent, converged stress results. When I ran the OptiStruct solver, it was stuck at 0.00% for more than two hours, so I aborted the run.
I tried to find out what was happening, so I built a simpler model, a cantilever beam, with the same setup: a refinement box with a 0.1 mm mesh size, and the same thing happened.
The image below shows the cantilever beam with the 0.1 mm refinement box:
What's happening? I hope somebody can help me, thanks!
Answers
-
Log file? Error messages?
What about the node count of your model? Your hardware: RAM? CPU? OS?
-
Q.Nguyen-Dai said:
Log file? Error messages?
What about the node count of your model? Your hardware: RAM? CPU? OS?
Hi! Many thanks for your quick answer. There are no error messages; it's just stuck at 0.00%. When I run the same analysis with first-order tetras (tet4) it works fine, but when I change to second-order tetras (tet10), the analysis makes no progress after more than 2 hours of running... and it creates a .rs file of 30 GB and growing...
I'm sharing my HyperMesh model:
(The model is bigger than 15 MB, so I can't attach the file directly.)
The model has almost 600,000 nodes.
16 GB RAM, Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz (4 cores), Windows 10 version 20H2, Nvidia GTX 1070 (driver version 457.30).
Is this normal, or maybe a bug?
-
Does nobody know anything else about this topic? Could it be the software version? Is the mesh too fine?
-
Hi,
your model is probably too big for your RAM (in-core), so the solver uses disk space (out-of-core), which is much slower.
The mesh is too fine for your hardware, but also for practical purposes. We have to apply common sense with the finite element method, which is inherently an approximation. Driving mesh convergence (discretization error) past a certain point does not make sense, since boundary conditions and material property assumptions (modeling errors) usually introduce greater errors. For more details refer to:
Finite Element Analysis for Design Engineers by Paul Kurowski
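To make that concrete, here is a minimal sketch of the kind of convergence study being discussed: record the peak stress at each mesh size and stop refining once the relative change drops below a tolerance. The mesh sizes and stress values are placeholders, not taken from this model:

```python
# Minimal mesh convergence check: refine until the relative change in
# peak stress falls below a tolerance. All values are placeholders.
results = {
    1.6: 230.0,  # mesh size (mm) -> peak von Mises stress (MPa)
    0.8: 248.0,
    0.4: 265.0,
}
tolerance = 0.05  # accept the mesh when the change is below 5%

sizes = sorted(results, reverse=True)  # coarse to fine
for coarse, fine in zip(sizes, sizes[1:]):
    change = abs(results[fine] - results[coarse]) / abs(results[coarse])
    print(f"{coarse} mm -> {fine} mm: relative change = {change:.1%}")
    if change < tolerance:
        print(f"Converged at {fine} mm within {tolerance:.0%}")
        break
else:
    print("Not converged; refine further or extrapolate")
```

-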
Álvaro Cuadrado_21918 said:
Does nobody know anything else about this topic? Could it be the software version? Is the mesh too fine?
The .out file from OptiStruct gives you an estimate of the amount of memory (RAM) necessary for your model.
Second-order models need much more RAM than first-order ones. So take a look at the amount of RAM needed and check whether your 16 GB of RAM covers what is needed, at least for the out-of-core memory.
As mentioned by others here, sometimes your model is just too big.
Also bear in mind that Windows and other apps already take a good part of your RAM, so you would probably have around 10 GB free for running this, I'd guess.
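To get a feel for how much more, here is a back-of-the-envelope sketch. It assumes roughly 6-7 edges per node, a typical ratio for large tetrahedral meshes (assumed here, not measured from this model), with tet10 adding one mid-side node per edge:

```python
# Back-of-the-envelope: why tet10 needs far more RAM than tet4.
corner_nodes = 600_000       # node count reported for the tet4 mesh
edges_per_node = 6.5         # assumed typical ratio for large tet meshes

midside_nodes = int(corner_nodes * edges_per_node)  # one per edge (tet10)
tet10_nodes = corner_nodes + midside_nodes

dof_tet4 = 3 * corner_nodes   # 3 translational DOF per node
dof_tet10 = 3 * tet10_nodes

print(f"tet4  DOF: {dof_tet4:,}")   # 1,800,000
print(f"tet10 DOF: {dof_tet10:,}")  # 13,500,000
# Direct sparse solvers scale worse than linearly with DOF, so the
# in-core memory estimate grows even faster than this ~7.5x ratio.
```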
-
Adriano Koga_20259 said:
The .out file from OptiStruct gives you an estimate of the amount of memory (RAM) necessary for your model.
Second-order models need much more RAM than first-order ones. So take a look at the amount of RAM needed and check whether your 16 GB of RAM covers what is needed, at least for the out-of-core memory.
As mentioned by others here, sometimes your model is just too big.
Also bear in mind that Windows and other apps already take a good part of your RAM, so you would probably have around 10 GB free for running this, I'd guess.
Thanks for your reply. I have been monitoring the RAM usage during the run, and I have more or less 10 GB free at the beginning of the process, as you said. But when it's stuck at 0.00%, the maximum RAM usage is only 10 GB, and there are 6 GB free that the solver is not using.
What can I do to make it use this free RAM?
I attach the "memory estimation information" from this run:
Reviewing this .out file, do I just need the 10604 MB free to run the model, or do I need the sum of the memory estimates?
I have enough estimated memory for the out-of-core solution, but my PC is not using this free RAM.
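One way to check is to pull the estimates out of the .out file and compare each against the free RAM. This is a rough sketch: the regex assumes lines roughly of the form "Estimated Memory (RAM) for In-Core Solution : 54000 MB", and model.out is a placeholder name, so adjust both to your file:

```python
# Sketch: parse OptiStruct memory estimates and compare with free RAM.
# The line format is assumed; adapt the regex to your .out file.
import re

def memory_estimates(out_path):
    pattern = re.compile(
        r"Estimated Memory.*?for\s+(.+?)\s+Solution\s*:?\s*(\d+)\s*MB",
        re.IGNORECASE,
    )
    with open(out_path) as f:
        return {m.group(1).strip(): int(m.group(2))
                for m in pattern.finditer(f.read())}

free_ram_mb = 10_604  # free RAM reported above
for mode, mb in memory_estimates("model.out").items():
    verdict = "fits" if mb <= free_ram_mb else "does NOT fit"
    print(f"{mode} solution: {mb} MB -> {verdict} in {free_ram_mb} MB free")
```

Note that the estimates are alternative running modes, not additive: you compare your free RAM against the single mode you intend to run in, not the sum. And if an estimate does fit in RAM but the solver still stays out-of-core, run options such as -core in or -len <MB> are the usual way to let OptiStruct allocate more memory; check the Run Options page of your version's documentation for the exact syntax.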
-
Simon Križnik said:
Hi,
your model is probably too big for your RAM (in-core), so the solver uses disk space (out-of-core), which is much slower.
The mesh is too fine for your hardware, but also for practical purposes. We have to apply common sense with the finite element method, which is inherently an approximation. Driving mesh convergence (discretization error) past a certain point does not make sense, since boundary conditions and material property assumptions (modeling errors) usually introduce greater errors. For more details refer to:
Finite Element Analysis for Design Engineers by Paul Kurowski
Thanks for the book reference! I'll take it into account.
I refined the mesh down to 0.1 mm because the convergence error in the stress results between the 0.8 mm and 0.4 mm meshes was 7% in another, more complex model, but I'll try a slightly coarser mesh to compare the results, or maybe use a cluster.
Did you mean that I have to take into account the "estimated memory for in-core solution"?
Many thanks for your reply!
-
Álvaro Cuadrado_21918 said:
Thanks for the book reference! I'll take it into account.
I refined the mesh down to 0.1 mm because the convergence error in the stress results between the 0.8 mm and 0.4 mm meshes was 7% in another, more complex model, but I'll try a slightly coarser mesh to compare the results, or maybe use a cluster.
Did you mean that I have to take into account the "estimated memory for in-core solution"?
Many thanks for your reply!
Mesh convergence is impossible (the results will not converge, but diverge) at stress singularities (fixed constraints, point loads, sharp reentrant corners). Such stress singularities appear at the corners of a cantilever beam fully fixed at one end:
Yes, you should compare the "estimated memory for in-core solution" against your RAM size. Based on the .out file you provided, you would need at least 54 GB available.
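The divergence can be made precise. Near a sharp corner or a fixed edge, linear-elastic theory (the standard Williams-type asymptotics) predicts a stress field of the form

$$\sigma(r) \sim K \, r^{\lambda - 1}, \qquad 0 < \lambda < 1,$$

where $r$ is the distance from the singular point and $\lambda$ depends on the local geometry. Since $\lambda - 1 < 0$, the stress grows without bound as $r \to 0$, so each refinement simply samples the field closer to the corner and reports a higher peak.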
-
Simon Križnik said:
Mesh convergence is impossible (the results will not converge, but diverge) at stress singularities (fixed constraints, point loads, sharp reentrant corners). Such stress singularities appear at the corners of a cantilever beam fully fixed at one end:
Yes, you should compare the "estimated memory for in-core solution" against your RAM size. Based on the .out file you provided, you would need at least 54 GB available.
Many thanks for the information and the link! Now I understand why it was stuck at 0.00%, so I'll try another approach for my analysis.
Sorry, maybe I didn't explain well what I wanted to do. This cantilever beam is just a model I made to figure out what was happening with the solver when I refine the mesh to 0.1 mm with tet10 elements, and of course, in that case the stress results didn't converge to a value because of the stress singularities. But it's not the real model I'm trying to analyse. In fact, the real model is an upright, and the maximum stress isn't located at any stress singularity.
If I use a first-order hexa mesh (hex8), maybe the estimated RAM would be reduced and it would converge with a finer mesh. What do you think? The issue is that the geometry is a little too complex for this type of mesh, so using a cluster could be a good idea, or settling for the 7% convergence error.
-
Álvaro Cuadrado_21918 said:
Many thanks for the information and the link! Now I understand why it was stuck at 0.00%, so I'll try another approach for my analysis.
Sorry, maybe I didn't explain well what I wanted to do. This cantilever beam is just a model I made to figure out what was happening with the solver when I refine the mesh to 0.1 mm with tet10 elements, and of course, in that case the stress results didn't converge to a value because of the stress singularities. But it's not the real model I'm trying to analyse. In fact, the real model is an upright, and the maximum stress isn't located at any stress singularity.
If I use a first-order hexa mesh (hex8), maybe the estimated RAM would be reduced and it would converge with a finer mesh. What do you think? The issue is that the geometry is a little too complex for this type of mesh, so using a cluster could be a good idea, or settling for the 7% convergence error.
Glad to help.
First-order elements have a lower computational cost than second-order elements, allowing more elements for the given amount of RAM. However, I think the results should be similar for first order (more elements) and second order (additional mid-side nodes) at the same RAM usage.
If the geometry is too complex for a hexahedral mesh, use a second-order tetrahedral mesh, because first-order tetras may be too stiff in bending (a similar issue to first-order trias).
A 7% convergence error is not acceptable; can you show the offending area?
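One quick way to quantify that over-stiffness on a test case like the cantilever beam is to compare the FE tip deflection against the Euler-Bernoulli closed-form value. A minimal sketch, with placeholder load and section values rather than numbers from the actual model:

```python
# Euler-Bernoulli tip deflection of an end-loaded cantilever,
# delta = P*L^3 / (3*E*I): an analytic benchmark for judging how much
# an element type under-predicts deflection. Placeholder values only.
P = 100.0          # tip load [N]
L = 100.0          # beam length [mm]
E = 210_000.0      # Young's modulus [MPa = N/mm^2]
b, h = 10.0, 10.0  # rectangular cross-section [mm]

I = b * h**3 / 12.0             # second moment of area [mm^4]
delta = P * L**3 / (3 * E * I)  # tip deflection [mm]
print(f"analytic tip deflection: {delta:.4f} mm")  # 0.1905 mm

# If the tet4 model deflects noticeably less than the benchmark, the
# elements are behaving too stiffly, which also distorts the stresses.
```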
-
Simon Križnik said:
Glad to help.
First-order elements have a lower computational cost than second-order elements, allowing more elements for the given amount of RAM. However, I think the results should be similar for first order (more elements) and second order (additional mid-side nodes) at the same RAM usage.
If the geometry is too complex for a hexahedral mesh, use a second-order tetrahedral mesh, because first-order tetras may be too stiff in bending (a similar issue to first-order trias).
A 7% convergence error is not acceptable; can you show the offending area?
I did a test with the same mesh refinement in first order and second order, and there is a big difference in the stress results.
Of course, I'm going to send you a couple of images. The first one is a second-order tetrahedral mesh with a 0.4 mm mesh refinement, and the second one is a first-order tetrahedral mesh with a 0.1 mm mesh refinement.
The area of maximum stress is inside a 5 mm radius.
In the analysis, the value kept increasing with the second-order mesh (248 MPa with the 0.8 mm refinement versus 265 MPa with the 0.4 mm refinement), and when I ran the first-order analysis the value decreased to 241 MPa.
I'm going to try a finer mesh refinement with second-order elements, because the first order didn't give good results.
Yes, that error is not acceptable, so maybe using a cluster will be the solution.
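For what it's worth, the two second-order results allow a rough two-grid Richardson-type extrapolation of where the stress is heading. This sketch assumes the mesh size halves each step and an assumed convergence order of 2, which is only nominal; a third mesh would be needed to verify the observed order:

```python
# Two-grid Richardson-type extrapolation from the second-order results.
f_coarse = 248.0  # MPa at 0.8 mm refinement
f_fine = 265.0    # MPa at 0.4 mm refinement
r = 2.0           # refinement ratio (0.8 mm -> 0.4 mm)
p = 2.0           # assumed order of convergence (unverified)

f_extrapolated = f_fine + (f_fine - f_coarse) / (r**p - 1)
gap = abs(f_extrapolated - f_fine) / f_extrapolated

print(f"extrapolated stress: {f_extrapolated:.0f} MPa")  # ~271 MPa
print(f"fine-mesh gap vs limit: {gap:.1%}")              # ~2%
```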