Multiple Cores for HyperMesh Processes
For mesh generation/optimization/anything else that is a pre-processing operation occurring within HyperMesh itself rather than in the solver:
How do I utilize all of my CPU cores to power through mesh generation and mesh optimization? I'm unfortunately stuck with an incredibly high element count, and it's taking in excess of 2 hours to optimize the mesh. Any recommendations?
Answers
-
I don't know if these can be parallel processed. You can use multiple CPUs for batch meshing purposes, but I believe this is one of the exceptions.
That said, looking at this and other topics makes me curious why such a huge mesh is necessary.
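For what it's worth, the batch route can also be parallelized at the job level rather than inside a single meshing run. A minimal Python sketch of the idea is below; the command name and part files are placeholders (this is not the actual BatchMesher command line), so check your installation's documentation for the real invocation:

```python
import subprocess

# Placeholder command and inputs -- NOT the actual batch-meshing CLI; substitute
# whatever your Altair installation documents for batch meshing.
BATCH_MESH_CMD = "your_batch_mesher_command"
parts = ["fiber_block_1.stp", "fiber_block_2.stp", "matrix.stp"]

# Launch one OS process per part; the OS scheduler spreads the jobs across
# cores, which helps even if each individual meshing run is single-threaded.
jobs = [subprocess.Popen([BATCH_MESH_CMD, part]) for part in parts]
exit_codes = [job.wait() for job in jobs]
print("exit codes:", exit_codes)
```

This only helps when the work can be split into independent jobs; it does not make a single interactive meshing or optimization run use more cores.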
0 -
Certainly! If you have recommendations on how to proceed, I would also appreciate them :)
I'm doing a mesh convergence analysis on a unit cell of a carbon fiber composite that has carbon nanofibers woven perpendicular to the much larger carbon fibers. I'm attempting to verify thermal conductivity with an FEA model against lab measurements, and my university advisor has been *far* less helpful than I had expected. This thesis was originally pitched to me as a project I could send off to a supercomputer and get processed; however, after talking with the IT person who runs the supercomputer, it appears the only way to do that is to export to OpenFOAM (their machine runs Linux) and do it that way, which I have absolutely no idea how to even begin to attempt.
The diameter of the larger carbon fibers is 7.1 µm, whereas the diameter of the nanofibers is 0.1 µm. The problem is where the mesh descends on the carbon nanofiber as it is sandwiched between the larger fibers. I can't adjust the geometry, as this is the same setup that was used in the lab, so I'm unfortunately stuck with the volume fractions of the carbon fibers. I believe the distance between the two larger fibers is 0.314, and the nanofiber splits that distance in two, putting a huge gradient on mesh size as it approaches that bottleneck. In order to model realistic boundary conditions, I have to 'surround' the RVE with at least a quarter cell of itself, so I end up with a 2x2 model. Originally I attempted to do a quarter analysis for mesh convergence and then mirror that quarter mesh 16 times to get the same result, but the few sims I did were showing very nasty 'hot spots'.
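To make the size disparity concrete, here is a rough back-of-envelope estimate of how the local element size forced by the nanofiber drives the tet count. All of the numbers below (unit-cell size, refined-band size, elements-per-diameter target) are illustrative assumptions, not values from the actual model:

```python
# Back-of-envelope tet-count estimate -- every number here is an assumption.
FIBER_D = 7.1          # larger carbon fiber diameter, um
NANO_D = 0.1           # nanofiber diameter, um
ELEMS_ACROSS = 4       # assumed elements wanted across the nanofiber diameter

coarse_h = FIBER_D / 10          # assumed element size away from the nanofiber, um
fine_h = NANO_D / ELEMS_ACROSS   # element size needed at the nanofiber, um

def rough_tet_count(volume_um3: float, h: float) -> float:
    """Very rough tet count for a region of given volume meshed at size h."""
    return volume_um3 / (h**3 / 6.0)

cell_volume = 20.0**3             # assumed unit-cell volume, um^3 (placeholder)
refined_band = 20.0 * 20.0 * 0.3  # assumed refined band around the nanofiber, um^3

print(f"coarse/fine size ratio: {coarse_h / fine_h:.0f}x")
print(f"coarse region tets : {rough_tet_count(cell_volume - refined_band, coarse_h):.2e}")
print(f"refined band tets  : {rough_tet_count(refined_band, fine_h):.2e}")
```

Even with these made-up dimensions, the thin refined band around the nanofiber dominates the total element count, which is exactly the gradient problem described above.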
Previous work has been done modeling the mechanical behavior of this. That study ran simulations up to 30 million nodes, so I'm assuming the element count was upwards of 1 billion. The person who did it worked at a very well-known large company that uses HyperMesh for all of its simulations. I suspect he did his analysis by 'borrowing' company resources to process his model, as he worked in their FEA department to begin with. I've contacted my advisor numerous times about this, and I've essentially been given the 'I dunno' response.
I purchased parts to build a personal computer that might limp through a convergence analysis out of core: 128 GB of RAM paired with 8 cores clocked at 3.7 GHz. I'm hoping I can see convergence before the model outgrows the RAM.
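As a sanity check on that 128 GB budget, here is a very rough estimate of how many nodes a steady-state thermal solve might handle in core. The bytes-per-node figure is an assumption for illustration, not a published solver requirement, and direct-solver fill-in can push real usage far higher:

```python
# Very rough in-core memory estimate for a steady-state thermal solve
# (one temperature DOF per node). BYTES_PER_NODE is an ASSUMPTION, not a
# solver specification; factorization fill-in can be much larger in practice.
RAM_GB = 128
BYTES_PER_NODE = 5_000   # assumed: assembled matrix + factor + workspace

max_nodes = RAM_GB * 1024**3 / BYTES_PER_NODE
print(f"rough in-core ceiling: ~{max_nodes / 1e6:.0f} million nodes")

# Flip it around: memory needed for a few target model sizes.
for nodes_million in (30, 100, 300):
    gb = nodes_million * 1e6 * BYTES_PER_NODE / 1024**3
    print(f"{nodes_million:>4} M nodes -> ~{gb:,.0f} GB at the assumed rate")
```

If the assumed rate is anywhere near right, a 30-million-node model is already at the edge of 128 GB, which is why out-of-core solving (or a coarser modeling strategy) matters here.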
0 -
For an interactive session of HyperMesh/HyperView, I don't think you can run more than one core per session.
For batch tasks, maybe. I have never tried it.
0 -
Since you have this geometry, I would recommend trying SimLab to generate this mesh; for tetra elements the mesh transition is usually better than in HyperMesh. I would take a shot with SL in this case, as I'd say SL can do much better modeling for you. Take a look at the SimLab section of the forum and the SL Learning Center.
By the way, this is very similar to what Altair MultiScale Designer does (the base program was developed by Prof. Jacob Fish of Columbia).
You develop a unit cell and then create a reduced-order model to take that unit cell to a macro-scale model.
0 -
Just as a clarification: SL allows you to get a smoother transition from coarse to fine mesh with less effort.
A good HM user would be able to get this too, but with some effort. Usually SL can give you a good-quality mesh with a smooth transition, resulting in fewer elements overall.
0 -
Given the large disparity in diameter, try modeling the larger fibers and the matrix with 3D solid elements and the nanofibers with 1D beam or truss elements. The challenge is to extract the midlines of the nanofibers and seed the beam/truss element nodes to be coincident with the solid mesh nodes. This approach would greatly reduce the element count at the expense of neglecting the through-thickness thermal gradient in the nanofibers, due to the 1D element simplification.
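For the thermal problem, each nanofiber segment modeled this way is effectively a two-node conduction link with conductance kA/L. A minimal sketch of that element matrix is below; the conductivity value and segment length are placeholders, not measured properties:

```python
import numpy as np

def conduction_link(k: float, diameter: float, length: float) -> np.ndarray:
    """Two-node 1D conduction element: K = (k*A/L) * [[1, -1], [-1, 1]].

    k        -- thermal conductivity (W/m.K); the value used below is a placeholder
    diameter -- nanofiber diameter (m)
    length   -- segment length between the two nodes (m)
    """
    area = np.pi * diameter**2 / 4.0
    conductance = k * area / length
    return conductance * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Example segment: 0.1 um diameter nanofiber, 0.05 um long, assumed k = 100 W/m.K
K_e = conduction_link(k=100.0, diameter=0.1e-6, length=0.05e-6)
print(K_e)
```

Because each link only contributes temperature DOFs at nodes shared with the solid mesh, the cost of resolving the nanofibers nearly vanishes compared to meshing them with tets.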
0 -
Is it a full tetra model?
How small a mesh size do you need?
0 -
@Simon Križnik
So, I tried looking at the 1D beam approach, but I'm concerned that the through-thickness transfer is necessary to capture. It wouldn't hurt to run that sim and compare it to a fully 3D model, though; if there is negligible difference, that would be a huge plus for the modeling. I will try that and compare it to the three runs I have already done to see if there is much of a difference before I go any further.
@Adriano A. Koga
Launching Simlab now. Thank you for the recommendation. I hope that this gets me where I need to be.
@tinh
Yes, it is a full tetra model. I cannot answer how small the mesh needs to be; I'll just have to keep making it smaller until the max heat flux values start to converge to a single value.
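One simple way to decide when to stop refining is to track the relative change in max heat flux between successive meshes. A minimal sketch is below; the element sizes and flux values are made-up placeholders, not results from the actual runs:

```python
# Convergence check on max heat flux vs. mesh refinement.
# The (element size, max heat flux) pairs below are PLACEHOLDERS, not real results.
runs = [
    (0.50, 1.92e4),
    (0.25, 2.08e4),
    (0.125, 2.12e4),
]

TOL = 0.02  # accept convergence when successive runs differ by less than 2%
for (h_prev, q_prev), (h, q) in zip(runs, runs[1:]):
    change = abs(q - q_prev) / abs(q)
    status = "converged" if change < TOL else "keep refining"
    print(f"h={h:g}: change vs h={h_prev:g} is {change:.1%} -> {status}")
```

0 -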
If your model contains more than one part, you can use parallel cores in SimLab.
0