Hi all,
Just got through this video, which was incredibly helpful in establishing a procedure for using DT/NODA/CST.
I am currently working on something for which I hope to model approximately 20 seconds of simulation time, possibly more. The video outlines the following process:
1. Get the time step without any mass scaling.
2. Apply DT/NODA/CST with a Tsca of 0.9 and an imposed time step of 1.2x the original time step (engine card sketched after this list).
3. Check that the mass error DM/M stays below 0.02.
4. Keep increasing the imposed time step by 1.2x until DM/M approaches 0.02 (for safety, stop at 0.016).
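For anyone following along, here is roughly what the step-2 engine card looks like as I understand it. The values are placeholders, not the ones from my model, and I may not have the field layout exactly right:

# Radioss engine file sketch (placeholder values)
# /DT/NODA/CST adds nodal mass so the time step never drops below dTmin
#          Tsca             dTmin
/DT/NODA/CST
            0.9            1.2E-6

Here Tsca is the scale factor on the critical time step and dTmin is the imposed (minimum) time step, which in step 2 would be set to 1.2x the unscaled value from step 1.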
My question is this: the 0.016 value is used because mass error tends to increase over the course of the simulation, and you don't know in advance by how much. Most explicit simulations I've seen run for about 0.05 seconds, maybe a bit longer for crash analysis. Has anyone run a simulation on the order of 20 seconds and seen the mass error grow by much more than 0.004 from start to finish? My simulations take hours, and I'd rather not wait until the end just to find out how much error I've accumulated.
Tl;dr: Does simulation length affect how much margin for error you should allow? In other words, does mass error grow roughly linearly over the simulation?
EDIT: I was also wondering: in the attached files I'm running a simulation with an appropriately selected AMS, and it doesn't seem to be affecting the time step or the simulation speed in any meaningful way. I even deliberately put in a ridiculously large imposed time step of 200 seconds, and then 0 seconds, to try to break it, and neither changed a thing. Can anyone help me with this? (The card I mean is sketched below.)
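To clarify what I mean by the imposed time step, the AMS engine card I'm describing is something like the following. These are illustrative values only (the 200 is just the deliberately huge test value I mentioned), not a copy of my actual deck, and again I may have the layout slightly off:

# Radioss engine file sketch (illustrative values only)
# /DT/AMS: advanced mass scaling toward the imposed time step dTmin
# (as I understand it, this needs a matching /AMS card in the starter deck)
#          Tsca             dTmin
/DT/AMS
            0.9             200.0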
Thanks