(Solved?) Scripting in HyperStudy to simplify study setup / Scripting DOEs (including output responses from h3d and T01 files)
Hello,
I am currently using DOEs in HyperStudy to run a number of simulations with a set of varying parameters. I use HyperMesh to create the .hm model and Radioss as the solver. Because one model takes a long time to run, my process for setting up a study in HS usually goes: import the .hm model; define input variables; shorten the runtime to something just long enough to create a .h3d file and some TH data; then, once the "run definition" step is complete, define the output responses using those files (.h3d and T01). Once the setup is complete, I change the runtime back to the actual time I'm interested in simulating and then create and run the DOE.
This process is not ideal for setting up and running DOEs of multiple different .hm files: it is quite tedious, and the time spent waiting for things to load/open adds up quickly. I would like to simplify the process, probably by writing a script (or scripts), though HS does not appear to have the same .tcl scripting advantages that HM does (please correct me if that observation is wrong).
I have looked at the command1.tcl file created by past HS DOEs I have run, and I think I could use the commands present (*feoutputwithdata, *feoutputmergeincludefiles, *setvalue parameters, etc.), plus some additional ones (*writeh3dwithoptions, etc.), to write my own script that generates a .tcl file doing the same things an HS DOE does (at least in terms of running the models with different parameter values). However, HS also has the advantage of calculating and summarizing the output response results across all runs, and allowing export to .xlsx. I post-process this data outside of HS, but my script (or a secondary script) would need to grab the data associated with specific output responses at some point.
Thus, my first question is: is there any way to streamline creating multiple DOEs in HS? They all need to be separate studies (I am using different geometries in the .hm models), and the runtime in the model needs to be modified between the "run definition" setup and the actual DOE runs.
Then, if this is not possible (or doesn't seem to have a reasonable solution), does it seem feasible to write my own .tcl script (using commands from the command1.tcl created by HyperStudy) to mimic what HS does for DOEs? And is there a way to automate collecting data from the .h3d and T01 files for specific output responses? I am thinking along the lines of a script that loads the .h3d and T01 files in HyperView, extracts and saves the data somewhere, closes the files, and repeats. With what I know of .tcl scripts and my basic knowledge of Python, I think it's probably possible to put something together, but maybe you can see something I have neglected that makes this method infeasible (in which case I should just stick with the original, tedious method).
There’s quite a bit here, so I appreciate your time and any suggestions or help you can provide, even if for just a portion of the problem I am attempting to tackle.
Please do let me know if anything requires any clarification.
Thank you in advance.
Answers
-
For anyone stumbling across this question in the future: I ended up concluding that HS was not flexible enough for what I was hoping to do, and instead wrote my own .py, .tcl, .cmf, and .oml scripts to mimic the functionality of HS while adding some extra benefits (e.g. reduced setup time, since I don't need to wait for the "run definition" to complete).
I won't post what I ended up writing, since it was created for my own specific purposes and would be overwhelming to anyone reading it for the first time, but here is a brief overview of how I went about solving the problem.
> I had a "master.py" script which wrote a .tcl script for every .stp filename within a specific folder, and added a line to a .cmf file for each of those .tcl scripts (a sketch of this step follows the list)
> I could then run the .cmf file, which ran all my .tcl scripts
>> These scripts would open a "base" .hm model, import geometry from the .stp file, mesh, create node sets, etc., and then save each result as a new .hm file. I made use of commands such as *geomimport, *createmark & hm_createmark, *tetmesh, eval & set, *setvalue, etc., and got the general layout by checking the command1.tcl created and updated by HM after any action.
> I had a "folders.py" script which created a consistent folder structure in a specific location and then wrote new .tcl scripts for each .hm file from the previous step
> These new .tcl scripts were run in HM and modified a set of parameters before exporting the 0000.rad and 0001.rad files, then repeated for a new set of parameters - this step mimics the DOE of HS, in that the .tcl script runs through a changing set of parameters and creates new 0000.rad and 0001.rad files for each set
>> For the commands in the .tcl for modifying parameters, I found *setvalue parameters to be useful. And for exporting the .rad files, *feoutputwithdata was useful.
> I could then run all of the 0000.rad files in Radioss using the Compute Console.
> Once I had .h3d and T01 files for each unique 0000.rad file, I ran a "calculate.oml" script to post-process the data in Compose. This script linked to .py functions I wrote and output numerical data into a "master.csv" file; it also output vectors of data into .csv files uniquely named to match each 0000.rad file (the output pattern is sketched below, after the summary). The ability to output this vector data from Compose is another reason I didn't end up using HS - I could not figure out how to output a vector of data (say, the interface force values over time).
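To give a concrete flavour of the "master.py" step, here is a heavily stripped-down sketch in Python. All paths and the TCL template content are placeholders (not my actual script), and the *evaltclscript line is how I understand a .cmf command file can call a TCL script - verify against your HM version:

    # master.py (sketch): write one .tcl per .stp file, plus a .cmf that runs them all.
    # Paths and template content are placeholders, not the original script.
    from pathlib import Path

    GEOM_DIR = Path("C:/work/geometries")     # folder of .stp files (placeholder)
    SCRIPT_DIR = Path("C:/work/tcl_scripts")  # where the generated .tcl files go

    # Per-geometry TCL skeleton; the real version opened the "base" .hm model,
    # then used *geomimport, *createmark/*tetmesh, node-set creation, etc.
    TCL_TEMPLATE = """\
    # open base.hm, *geomimport "{stp}", mesh, create node sets...
    # ...then save the result as "{out_hm}"
    """

    SCRIPT_DIR.mkdir(parents=True, exist_ok=True)
    cmf_lines = []
    for stp in sorted(GEOM_DIR.glob("*.stp")):
        tcl_path = SCRIPT_DIR / (stp.stem + ".tcl")
        out_hm = SCRIPT_DIR / (stp.stem + ".hm")
        tcl_path.write_text(TCL_TEMPLATE.format(stp=stp.as_posix(), out_hm=out_hm.as_posix()))
        # one line per script in the .cmf (command-file syntax to be verified):
        cmf_lines.append('*evaltclscript("' + tcl_path.as_posix() + '",0)')

    (SCRIPT_DIR / "run_all.cmf").write_text("\n".join(cmf_lines) + "\n")
    print("Wrote", len(cmf_lines), "TCL scripts and run_all.cmf")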
It was a bit time-consuming to write the scripts myself, but it ended up streamlining the process - once I had created a "base" HM model and figured out the .tcl commands necessary to import geometry, mesh, etc., I was able to turn ~60 geometry .stp files into their own "DOEs" with 16 parameter sets quite quickly. And the Compose .oml script printed the main results into a single .csv file, with vector data in unique .csv files (whereas in HS I would've needed to create a study, wait for "run definition", and output results to .csv/.xlsx for each geometry, with no option to save vector data that I'm aware of).
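And for the output pattern of the "calculate.oml" step, here is a rough Python equivalent of what the script did with results. The extract_results function below is a dummy stand-in - the real reading of the .h3d/T01 data happened in Compose:

    # Sketch of the post-processing output pattern: scalar results go to one
    # master.csv (one row per run); vector results (e.g. interface force over
    # time) go to a .csv named after the run.
    import csv
    from pathlib import Path

    RUN_DIR = Path("C:/work/runs")  # one subfolder per 0000.rad run (placeholder)

    def extract_results(run_dir):
        # Dummy stand-in: the real version read the .h3d/T01 files in Compose.
        time = [0.0, 0.001, 0.002]
        force = [0.0, 150.0, 90.0]
        return {"peak_force": max(force)}, (time, force)

    with open(RUN_DIR / "master.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "peak_force"])  # illustrative scalar field
        for run in sorted(d for d in RUN_DIR.iterdir() if d.is_dir()):
            scalars, (time, force) = extract_results(run)
            writer.writerow([run.name, scalars["peak_force"]])
            # vector data in a uniquely named .csv, matching the run
            with open(RUN_DIR / (run.name + "_force.csv"), "w", newline="") as vf:
                vw = csv.writer(vf)
                vw.writerow(["time", "interface_force"])
                vw.writerows(zip(time, force))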
-
Hi Autumn,
Your solution sounds impressive. There is certainly more flexibility in creating your own scripts and processes, and maybe that was the best solution.
I think this would have been possible in HyperStudy, but it probably would have required some creativity. HyperStudy outputs/data sources can reference Compose functions, so I believe you could leverage that to write any necessary vectors to a csv. Then, regarding creating multiple DOEs, you can use HyperStudy to control another HyperStudy via batch mode - I have used this in the past. So if you create a base DOE in HyperStudy, then control, modify, and launch it via a higher-level HyperStudy, that may have worked (a rough sketch of the idea follows).
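I don't remember the exact batch invocation off-hand, so treat this as a shape only - the executable name, location, and arguments below are placeholders to check against the HyperStudy batch-mode documentation:

    # Sketch of the nested-study idea: an outer process (or an outer study whose
    # "solver" is this script) launches base studies in batch mode. The
    # executable path and arguments are placeholders - check the HyperStudy docs.
    import subprocess

    HSTBATCH = r"C:\Program Files\Altair\hst\hstbatch.exe"  # placeholder path
    studies = ["geom_A.hstudy", "geom_B.hstudy"]            # base studies to run

    for study in studies:
        # the real invocation will need the documented flags for which
        # approach (e.g. the DOE) to evaluate
        subprocess.run([HSTBATCH, study], check=True)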
-
bocaj22 said:
Your solution sounds impressive. [...] So if you create a base DOE in HyperStudy, then control, modify, and launch it via a higher-level HyperStudy, that may have worked.
It's been a while since I was debating HS vs. scripting, but I don't think I had thought of getting HS to output vectors to a .csv file by referencing Compose functions - though that is pretty much the way I went within Compose itself, haha.
I'm not familiar with HS via batch mode (though I had heard of it way back when), and it does sound like it would've helped run multiple DOEs. Two additional problems remained within a single DOE, though: (1) I needed to wait for the "run definition" to complete before I could set outputs/result data, and (2) I didn't necessarily want each run of the DOE to have the same runtime. For issue #1, I had initially been setting the runtime quite short, letting the run definition complete, and then making the runtime quite a bit longer for the actual DOE (which was quite manual). For issue #2, there didn't seem to be a solution, so every run used the same (longer) runtime, which wasted time on some runs: the velocity changed significantly between DOE runs, which lengthened or shortened the impact event, and a 1 m/s impact doesn't need the same runtime as a 100 m/s one. With scripting, as you mentioned, I gained the flexibility I was looking for - I didn't need a preliminary "run definition" to define outputs/results, and within a single DOE each run could have a unique runtime (and DT for ANIM and TH) based on the velocity of the impactor.
I'm curious, though - I had originally written three Python functions that would each (1) gather a bunch of vector data, (2) perform some math, and (3) output one of three values, say A, B, or C, depending on the function. Each function had the same first two steps; only the final step differed. In HS, this meant I referenced each Python function once to get the values A, B, and C, but steps (1) and (2) - which took quite some processing time - were really being repeated three times. Since HS only wants a single value returned, I couldn't figure out how to do steps (1) and (2) only once - perhaps there was a way? Maybe outputting the three values to a csv, then reading one of the three cells depending on the result desired (something like the sketch below is what I had in mind). As you say, the HS route does seem possible, though it would require some creativity and maybe a bit of a roundabout approach. Knowing how long it took to write and troubleshoot my scripts, I'm not sure which would've been the better approach - the answer is probably a bit relative.
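For concreteness, this is roughly what I had in mind for the csv workaround - all names made up for illustration:

    # Compute-once workaround: the expensive shared steps (1) and (2) run once
    # and cache A, B, C to a csv; each HS output response then calls a cheap
    # accessor that reads a single cell. Illustrative only.
    import csv, os

    CACHE = "abc_cache.csv"

    def _compute_abc():
        # steps (1) and (2): gather vector data, do the shared math
        data = [1.0, 4.0, 2.5]  # dummy numbers standing in for real results
        a, b, c = max(data), min(data), sum(data) / len(data)
        with open(CACHE, "w", newline="") as f:
            csv.writer(f).writerow([a, b, c])

    def _read_cell(index):
        if not os.path.exists(CACHE):  # expensive part runs only once per run
            _compute_abc()
        with open(CACHE, newline="") as f:
            return float(next(csv.reader(f))[index])

    # one thin accessor per HS output response:
    def get_A(): return _read_cell(0)
    def get_B(): return _read_cell(1)
    def get_C(): return _read_cell(2)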
-
Autumn said:
It's been a while since I was debating HS vs. scripting [...] the HS route does seem possible, though it would require some creativity and maybe a bit of a roundabout approach.
As long as the Radioss outputs are the same across models, you should be able to keep the HyperStudy outputs the same when updating/changing the model - unless I misunderstood.
Regarding the runtime, this is another place where you might need to be creative. You could create a tpl version of your Radioss engine file and make the runtime a function of the velocity variable in HyperStudy (sketched below). I like to use HyperStudy whenever possible, but have it reference scripts where necessary. This could be a combination of Compose functions called within HyperStudy or tcl scripts run via HyperMesh in batch mode. Of course, you can also run any script via the model definition by having the "solver" be python or tcl.
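To make the tpl idea concrete: the engine file is plain text, so the stop time on the /RUN card can become a templated field. The excerpt below is only my guess at the shape - the exact templex field syntax, and the expression tying the stop time to the velocity variable, should be checked against the HyperStudy documentation:

    # excerpt of a parameterized Radioss engine file (field syntax assumed)
    /RUN/impact/1
    {tstop}

In HyperStudy, tstop could then be defined so it scales with the velocity input, giving faster impacts a shorter runtime.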
Another thing I forgot to mention: "data sources" can be vectors and can call Compose functions, and a new data source can reference other data sources as well. Only the inputs and output responses have to be scalars. So I would stick with doing the math within the data sources in your scenario.
-
bocaj22 said:
As long as the Radioss outputs are the same across models, you should be able to keep the HyperStudy outputs the same [...] Only the inputs and output responses have to be scalars.
Hmm, the Radioss outputs don't change, but I was under the impression that I would be creating a new HS study for each model (each model is essentially the same, with minor geometry changes that alter the solid elements/nodes belonging to a specific part), and that this would require a new "run definition" for each. I think it's possible, within the same study, to create a DOE whose setup is based on an already-run run definition, but I'm not familiar enough with the program to know of any issues/limitations in creating a new DOE and changing the HM model it references.
I've not heard of tpl, so I'll have to go look into it - perhaps it'll be useful for something down the road. Thanks for the tip! (I had originally hoped to make the runtime in the ENG_RUN card a parameter in HM to solve this issue, but alas, the program wouldn't permit it.)
> Of course, you can also run any script via the model definition by having the "solver" be python or tcl.
> Another thing I forgot to mention was that "data sources" can be vectors and call compose functions. [...] So I would stick with doing the math within the data sources in your scenario.
These are both useful tips as well - thank you! Creating a data source that calls a Compose function to run steps (1) and (2) as outlined above and output all three values A, B, and C, and then grabbing just the one I wanted for a given output, does seem like it would've been a solution to having three Python functions that all performed essentially the same work.
Now that I am more familiar with Compose and scripting, it would probably make sense to use a combination of HS and scripting. A number of months ago, the learning curve for Compose was quite daunting, and I had been hoping HS alone (perhaps with Python) would let me avoid going down that path (and then learning Compose was enough of an effort that I avoided also trying to figure out how to combine it with HS).
-
Autumn said:
Hmm, the Radioss outputs don't change, but I was under the impression that I would be creating a new HS study for each model [...] it would probably make sense to use a combination of HS and scripting.
Glad it was helpful. It sounds like your approach was a useful exercise regardless. I agree that Compose+Hyperstudy is often the most efficient approach.
Using tpl files in HyperStudy can be very powerful, because it can make any portion of a text file a variable within your study. My usual approach for running Radioss in HyperStudy, for example, is to create a tpl of the solver deck (the .rad file, which is just an ASCII text file) and then assign variables within it. There should be examples in the HyperStudy documentation.
-
bocaj22 said:
Glad it was helpful. [...] Using tpl files in HyperStudy can be very powerful [...] There should be examples in the HyperStudy documentation.
I'll go take a look at the HS documentation for this - thanks!