The H2O examples from the link have their seed set to 1234. Set the seed to the same value in RapidMiner. You will want to keep your layers and epochs the same and be 100% certain that you are splitting your data in exactly the same way.
If that does not fix your problem, then it is safe to assume that there are differences under the hood between versions. Do you have to use H2O?
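To illustrate why the seed matters for the split: with a fixed seed, the same shuffle and cut are reproduced on every run, so both tools see identical train/test rows. This is a generic sketch using Python's standard library, not the actual H2O or RapidMiner splitting code:

```python
import random

def split_data(rows, train_ratio=0.7, seed=1234):
    """Shuffle with a fixed seed so the train/test split is reproducible."""
    rng = random.Random(seed)      # fixed seed -> deterministic shuffle
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Two runs with the same seed produce the identical split.
rows = list(range(100))
train_a, test_a = split_data(rows, seed=1234)
train_b, test_b = split_data(rows, seed=1234)
print(train_a == train_b and test_a == test_b)  # True
```

If the two tools still disagree after fixing seed, layers, epochs, and the split, the remaining difference is in the library itself.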
Hello there,
I think the main reason you're experiencing such a difference between the two models is that they use very different versions of H2O. Right now, models in RapidMiner that use H2O under the hood (Deep Learning being one of them) run with a dated version of the library. On another note, H2O does not prioritize compatibility of models between releases, so it is very much expected that models built with two different versions of the library produce different results.
The RapidMiner engineering team is currently working on upgrading the library to the most recent stable version, so you can expect that improvement soon. But these upgrades are not continuous, so identical behavior can only be expected until the next stable version of the H2O library is released.
What you could do when comparing the two is to use the exact same H2O library version with Python as the one used within RapidMiner.
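In practice that means pinning the Python package to the exact release that the RapidMiner extension bundles. A hedged sketch; `3.x.y` below is a placeholder, not a real version number. Substitute the version listed in your RapidMiner H2O extension's release notes:

```shell
# Install exactly the H2O version that RapidMiner bundles
# ("3.x.y" is a placeholder -- look it up in the extension docs).
pip install h2o==3.x.y

# Confirm the Python runtime now matches:
python -c "import h2o; print(h2o.__version__)"
```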
Hope this helps,
Tamas
Dear everyone.
I am really sorry.
I want to delete this post, but there is no way to delete it.
There is a serious misunderstanding.
I made an important mistake in calculating MAPE.
I said above that the h2o operator in RapidMiner is excellent, which turned out to be wrong;
that is, the h2o operator in RapidMiner performs worse than the TensorFlow deep learning model, which I have now checked.
Sorry for all the misunderstanding and confusion.
Also, thank you all for your comments above.
So, as noted above, the h2o operator in RapidMiner uses a different library version than the one in Python.
There were also good comments, advice, and knowledge shared by all of you.
Thank you for those and have a nice day.
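Since the confusion came from a MAPE calculation mistake, here is a minimal reference implementation for anyone comparing the two models. The usual definition divides each error by the *actual* value; dividing by the prediction instead silently changes which model looks better:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    Note: the denominator is the ACTUAL value; a common mistake is
    dividing by the predicted value instead.
    """
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted must have the same length")
    return 100.0 * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted)
    ) / len(actual)

print(mape([100, 400], [150, 300]))  # 37.5
```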
Thanks
to hughesfleming68:
have a nice weekend and see you~
It is common to see a drop in performance when you take away the randomness. The other alternative would be to average the results over multiple runs. A lot of these problems are data dependent but common to all deep learning operators.
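Averaging over several seeded runs can be sketched like this. `train_and_score` is a hypothetical stand-in (here simulated with a random offset) for whatever H2O or TensorFlow training-and-evaluation step you actually run:

```python
import random
import statistics

def train_and_score(seed):
    """Hypothetical placeholder for training a model with the given seed
    and returning its error metric; replace with your real pipeline."""
    rng = random.Random(seed)
    return 5.0 + rng.uniform(-0.5, 0.5)  # simulated MAPE-like score

# Report the mean and spread over several seeds instead of one run.
seeds = [1234, 2345, 3456, 4567, 5678]
scores = [train_and_score(s) for s in seeds]
print(f"mean={statistics.mean(scores):.3f}  stdev={statistics.stdev(scores):.3f}")
```

Reporting the mean plus a standard deviation makes it obvious whether a gap between two tools is real or just run-to-run noise.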