RapidMiner Server: memory management for repository access
Hi all, hi Marco!
We were wondering how RapidMiner Server handles reading and writing very large objects from and to the Server Repository.
Say we write an ExampleSet or a big model (e.g. a complex RandomForest model) of 2 GB to the Server repository. Does the Server cache the complete object in memory, or does it stream it to the database? And what happens when we read it back?
In other words: if the memory of the server is restricted to 2 GB, can we still reliably store bigger objects in the repository? (Whether this is good practice is another question, but sometimes you have no choice...)
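Just to make clear what I mean by "caching vs. streaming", here is a generic Java sketch (not RapidMiner code, purely illustrative) of the two behaviors: the first variant materializes the whole serialized object on the heap before writing it out, while the second pushes the bytes through a small fixed-size buffer, so heap usage stays roughly constant no matter how big the object is.

```java
import java.io.*;

public class StreamVsBuffer {

    // Buffered variant: the entire serialized object is held on the heap
    // before anything reaches the target, so a 2 GB object needs at least
    // 2 GB of heap (plus a second copy for the byte array).
    static void writeBuffered(Serializable obj, OutputStream target) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
            oos.writeObject(obj);
        }
        target.write(buffer.toByteArray());
    }

    // Streaming variant: bytes flow through a small fixed-size chunk,
    // so memory usage is independent of the object's total size.
    static void writeStreamed(InputStream serializedObject, OutputStream target) throws IOException {
        byte[] chunk = new byte[8192];
        int read;
        while ((read = serializedObject.read(chunk)) != -1) {
            target.write(chunk, 0, read);
        }
    }
}
```

So the question is essentially: which of these two patterns does the Server follow when it persists and loads repository entries?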
Also, does accessing the repository count against the API limit of the free RapidMiner Server, or does the API limit only apply to processes that are exposed as web services?
Cheers,
Marius