Our data set has more than 100,000 records. I reduced the sample size to 30,000; if I reduce it further, to around 3,000, the sample becomes too small to be representative for model training.
I have tried running the full data set in Python, applying two or three different algorithms, and it produces results successfully. When I run outlier detection models in Python, I do not hit an out-of-memory issue, but RapidMiner runs out of memory even on a relatively smaller data set. Why is Auto Model's Outlier Detection failing on mid-size data sets?
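For reference, here is a minimal sketch of the kind of Python run that succeeds on the full data. The original post does not name the algorithms used, so scikit-learn's `IsolationForest` is an assumption; the synthetic data merely stands in for the real 100,000-record set:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for the ~100,000-record data set (10 features)
rng = np.random.default_rng(42)
X = rng.normal(size=(100_000, 10))

# IsolationForest handles data of this size in modest memory because it
# builds random trees rather than computing pairwise distances, which
# some density-based outlier methods (e.g. LOF with large k) require.
model = IsolationForest(n_estimators=100, random_state=42)
labels = model.fit_predict(X)  # -1 = outlier, 1 = inlier

print((labels == -1).sum(), "points flagged as outliers")
```

A distance- or density-based detector that materializes a neighbor or distance matrix can need memory on the order of n² values, which may explain why one tool runs out of memory on the same data while another does not.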