Hi there,
I'm going to analyze a big data set: roughly 13,000,000 features on about 1,200 samples (positive and negative). Put together as one big matrix, that comes to about 16,000,000,000 values, which as 4-byte floats is roughly 64 GB. From my point of view, that is a lot.

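To double-check the figures above, here is the rough arithmetic as a minimal Python sketch (the shapes and the 4-byte float width are just my assumptions from the numbers quoted; the actual footprint depends on how RapidMiner stores values internally):

```python
# Back-of-envelope size of the dense matrix described above.
n_features = 13_000_000
n_samples = 1_200
bytes_per_value = 4  # assuming 32-bit floats

n_values = n_features * n_samples            # ~15.6 billion values
size_gb = n_values * bytes_per_value / 1e9   # ~62 GB as a dense float32 matrix
print(f"{n_values:,} values ~ {size_gb:.0f} GB")
```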
I want to throw some machine learning algorithms at that (to find any classification into +/-). How does RapidMiner behave on data this big? Can it handle it without grinding to a halt? I'd just like to know before I start working my way into the software.
Does anyone have experience with data sets that are... let's say "much larger than the RAM"?
Thanks