Hi,
I tried a log10 transformation on my right-skewed dataset and trained/tested again with LibSVM. The results staggered me, as it is quite a difficult dataset: performance was 2.5-3 percentage points better than with the untransformed dataset (from 84-85% to 87.6%). I also standardized the data beforehand.
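In case it helps, this is roughly the setup I mean, sketched with scikit-learn's SVC (a LibSVM wrapper); the synthetic X and y are just stand-ins for my actual data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: right-skewed, strictly positive features
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 10))
y = (X[:, 0] * X[:, 1] > np.median(X[:, 0] * X[:, 1])).astype(int)

# log10 transform (requires strictly positive features)
X_log = np.log10(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_log, y, random_state=0)

# Standardize after the transform, fit only on the training split
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
print(clf.score(scaler.transform(X_te), y_te))
```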
How can that be? I mean, an SVM doesn't make any distributional assumptions the way a GLM does, or does it?
It would just correspond to a different kernel function, right? I used the RBF kernel, so on the transformed data it would effectively be an RBF kernel with ||log(x) - log(x*)||^2 in place of ||x - x*||^2 in the numerator of the exponent, i.e. K(x, x*) = exp(-gamma * ||log(x) - log(x*)||^2), right?
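To sanity-check that claim, here's a tiny numerical comparison (using scikit-learn's rbf_kernel; the gamma value is arbitrary) showing that the RBF Gram matrix on log10-transformed inputs coincides with the Gram matrix of that "log-RBF" kernel evaluated on the raw inputs:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def log_rbf(A, B, gamma):
    # Kernel on the raw inputs: exp(-gamma * ||log10(a) - log10(b)||^2)
    La, Lb = np.log10(A), np.log10(B)
    sq_dists = ((La[:, None, :] - Lb[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(1)
X = rng.lognormal(size=(5, 3))  # strictly positive inputs
gamma = 0.5

K_transformed = rbf_kernel(np.log10(X), gamma=gamma)  # RBF on log10(x)
K_raw = log_rbf(X, X, gamma)                          # "log-RBF" on raw x

print(np.allclose(K_transformed, K_raw))  # True: the two views coincide
```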