Evaluating Numeric Prediction
mksaad
New Altair Community Member
Hello all,
Please consider the following:
=== Summary for Numeric Class Label ===
Correlation coefficient 0.961
Mean absolute error 1.0733
Root mean squared error 1.4616
Relative absolute error 22.909 %
Root relative squared error 27.9396 %
_________________________________
How can I judge my model? Is it good? The correlation coefficient reflects good performance, but the relative absolute and root relative squared errors are high!
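For reference, as far as I understand these figures are computed roughly as in the sketch below (assuming paired arrays of actual and predicted label values; the two relative errors compare the model against simply predicting the mean of the label):

```python
import numpy as np

def regression_summary(actual, predicted):
    """Rough sketch of the usual summary statistics for a numeric label."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mean_actual = actual.mean()

    mae = np.abs(predicted - actual).mean()                 # mean absolute error
    rmse = np.sqrt(((predicted - actual) ** 2).mean())      # root mean squared error
    # Relative errors: model error divided by the error of always predicting the mean
    rae = np.abs(predicted - actual).sum() / np.abs(mean_actual - actual).sum()
    rrse = np.sqrt(((predicted - actual) ** 2).sum() / ((mean_actual - actual) ** 2).sum())
    corr = np.corrcoef(actual, predicted)[0, 1]             # correlation coefficient

    return {"correlation": corr, "mae": mae, "rmse": rmse,
            "rae %": 100 * rae, "rrse %": 100 * rrse}
```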
Best Regards,
--
Motaz K. Saad
http://motaz.saad.googlepages.com
Answers
Hi Motaz,
these do not seem to be overwhelmingly good results. Try optimizing your parameters, generating features, or changing the learning algorithm; this could help.
If you already did, it is perhaps as good as it can be. Perhaps your attributes don't capture the real dependency between label and examples.
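Just to illustrate what I mean by optimizing parameters, here is a rough, tool-independent sketch; the fit_predict function and the parameter names you would pass in are made up for illustration:

```python
import numpy as np
from itertools import product

def rrse(actual, predicted):
    """Root relative squared error: model error relative to predicting the mean."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    baseline = ((actual.mean() - actual) ** 2).sum()
    return np.sqrt(((predicted - actual) ** 2).sum() / baseline)

def grid_search(fit_predict, train, test_features, test_labels, param_grid):
    """Try every parameter combination, keep the one with the lowest RRSE."""
    best_score, best_params = float("inf"), None
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid, values))
        score = rrse(test_labels, fit_predict(train, test_features, **params))
        if score < best_score:
            best_score, best_params = score, params
    return best_params, best_score

# e.g. grid_search(my_learner, train, X_test, y_test,
#                  {"depth": [2, 4, 8], "learning_rate": [0.01, 0.1]})
```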
Greetings,
Sebastian
Hi Sebastian,
When can I say my model is good? For example:
when the mean absolute error and root mean squared error are less than 1, AND
the relative absolute error and root relative squared error are less than 10%, or 5%, or ...
Does the number of instances affect the error measure values?
I hope my question is clear!
Warm Greetings
--
Motaz
Hi Motaz,
let me mention two points:
1. Absolute errors can only be interpreted meaningfully if you have an idea of the value range of your label. This becomes clear when you imagine a label that has a mean value of 1. A model producing an error of 1 in this setting could only be called disastrous. If the label has a mean value of, say, 10,000, an absolute error of 1 would be quite good. Hence, you cannot judge an absolute error without explicitly knowing the value range of the real label. That is in fact the reason for using relative errors (the sketch after this list makes the point concrete).
2. Even with relative error functions you have to take a lot of aspects into account, e.g. the complexity of your learning problem, the capability of the learner to solve problems of that kind, etc. Hence, there is no dogmatic threshold on the relative error you have to achieve in order to evaluate a model as good or bad. Additionally, it of course depends on your evaluation procedure what error should or might be acceptable.
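To make point 1 concrete, here is a rough sketch (using made-up labels, not your data) showing that the same absolute error of about 1 per example can correspond to a huge or a tiny relative error, depending only on the scale of the label:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_absolute_error(actual, predicted):
    """Model's absolute error relative to always predicting the label mean."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.abs(predicted - actual).sum() / np.abs(actual.mean() - actual).sum()

# Two hypothetical labels with the same relative spread around their mean,
# both predicted with an absolute error of exactly 1 on every example.
for mean in (1.0, 10_000.0):
    actual = rng.normal(loc=mean, scale=0.3 * mean, size=1000)
    predicted = actual + rng.choice([-1.0, 1.0], size=actual.size)
    mae = np.abs(predicted - actual).mean()
    rae = relative_absolute_error(actual, predicted)
    print(f"label mean {mean:>8}: MAE = {mae:.3f}, relative absolute error = {100 * rae:.1f}%")
```

The mean absolute error is 1 in both cases, yet it is catastrophic for the small-scale label and negligible for the large-scale one; the relative error reflects exactly this difference.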
Regards,
Tobias
Hello Tobias.
Thanks for your valuable points.
I am working on a time series dataset of temperature records covering 2 years (726 instances). I know that I have an insufficient dataset. Below is my class label information:
Minimum 6.254
Maximum 27.4
Mean 17.646
StdDev 5.235
I want to consider all error measures:
Can I say that 1.0733 and 1.4616 for mean absolute error and root mean squared error are good results if the mean (average) label value is 17.6?
What is an acceptable mean absolute error and root mean squared error for a numeric class with mean 17.6?
If I want to consider relative errors, what are the acceptable values?
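As far as I can tell, the relative errors already relate these absolute errors to the spread of my label; a quick check using nothing but the figures quoted above:

```python
# Relating the reported errors to the label statistics (figures from this thread).
rmse = 1.4616        # reported root mean squared error
std_label = 5.235    # reported StdDev of the temperature label

# The root relative squared error compares the model against always predicting
# the label mean, so it is roughly RMSE divided by the label's standard deviation.
print(f"RMSE / StdDev = {rmse / std_label:.1%}")   # about 27.9%, close to the reported 27.94%
```

(The small gap to the reported 27.9396 % is presumably just the sample-versus-population correction in the standard deviation.)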
Warm Greetings,
--
Motaz