Classification by Regression Operator vs. Polynomial By Binomial Classification

mbuko
Hi,
I have a multi-class approach using an SVM (mySVM) and an operator to enable multi-class classification. The problem is that the two possible operators, "Classification by Regression" and "Polynomial By Binomial Classification", lead to different kinds of results for the confidence values:
1) Classification by Regression: the confidence values lie in (-∞, 1]. This seems to be the signed distance to the hyperplane. Is this correct? Why are there no values higher than 1? (A value of 1 would mean the example lies on the edge of the margin. Might it depend on the kernel function?)
2) Polynomial By Binomial Classification: the confidence values lie in [0, 1]. Is this some kind of probability? What is its definition?
Unfortunately, I cannot find any hint of how the confidence values are defined (with regard to the operators used or the SVM implementation).
In order to use RapidMiner and its output, I need a clear understanding of the parameters, the confidence values, and how they depend on the RapidMiner operators.
I would be pleased if you could help me with these issues!
Best regards,
Mark
PS: I have already opened a similar thread in another category (https://rapid-i.com/rapidforum/index.php/topic,9418.msg31536.html)
Answers
-
Confidences should usually be in [0, 1], so the case in 1) seems odd to me. I have never used Classification by Regression, though.
~Martin
-
@Martin, thank you for your reply!
Unfortunately, a clear description and definition of the outputs and operators is missing, and I need one in order to work in a scientifically sound way. I am looking forward to an official answer from RapidMiner, but so far there has been no reaction.
-
Hi,
I guess there won't be a more official answer than my postings, unless you ask at support.rapidminer.com, but that requires a licence. I am employed at RapidMiner; that makes it kind of official?
What you can do is have a look at the source. The Classification by Regression operator can be found at:
https://github.com/rapidminer/rapidminer-studio/blob/master/src/main/java/com/rapidminer/operator/learner/meta/ClassificationByRegression.java
There we find the comment quoted below. I also checked the attached example process: apparently confidence(Mine) is in [0, 1], but confidence(Rock) is not. For some reason only one confidence is set "correctly". I will look deeper into the code once I have more time.
/**
* For a classified dataset (with possibly more than two classes) builds a classifier using a
* regression method which is specified by the inner operator. For each class {@rapidminer.math i} a
* regression model is trained after setting the label to {@rapidminer.math +1} if the label equals
* {@rapidminer.math i} and to {@rapidminer.math -1} if it is not. Then the regression models are
* combined into a classification model. In order to determine the prediction for an unlabeled
* example, all models are applied and the class belonging to the regression model which predicts
* the greatest value is chosen.
*
* @author Ingo Mierswa, Simon Fischer
*/
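To make that concrete, here is a minimal sketch of the scheme the comment describes: one-vs-rest with a regression learner and an argmax over the per-class predictions. The interfaces and class names below are made up for illustration; this is not the actual RapidMiner code.

// Illustrative sketch only; RegressionLearner and RegressionModel are invented
// interfaces, not RapidMiner classes.
interface RegressionModel {
    double predict(double[] features);
}

interface RegressionLearner {
    // trains on numeric labels, here +1 / -1
    RegressionModel train(double[][] features, double[] labels);
}

class ClassificationByRegressionSketch {
    private final int numClasses;
    private RegressionModel[] models; // one regression model per class

    ClassificationByRegressionSketch(int numClasses) {
        this.numClasses = numClasses;
    }

    void train(RegressionLearner learner, double[][] features, int[] classLabels) {
        models = new RegressionModel[numClasses];
        for (int c = 0; c < numClasses; c++) {
            // relabel: +1 if the example belongs to class c, -1 otherwise
            double[] binaryLabels = new double[classLabels.length];
            for (int i = 0; i < classLabels.length; i++) {
                binaryLabels[i] = (classLabels[i] == c) ? 1.0 : -1.0;
            }
            models[c] = learner.train(features, binaryLabels);
        }
    }

    int predict(double[] example) {
        // the class whose regression model predicts the greatest value wins
        int bestClass = 0;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < numClasses; c++) {
            double value = models[c].predict(example); // e.g. a signed margin from mySVM
            if (value > bestValue) {
                bestValue = value;
                bestClass = c;
            }
        }
        return bestClass;
    }
}

Note that nothing here rescales the raw regression outputs, which would be consistent with per-class confidences that are not confined to [0, 1].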
~Martin
-
@Martin, thank you for your help!
The comment is also written in the documentation. I have the same problem with the different confidence outputs: only one of them is set "correctly" (and only with a value of 1?).
What is the definition of the confidence value when it is in [0, 1]? (I wanted to avoid looking into the implementations, but it seems to be necessary.)
-
Usually this is a measure of how much the algorithm trusts its own prediction: the higher the value, the more likely the algorithm is to be right.
The value is calculated differently for every algorithm. For a k-NN it is the fraction of neighbours with this class (in the unweighted case). For an SVM the value depends on the distance to the separating hyperplane.
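Just to illustrate those two examples (this is not the RapidMiner code, and the sigmoid at the end is only one common way to squash an unbounded margin into [0, 1]):

class ConfidenceSketch {

    // unweighted k-NN: confidence = fraction of the k neighbours voting for the predicted class
    static double knnConfidence(int[] neighbourLabels, int predictedClass) {
        int votes = 0;
        for (int label : neighbourLabels) {
            if (label == predictedClass) {
                votes++;
            }
        }
        return (double) votes / neighbourLabels.length; // always in [0, 1]
    }

    // SVM: the raw score is a signed distance to the separating hyperplane and is unbounded;
    // a sigmoid is one common way to map it to a [0, 1] confidence
    static double svmConfidence(double signedDistance) {
        return 1.0 / (1.0 + Math.exp(-signedDistance));
    }

    public static void main(String[] args) {
        System.out.println(knnConfidence(new int[] {1, 1, 0, 1, 0}, 1)); // 0.6
        System.out.println(svmConfidence(2.5));                          // about 0.92
    }
}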
-
"For the SVM the value depends on the distance to the separating hyperplane" - usually yes, but it also depends on the SVM implementation (mySVM vs. libSVM) and on the wrapping operators like the two mentioned in the thread's title.
Why do the confidence values for the same SVM algorithm but different operators lie in different ranges (which is not documented)?
In the case of libSVM there is the possibility to estimate a probability as the confidence instead of the distance to the hyperplane. Unfortunately, the specific meaning of the confidence values for the different algorithms etc. is not documented.
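As far as I understand it, the probability estimate offered by libSVM is conceptually Platt scaling: a sigmoid with parameters A and B fitted on held-out decision values. A minimal sketch of that idea follows; the fitting step is omitted and the parameters below are placeholders, not anything taken from libSVM.

// Conceptual sketch of Platt-style scaling for SVM probability estimates.
// a and b would normally be fitted on validation data; the values in main()
// are placeholders, not libSVM internals.
class PlattScalingSketch {

    private final double a; // slope of the fitted sigmoid (typically negative)
    private final double b; // offset of the fitted sigmoid

    PlattScalingSketch(double a, double b) {
        this.a = a;
        this.b = b;
    }

    // maps an unbounded SVM decision value to a probability-like confidence in (0, 1)
    double toProbability(double decisionValue) {
        return 1.0 / (1.0 + Math.exp(a * decisionValue + b));
    }

    public static void main(String[] args) {
        PlattScalingSketch platt = new PlattScalingSketch(-1.7, 0.0); // placeholder parameters
        System.out.println(platt.toProbability(1.2)); // some value in (0, 1)
    }
}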
I think I have to look into the implementations, since the academic support does not help either.
Thanks so far, I would be very pleased about any further support.
-
Hello everyone,
hello @mschmitz,
does anybody know on which scientific basis the Classification by Regression operator was implemented? Is there a paper which describes in detail how this ensemble approach works, apart from the source code mentioned above?
Thank you in advance for your answer!
Best regards!
-
Hi @Muhammed_Fatih_, sorry, I don't have any reference, but it seems to be just a common "trick" which is used all over the place?
Best,
Martin