Outlier detection algorithms comparison
zzM
New Altair Community Member
Hello, I'm new to RapidMiner and I'm having a bit of trouble here.
I'm trying to compare outlier detection algorithms such as LOF and LoOP in terms of performance... and I have no clue how to do it.
Best Answers
Hi @zzM,
It is not possible to measure the performance of unsupervised outlier detection if we have no label for the ground truth.
Check out this research paper for a comprehensive overview of the anomaly detection models that are available in the Anomaly Detection extension.
Without a binary classification problem that has a priori answers (the label) to which you are comparing a prediction (the score), it is not possible to produce the ROC/AUC performance metric. So the only way to produce that would be to separately label all cases as to whether they were in fact outliers in your opinion, based on whatever criteria you are using, and then treat the output from the different outlier algorithms as though they were predictive models. This is the main difference between supervised and unsupervised machine learning problems, which is what @yyhuang was talking about before. So the short answer to your question is "not unless you dramatically change the nature of the problem."
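To make the labeling-then-scoring idea concrete, here is a minimal sketch outside RapidMiner, using Python and scikit-learn. It assumes you have (or create) ground-truth outlier labels; the synthetic data, the choice of LOF, and the `n_neighbors=20` setting are all illustrative assumptions, not part of the original answer:

```python
# Evaluate an unsupervised outlier detector (LOF) as if it were a binary
# classifier, by comparing its outlier scores against hand-supplied labels.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(200, 2))      # dense cluster
outliers = rng.uniform(-6.0, 6.0, size=(10, 2))    # scattered points
X = np.vstack([inliers, outliers])
y = np.array([0] * 200 + [1] * 10)  # ground-truth labels YOU must provide

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
# negative_outlier_factor_ is larger for inliers, so negate it to get a
# score where higher = more outlier-like (matching the positive label).
scores = -lof.negative_outlier_factor_
auc = roc_auc_score(y, scores)
print(f"LOF ROC AUC against the manual labels: {auc:.3f}")
```

The same pattern works for any detector that emits a score (LoOP, k-NN distance, Isolation Forest, ...): feed each algorithm's scores into `roc_auc_score` against the same labels and compare the AUC values. Without those labels, as noted above, no such comparison is possible.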