Do we have the balanced error rate (BER)?
DocMusher
Dear RM friends,
Perhaps I am unable to find this, or it is not available (or a useless metric), but does RM provide the BER as a balanced metric that equally weights errors in sensitivity and specificity?
All feedback is appreciated.
Sven
Find more posts tagged with
AI Studio
Classification
Errors
Performance
All comments
sgenzer
hi @DocMusher, hmm, never heard of it. Doesn't seem too hard to calculate, though.
IngoRM
Nope, this is not supported by any of the standard performance operators. But since we deliver SEN and SPE, it is indeed relatively easy to derive this, as Scott has mentioned. I have to admit that I have only encountered this measurement a handful of times in 20 years now, so that may serve as a little comment on its usefulness.
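Ingo's point above can be sketched in a few lines: the BER is just the mean of the two class-wise error rates, so it falls straight out of sensitivity and specificity. A minimal plain-Python illustration (not a RapidMiner operator; the confusion-matrix counts are hypothetical):

```python
def balanced_error_rate(sensitivity: float, specificity: float) -> float:
    """Balanced error rate: the mean of the two class-wise error rates."""
    return 1.0 - (sensitivity + specificity) / 2.0

# Hypothetical confusion-matrix counts, for illustration only
tp, fn, tn, fp = 40, 10, 45, 5
sen = tp / (tp + fn)  # sensitivity (true positive rate) = 0.8
spe = tn / (tn + fp)  # specificity (true negative rate) = 0.9
ber = balanced_error_rate(sen, spe)  # mean of the two error rates
```

With these counts BER comes out to 0.15, i.e. an average 15% error across the two classes regardless of how imbalanced they are.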
DocMusher
I just read it in this recent paper, where it was used without further explanation but as THE performance feature. For what it's worth:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0226962
MartinLiebig
@DocMusher, if you need it, I'll write the operator for you. Maybe on Tuesday, if we get the time. #RealHackathon
Best,
Martin
DocMusher
So kind of you, dear Martin. I am just trying to understand why it received so much attention in that PLOS ONE paper, while I think they could have used other features. I already responded on the PLOS website because I think they did not take care of the contamination examples published by Ingo a while ago (
https://rapidminer.com/resource/correct-model-validation/
)
Interested in your recent activities!!
Sven
varunm1
That's the tricky part with research articles: sometimes people like to be different and use uncommon performance metrics. I used to search Google whenever I encountered an unfamiliar metric while reviewing papers. There are also styles that researchers follow; for example, in all my publications I report the kappa metric for performance. That is something I follow personally.
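For comparison with the kappa metric mentioned above, Cohen's kappa can also be computed directly from a binary confusion matrix. A minimal sketch (the counts are hypothetical, reused purely for illustration):

```python
def cohens_kappa(tp: int, fn: int, tn: int, fp: int) -> float:
    """Cohen's kappa for a binary confusion matrix: agreement beyond chance."""
    n = tp + fn + tn + fp
    po = (tp + tn) / n  # observed agreement (plain accuracy)
    # expected agreement by chance, from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical counts, for illustration only
kappa = cohens_kappa(40, 10, 45, 5)
```

Unlike accuracy, kappa discounts the agreement a random classifier would achieve given the class marginals, which is one reason reviewers sometimes prefer it on imbalanced data.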
DocMusher
Thank you for putting this in perspective.
Sven