"Raise again. What's the algorithm for optimistic AUC"

funnyhat New Altair Community Member
edited November 5 in Community Q&A
Dear everyone,

This question is driving me nearly crazy. How is the optimistic AUC calculated? And why does the optimistic AUC sometimes differ so much from the pessimistic one? Which one should we use? AUC is a very important criterion for evaluating a model, but these unclear results make it almost impossible to use. Thanks

Answers

  • land New Altair Community Member
    Hi,
    for calculating the AUC criterion, one measures the area under the ROC curve. This curve is built from the sorted confidence values of the classification outcome: you start with the highest confidence and, for each example, take a step to the right if it is predicted wrongly and a step upwards if it is predicted correctly.
    This is common to all AUC measures; they differ in how they handle examples that have been classified with the same confidence. The optimistic variant takes the correctly predicted examples into account first, the pessimistic variant the wrongly predicted ones. The first approach yields a curve that goes up and then to the right, while the second goes right first and then upwards. This causes a difference in the area under the curve!
    The third, middle way is to draw a diagonal through the tied block.
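    To make the difference concrete, here is a small Python sketch of the logic described above (this is an illustration of the idea, not RapidMiner's actual implementation; the function name and signature are my own):

    ```python
    from itertools import groupby

    def auc_with_ties(confidences, labels, tie_mode="neutral"):
        """AUC via the ROC step curve, with explicit handling of tied confidences.

        tie_mode: "optimistic" (correct predictions first: curve goes up, then
        right), "pessimistic" (wrong predictions first: right, then up), or
        "neutral" (a diagonal through the tied block).
        """
        # Sort examples by descending confidence and group ties together.
        pairs = sorted(zip(confidences, labels), key=lambda cl: -cl[0])
        P = sum(labels)           # total positives
        N = len(labels) - P      # total negatives
        tp = 0                   # current curve height (true positives so far)
        area = 0.0               # area in grid units, normalized at the end
        for _, group in groupby(pairs, key=lambda cl: cl[0]):
            group = list(group)
            p = sum(l for _, l in group)  # positives at this confidence
            n = len(group) - p            # negatives at this confidence
            if tie_mode == "optimistic":
                tp += p                   # step up first ...
                area += n * tp            # ... then right
            elif tie_mode == "pessimistic":
                area += n * tp            # step right first ...
                tp += p                   # ... then up
            else:                         # diagonal through the tied block
                area += n * (tp + p / 2)
                tp += p
        return area / (P * N)
    ```

    For example, with confidences [0.9, 0.8, 0.8, 0.7] and labels [1, 1, 0, 0], the tie at 0.8 contains one correct and one wrong prediction, so the three variants give 1.0 (optimistic), 0.75 (pessimistic), and 0.875 (neutral). The larger the tied blocks, the larger the gap between optimistic and pessimistic AUC.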

    The curve itself can be seen either in the performance vector's renderer view or by using the operator "Compare ROCs".

    Greetings,
      Sebastian