Platt Scaling

steffen New Altair Community Member
edited November 5 in Community Q&A
Hello RapidMiner Community,

I'd like to remark on something about the Platt Scaling operator:
The original algorithm of Platt (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639) suggests using 3-fold cross-validation (3-CV) to learn the calibration function, i.e. using the output of XVPrediction to learn the function. This should further reduce overfitting, in addition to the correction of the labels.
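To make Platt's recipe concrete outside of RapidMiner, here is a minimal Python sketch. The helper names are my own, and plain gradient descent stands in for Platt's model-trust optimiser; the point it illustrates is that the sigmoid is fitted on out-of-fold decision scores, as the paper suggests.

```python
import math

def fit_platt(scores, labels, iters=5000, lr=0.05):
    """Fit A, B of P(y=1|f) = 1 / (1 + exp(A*f + B)) by gradient
    descent on the negative log-likelihood (Platt uses a model-trust
    optimiser; plain gradient descent keeps the sketch short)."""
    A, B = 0.0, 0.0
    for _ in range(iters):
        gA = gB = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * f + B))
            gA += (p - y) * -f    # d(NLL)/dA
            gB += (p - y) * -1.0  # d(NLL)/dB
        A -= lr * gA
        B -= lr * gB
    return A, B

def platt_prob(f, A, B):
    """Calibrated probability for a raw decision score f."""
    return 1.0 / (1.0 + math.exp(A * f + B))

def cv_scores(trainer, X, y, folds=3):
    """Platt's point: collect *out-of-fold* decision scores and fit the
    sigmoid only on those. `trainer(X, y)` returns a scoring function;
    3 folds, as in the paper."""
    out = [0.0] * len(X)
    for k in range(folds):
        train = [i for i in range(len(X)) if i % folds != k]
        score = trainer([X[i] for i in train], [y[i] for i in train])
        for i in range(len(X)):
            if i % folds == k:
                out[i] = score(X[i])
    return out
```

Fitting on the scores returned by `cv_scores` (rather than on scores from a model applied to its own training data) is exactly the step the operator currently skips.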

I first thought that the Platt Scaling operator only needed the model and an ExampleSet to create a "sequential" model, but after diving into the source code I found that the model is actually applied to the ExampleSet to create a calibration set. A model representing the output of XVPrediction cannot (easily) be constructed, so Platt's way of preventing overfitting cannot be applied.

The code change itself is not the problem ... but maybe your experience shows that this additional step is unnecessary. Maybe I am just nitpicking ;). But anyone who uses the original operator for comparison tests might fall into this trap.

just a remark, as I said...

regards,

Steffen

PS: After a change along the lines of Platt's suggestion, a learning process could look like this ... (not executable):
<operator name="Root" class="Process" expanded="yes">
    <operator name="ExampleSource" class="ExampleSource">
    </operator>
    <operator name="XValidation" class="XValidation" expanded="yes">
        <operator name="learn_model" class="OperatorChain" expanded="yes">
            <operator name="NaiveBayes" class="NaiveBayes">
                <parameter key="keep_example_set" value="true"/>
            </operator>
            <operator name="XVPrediction" class="XVPrediction" expanded="no">
                <parameter key="number_of_validations" value="3"/>
                <operator name="nb_for_calibration" class="NaiveBayes">
                    <parameter key="keep_example_set" value="true"/>
                </operator>
                <operator name="apply_for_calibration" class="OperatorChain" expanded="no">
                    <operator name="modelapply_for_calibration" class="ModelApplier">
                    </operator>
                    <operator name="dummy_for_calibration" class="ClassificationPerformance">
                    </operator>
                </operator>
            </operator>
            <operator name="PlattScaling" class="PlattScaling">
            </operator>
        </operator>
        <operator name="apply_model" class="OperatorChain" expanded="no">
            <operator name="ModelApplier" class="ModelApplier">
            </operator>
            <operator name="ClassificationPerformance" class="ClassificationPerformance">
                <parameter key="accuracy" value="true"/>
            </operator>
        </operator>
    </operator>
</operator>

Answers

  • Legacy User New Altair Community Member
    Hi the forum,

    Is Platt scaling the same thing as multidimensional scaling? I have found KNIME here: http://knime.org/index.html
    In KNIME there is a multidimensional scaling algorithm, detailed here: http://www.inf.uni-konstanz.de/algo/publications/bp-empmdsld-06.pdf

    Can RapidMiner do that ?

    Rocky.
  • land New Altair Community Member
    Hi Rocky,
    if KNIME had implemented Platt Scaling, they would probably have called it Platt Scaling. This seems to be a different algorithm, and I'm not sure they even aim at the same target...

    Greetings,
    Sebastian
  • steffen New Altair Community Member
    Hello Rocky, hello Sebastian,

    Platt Scaling: Platt Scaling was invented to calibrate the output confidences of SVMs, but it is (under some constraints) applicable to the output of other classification algorithms, too. Calibration means making the confidences a better approximation of the true probabilities.

    Multidimensional Scaling: A projection method that reduces the number of attributes, with the main focus on preserving the ratios of pairwise distances.

    regards,

    Steffen
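To make the contrast concrete, here is a minimal NumPy sketch of classical (Torgerson) MDS, the distance-preserving projection described above; the function name and the eigendecomposition route are my own choices, not tied to KNIME's implementation. Note that it operates on the attributes, whereas Platt scaling touches only a classifier's output scores.

```python
import numpy as np

def classical_mds(X, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions while
    preserving the pairwise Euclidean distances as well as possible."""
    # squared pairwise distance matrix
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ D2 @ J                    # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # take the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

When k equals the true dimensionality of the data, the embedding reproduces the original distances exactly; for smaller k it keeps the dominant distance structure.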