Show prevalence of largest class in Performance (Classification) and similar operators
Tripartio
New Altair Community Member
When doing classification tasks, I normally use the prevalence (frequency) of the largest (modal) class as the naïve benchmark for judging whether a model is useful. For example, if my label is binary yes/no, with yes comprising 9% of the dataset and no comprising 91%, then I would expect the accuracy of a useful model to be at least 91%. If not, the model is no better than naively assigning all predictions to the larger class. The same logic applies with more categories (e.g. three or four classes). For example, if there were three classes A, B and C distributed 30%, 40% and 30%, then the prevalence of the largest class (B) would be 40%.
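The arithmetic behind this benchmark is trivial, which is the point of the request. As a sketch (the function name is mine, not part of any RapidMiner API), the benchmark is just the relative frequency of the modal class:

```python
from collections import Counter

def largest_class_prevalence(labels):
    """Proportion of the most frequent (modal) class.

    This equals the accuracy of naively predicting the majority class
    for every example, i.e. the benchmark a model should beat.
    """
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

# Binary example from the post: 9% yes, 91% no
binary = ["no"] * 91 + ["yes"] * 9
print(largest_class_prevalence(binary))   # 0.91

# Three classes A, B, C distributed 30% / 40% / 30%
multi = ["A"] * 30 + ["B"] * 40 + ["C"] * 30
print(largest_class_prevalence(multi))    # 0.4
```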
My request is that the Performance (Classification) and Performance (Binominal Classification) operators add this as an option among the criteria they output. I am not sure, but I think the formal name for this measure is "prevalence of largest class" (cf. https://en.wikipedia.org/wiki/Prevalence and https://en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion). Because the calculation is so simple, I hope it would be easy to implement. Having it handy as an output option would be more convenient than pulling out a calculator each time, which is what I have to do now.
Best Answer
Hi Chitu,

It is of course possible to add new performance measures to the operators. I can open a ticket for this feature request, but please do not expect it to be solved in the next few weeks. As you know, RapidMiner has release schedules, and this is not likely to be a top priority for us.

I would also ask: how is this a performance measure? Isn't it a constant value for each data set? Don't you rather want something like accuracy minus prevalence, i.e. how many percentage points you are above the prevalence?

In any case, you can easily use custom operators to build your own operator calculating prevalence, without any coding.

Best,
Martin
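Martin's "accuracy minus prevalence" idea can be sketched as follows (a hypothetical helper for illustration, not an existing operator): the lift in accuracy over the naive majority-class baseline.

```python
from collections import Counter

def lift_over_prevalence(y_true, y_pred):
    """Accuracy minus largest-class prevalence: how many percentage
    points (as a fraction) a model sits above the naive benchmark."""
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    prevalence = max(Counter(y_true).values()) / n
    return accuracy - prevalence

# 91/9 class split; this hypothetical model gets 96 of 100 examples right
y_true = ["no"] * 91 + ["yes"] * 9
y_pred = ["no"] * 91 + ["yes"] * 5 + ["no"] * 4
print(round(lift_over_prevalence(y_true, y_pred), 4))  # 0.05
```

A lift of zero or below means the model is doing no better than always predicting the majority class.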
Answers
Hi,

I usually use Cohen's kappa for this: https://en.wikipedia.org/wiki/Cohen's_kappa. This is basically "How much better am I than the default classification?".

The other thing I frequently do is calculate the accuracy/ROI of a default model. The default model may be the "naive" prediction of the majority class. Have a look at the Default Model operator for this.

Best,
Martin
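For reference, Cohen's kappa compares the observed agreement p_o with the agreement p_e expected by chance from the marginal class frequencies. A minimal sketch (my own, not RapidMiner's implementation) shows why a default majority-class model scores kappa = 0 regardless of class imbalance:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    accuracy and p_e is the chance agreement implied by the marginal
    class frequencies of the true and predicted labels."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts.get(c, 0)
              for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# A default model that always predicts the majority class:
y_true = ["no"] * 91 + ["yes"] * 9
y_pred = ["no"] * 100
print(cohens_kappa(y_true, y_pred))  # 0.0
```

Here p_o = 0.91 (the prevalence) and p_e = 0.91 as well, so the kappa of the default model is exactly zero; any model with kappa above zero is beating the naive baseline.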
Hi Martin,

The accuracy of the Default Model set to "mode" indeed gives me exactly what I am looking for. But this requires a completely different operator and half a process to produce just the one number I need every time I run a classification. So, as I said, the calculation is very simple; my request is to make it readily accessible right where I need it.

I tried Cohen's kappa, but I have no idea what it is supposed to tell me in my case. Could you please clarify how its interpretation would answer my question of whether a given model is better than the accuracy of the default (mode) model?

Regards,
Chitu
Hi @Tripartio,

I totally understand your point and what you are looking for. Have a look at https://stats.stackexchange.com/questions/82162/cohens-kappa-in-plain-english . Maybe this also satisfies your needs.

Best,
Martin
Hi @mschmitz,

Thanks for the Stack Exchange link explaining how to use and interpret Cohen's kappa. That is helpful. However, while it introduces a useful alternative to accuracy as a performance measure, it does not replace my request for a benchmark against which to evaluate whether a model is useful. (The best answer there admits that there is no objective standard for what counts as a good kappa; it can only be used to compare two or more models, not to evaluate a single model on its own.)

So, my request for adding prevalence of largest class as an option in Performance (Classification) remains. Would that be possible?

Regards,
Chitu
Hi @Tripartio,

Here is a process which takes a data set and calculates your prevalence. I am cheating a bit at the end to also label it correctly*. We can turn this into a custom operator with about 5 clicks.

Best,
Martin

*: Probably you don't want to do this...

<?xml version="1.0" encoding="UTF-8"?>
<process version="9.8.000">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="9.8.000" expanded="true" name="Process">
<parameter key="logverbosity" value="init"/>
<parameter key="random_seed" value="2001"/>
<parameter key="send_mail" value="never"/>
<parameter key="notification_email" value=""/>
<parameter key="process_duration_for_mail" value="30"/>
<parameter key="encoding" value="SYSTEM"/>
<process expanded="true">
<operator activated="true" class="retrieve" compatibility="9.8.000" expanded="true" height="68" name="Retrieve Titanic Training" width="90" x="45" y="34">
<parameter key="repository_entry" value="//Samples/data/Titanic Training"/>
</operator>
<operator activated="true" class="default_model" compatibility="9.8.000" expanded="true" height="82" name="Default Model" width="90" x="313" y="34">
<parameter key="method" value="mode"/>
<parameter key="constant" value="0.0"/>
<parameter key="attribute_name" value=""/>
</operator>
<operator activated="true" class="apply_model" compatibility="9.8.000" expanded="true" height="82" name="Apply Model" width="90" x="447" y="34">
<list key="application_parameters"/>
<parameter key="create_view" value="false"/>
</operator>
<operator activated="true" class="performance_classification" compatibility="9.8.000" expanded="true" height="82" name="Performance" width="90" x="581" y="34">
<parameter key="main_criterion" value="first"/>
<parameter key="accuracy" value="true"/>
<parameter key="classification_error" value="false"/>
<parameter key="kappa" value="false"/>
<parameter key="weighted_mean_recall" value="false"/>
<parameter key="weighted_mean_precision" value="false"/>
<parameter key="spearman_rho" value="false"/>
<parameter key="kendall_tau" value="false"/>
<parameter key="absolute_error" value="false"/>
<parameter key="relative_error" value="false"/>
<parameter key="relative_error_lenient" value="false"/>
<parameter key="relative_error_strict" value="false"/>
<parameter key="normalized_absolute_error" value="false"/>
<parameter key="root_mean_squared_error" value="false"/>
<parameter key="root_relative_squared_error" value="false"/>
<parameter key="squared_error" value="false"/>
<parameter key="correlation" value="false"/>
<parameter key="squared_correlation" value="false"/>
<parameter key="cross-entropy" value="false"/>
<parameter key="margin" value="false"/>
<parameter key="soft_margin_loss" value="false"/>
<parameter key="logistic_loss" value="false"/>
<parameter key="skip_undefined_labels" value="true"/>
<parameter key="use_example_weights" value="true"/>
<list key="class_weights"/>
</operator>
<operator activated="true" class="execute_script" compatibility="9.8.000" expanded="true" height="82" name="Execute Script" width="90" x="715" y="34">
<parameter key="script" value="import com.rapidminer.operator.performance.*;&#10;&#10;PerformanceVector perf = input[0];&#10;PerformanceCriterion c = perf.getCriterion(0);&#10;&#10;// Relabel the first criterion (accuracy) as &quot;prevalence&quot;&#10;c.NAMES[0] = &quot;prevalence&quot;;&#10;&#10;// Return the modified performance vector as the first output&#10;return perf;"/>
<parameter key="standard_imports" value="true"/>
</operator>
<operator activated="false" class="h2o:deep_learning" compatibility="9.8.000" expanded="true" height="103" name="Deep Learning" width="90" x="313" y="238">
<parameter key="activation" value="Rectifier"/>
<enumeration key="hidden_layer_sizes">
<parameter key="hidden_layer_sizes" value="50"/>
<parameter key="hidden_layer_sizes" value="50"/>
</enumeration>
<enumeration key="hidden_dropout_ratios"/>
<parameter key="reproducible_(uses_1_thread)" value="false"/>
<parameter key="use_local_random_seed" value="false"/>
<parameter key="local_random_seed" value="1992"/>
<parameter key="epochs" value="10.0"/>
<parameter key="compute_variable_importances" value="false"/>
<parameter key="train_samples_per_iteration" value="-2"/>
<parameter key="adaptive_rate" value="true"/>
<parameter key="epsilon" value="1.0E-8"/>
<parameter key="rho" value="0.99"/>
<parameter key="learning_rate" value="0.005"/>
<parameter key="learning_rate_annealing" value="1.0E-6"/>
<parameter key="learning_rate_decay" value="1.0"/>
<parameter key="momentum_start" value="0.0"/>
<parameter key="momentum_ramp" value="1000000.0"/>
<parameter key="momentum_stable" value="0.0"/>
<parameter key="nesterov_accelerated_gradient" value="true"/>
<parameter key="standardize" value="true"/>
<parameter key="L1" value="1.0E-5"/>
<parameter key="L2" value="0.0"/>
<parameter key="max_w2" value="10.0"/>
<parameter key="loss_function" value="Automatic"/>
<parameter key="distribution_function" value="AUTO"/>
<parameter key="early_stopping" value="false"/>
<parameter key="stopping_rounds" value="1"/>
<parameter key="stopping_metric" value="AUTO"/>
<parameter key="stopping_tolerance" value="0.001"/>
<parameter key="missing_values_handling" value="MeanImputation"/>
<parameter key="max_runtime_seconds" value="0"/>
<list key="expert_parameters"/>
<list key="expert_parameters_"/>
</operator>
<connect from_op="Retrieve Titanic Training" from_port="output" to_op="Default Model" to_port="training set"/>
<connect from_op="Default Model" from_port="model" to_op="Apply Model" to_port="model"/>
<connect from_op="Default Model" from_port="exampleSet" to_op="Apply Model" to_port="unlabelled data"/>
<connect from_op="Apply Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
<connect from_op="Performance" from_port="performance" to_op="Execute Script" to_port="input 1"/>
<connect from_op="Execute Script" from_port="output 1" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
</process>
Hi @mschmitz,

Thanks for the process; that is very interesting. Could you please point me to the documentation for how I would turn it into a custom operator? Actually, I did create the custom operator (I think), but I cannot figure out how to add it to a new process.

In any case, thank you for submitting a feature request ticket for my original request. That is really what I would like: to have prevalence added to the list of display options, rather than having a dedicated operator just for that.

Regards,
Chitu
Hi @mschmitz,

To broaden this request, perhaps RapidMiner could consider a way to let users add custom measures to the Performance operators. I'm thinking of something like the functionality in the Generate Attributes operator, which would let the user write an expression formula for the measure they want. Without something like that, people like me will probably keep asking for our own preferred measures to be added to the list.

Just an idea.

Regards,
Chitu
Hi,

Please check https://community.rapidminer.com/discussion/56338/tutorial-for-creating-custom-operators for custom operators.

And there is a way to build custom measures: the Extract Performance operator allows you to define any performance measure you want.

Best,
Martin