Accuracy remains constant
chris_ml
New Altair Community Member
I want to optimize the parameters of the decision tree learner using EvolutionaryParameterOptimization.
That's my model:
<?xml version="1.0" encoding="US-ASCII"?>
<process version="4.3">
<operator name="Root" class="Process" expanded="yes">
<operator name="CSVExampleSource" class="CSVExampleSource">
<parameter key="filename" value="example.csv"/>
<parameter key="label_name" value="label"/>
</operator>
<operator name="EvolutionaryParameterOptimization" class="EvolutionaryParameterOptimization" expanded="yes">
<list key="parameters">
<parameter key="DecisionTree.maximal_depth" value="[-1.0;10000.0]"/>
<parameter key="DecisionTree.minimal_leaf_size" value="[1.0;10000.0]"/>
<parameter key="DecisionTree.confidence" value="[1.0E-7;0.5]"/>
<parameter key="DecisionTree.minimal_size_for_split" value="[1.0;10000.0]"/>
<parameter key="DecisionTree.minimal_gain" value="[0.0;Infinity]"/>
</list>
<parameter key="population_size" value="10"/>
<operator name="IteratingPerformanceAverage" class="IteratingPerformanceAverage" expanded="yes">
<parameter key="iterations" value="3"/>
<operator name="XValidation" class="XValidation" expanded="yes">
<parameter key="sampling_type" value="shuffled sampling"/>
<operator name="DecisionTree" class="DecisionTree">
<parameter key="confidence" value="0.33223030703745715"/>
<parameter key="maximal_depth" value="6341"/>
<parameter key="minimal_gain" value="Infinity"/>
<parameter key="minimal_leaf_size" value="1424"/>
<parameter key="minimal_size_for_split" value="2961"/>
</operator>
<operator name="OperatorChain" class="OperatorChain" expanded="yes">
<operator name="ModelApplier" class="ModelApplier">
<list key="application_parameters">
</list>
</operator>
<operator name="ClassificationPerformance" class="ClassificationPerformance">
<parameter key="absolute_error" value="true"/>
<parameter key="accuracy" value="true"/>
<list key="class_weights">
</list>
<parameter key="classification_error" value="true"/>
</operator>
</operator>
</operator>
</operator>
<operator name="ProcessLog" class="ProcessLog">
<parameter key="filename" value="process3.log"/>
<list key="log">
<parameter key="iteration" value="operator.XValidation.value.iteration"/>
<parameter key="time" value="operator.XValidation.value.time"/>
<parameter key="deviation" value="operator.XValidation.value.deviation"/>
<parameter key="accuracy" value="operator.XValidation.value.performance1"/>
<parameter key="max_depth" value="operator.DecisionTree.parameter.maximal_depth"/>
<parameter key="max_leaf_size" value="operator.DecisionTree.parameter.minimal_leaf_size"/>
<parameter key="confidence" value="operator.DecisionTree.parameter.confidence"/>
</list>
<parameter key="persistent" value="true"/>
</operator>
</operator>
<operator name="PerformanceWriter" class="PerformanceWriter">
<parameter key="performance_file" value="final_performance.per"/>
</operator>
<operator name="ParameterSetWriter" class="ParameterSetWriter">
<parameter key="parameter_file" value="parameters.par"/>
</operator>
</operator>
</process>
The strange thing is that the accuracy always remains (almost) the same. Here are the first lines of the generated log file:
# Generated by ProcessLog[com.rapidminer.operator.visualization.ProcessLogOperator]
# iteration time deviation accuracy max_depth max_leaf_size confidence
10.0 23.0 0.05129744389973656 0.7932203389830508 9839.0 1251.0 0.34154054826704505
10.0 23.0 0.05507822647211116 0.7932203389830507 6341.0 7767.0 0.32651644400635255
10.0 23.0 0.0392401250942013 0.793220338983051 3707.0 48.0 0.397380892497813
10.0 22.0 0.0726246958834861 0.7932203389830509 9008.0 5164.0 0.40415961507086146
10.0 25.0 0.03774755500223652 0.7932203389830509 7652.0 3992.0 0.27277836592326715
10.0 28.0 0.055078226472110144 0.7932203389830507 293.0 1424.0 0.03868159703154637
10.0 27.0 0.06287198979997247 0.7932203389830507 615.0 6926.0 0.12025785562389255
10.0 46.0 0.026037782196170964 0.7932203389830507 4846.0 1825.0 0.4842464140080626
Even when I replace the decision tree learner by a neural network, the accuracy values are always the same. But I can't imagine that the number of correctly classified examples is always the same. Any ideas what I'm doing wrong?
Chris
Answers
Hi Chris,
Are you really sure that you should have 'Infinity' as your minimal gain parameter? I'm trying to get my head round an infinite gain....
HAPPY NEW YEAR TO ALL!
Hi,
OK, maybe infinity as minimal gain is a little bit too large. :-)
However, this does not seem to be the problem. After reducing this
parameter to a maximum of 100, I still get the same results: the accuracy
remains (almost) constant. Any other ideas what might be wrong?
By the way, what exactly is the definition of the "minimal size of
a leaf" for the DecisionTree operator? This is used in the parameter
"minimal_size_for_split". Is it the value computed by the criterion?
Also happy new year to all.
Chris
Hi Chris,
as far as I can see from your posted log, the parameters have never been within a region where changes might occur.
One possible cause of constant accuracy is the collapse of the models into a default model. Perhaps your smaller class is never discovered by the learner?
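A quick way to test this hypothesis: compare the constant accuracy with the label distribution of your data. Here is a minimal sketch in Python with pandas (the file name and label column are taken from your posted process; adjust them to your setup):
import pandas as pd

# Load the training data; file and label column as in the posted process.
labels = pd.read_csv("example.csv")["label"]

# If the largest class share equals the logged accuracy of
# 0.7932203389830508, the learner is collapsing into a default model.
print(labels.value_counts(normalize=True))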
Greetings,
Sebastian
Hi Sebastian,
you wrote:
"as far as I can see from your posted log, the parameters have never been within a region where changes might occur."
Could you explain that in more detail? I didn't fully catch what you mean by "regions where changes might occur".
You also wrote:
"One possible cause of constant accuracy is the collapse of the models into a default model. Perhaps your smaller class is never discovered by the learner?"
When can it happen that RapidMiner collapses a model? Can this somehow be recognized?
I've now simplified my model:
<?xml version="1.0" encoding="US-ASCII"?>
<process version="4.3">
<operator name="Root" class="Process" expanded="yes">
<operator name="CSVExampleSource" class="CSVExampleSource">
<parameter key="filename" value="examples.csv"/>
<parameter key="label_name" value="label"/>
</operator>
<operator name="EvolutionaryParameterOptimization" class="EvolutionaryParameterOptimization" expanded="yes">
<list key="parameters">
<parameter key="DecisionTree.maximal_depth" value="[-1.0;100.0]"/>
<parameter key="DecisionTree.minimal_leaf_size" value="[1.0;100.0]"/>
<parameter key="DecisionTree.confidence" value="[1.0E-7;0.5]"/>
<parameter key="DecisionTree.minimal_size_for_split" value="[1.0;100.0]"/>
<parameter key="DecisionTree.minimal_gain" value="[0.0;100.0]"/>
</list>
<parameter key="population_size" value="10"/>
<operator name="IteratingPerformanceAverage" class="IteratingPerformanceAverage" expanded="yes">
<parameter key="iterations" value="3"/>
<operator name="XValidation" class="XValidation" expanded="yes">
<parameter key="sampling_type" value="shuffled sampling"/>
<operator name="DecisionTree" class="DecisionTree">
<parameter key="confidence" value="0.397380892497813"/>
<parameter key="maximal_depth" value="36"/>
<parameter key="minimal_gain" value="29.447796060205512"/>
<parameter key="minimal_leaf_size" value="15"/>
<parameter key="minimal_size_for_split" value="27"/>
</operator>
<operator name="OperatorChain" class="OperatorChain" expanded="yes">
<operator name="ModelApplier" class="ModelApplier">
<list key="application_parameters">
</list>
</operator>
<operator name="ClassificationPerformance" class="ClassificationPerformance">
<parameter key="absolute_error" value="true"/>
<parameter key="accuracy" value="true"/>
<list key="class_weights">
</list>
<parameter key="classification_error" value="true"/>
</operator>
</operator>
</operator>
</operator>
<operator name="ProcessLog" class="ProcessLog">
<parameter key="filename" value="process.log"/>
<list key="log">
<parameter key="iteration" value="operator.XValidation.value.iteration"/>
<parameter key="time" value="operator.XValidation.value.time"/>
<parameter key="deviation" value="operator.XValidation.value.deviation"/>
<parameter key="accuracy" value="operator.XValidation.value.performance1"/>
<parameter key="max_depth" value="operator.DecisionTree.parameter.maximal_depth"/>
<parameter key="max_leaf_size" value="operator.DecisionTree.parameter.minimal_leaf_size"/>
<parameter key="confidence" value="operator.DecisionTree.parameter.confidence"/>
</list>
<parameter key="persistent" value="true"/>
</operator>
</operator>
</operator>
</process>
and here is the complete log file:
# Generated by ProcessLog[com.rapidminer.operator.visualization.ProcessLogOperator]
# iteration time deviation accuracy max_depth max_leaf_size confidence
10.0 322.0 0.05129744389973656 0.7932203389830508 98.0 13.0 0.34154054826704505
10.0 316.0 0.05507822647211116 0.7932203389830507 63.0 78.0 0.32651644400635255
10.0 315.0 0.0392401250942013 0.793220338983051 36.0 1.0 0.397380892497813
10.0 344.0 0.0726246958834861 0.7932203389830509 90.0 52.0 0.40415961507086146
10.0 319.0 0.03774755500223652 0.7932203389830509 76.0 41.0 0.27277836592326715
10.0 321.0 0.055078226472110144 0.7932203389830507 2.0 15.0 0.03868159703154637
10.0 323.0 0.06287198979997247 0.7932203389830507 5.0 70.0 0.12025785562389255
10.0 323.0 0.026037782196170964 0.7932203389830507 48.0 19.0 0.4842464140080626
10.0 322.0 0.033728387698533355 0.7932203389830508 62.0 86.0 0.28231036291700745
10.0 326.0 0.049588945214671463 0.7932203389830508 76.0 41.0 0.2588474520496027
10.0 354.0 0.04900621116881689 0.7932203389830509 90.0 52.0 0.394739905705027
10.0 329.0 0.05240550791098398 0.7932203389830509 2.0 15.0 0.024606579573607863
10.0 331.0 0.04660246469446598 0.7932203389830509 76.0 41.0 0.2749646967881053
10.0 326.0 0.06054430881183808 0.7932203389830508 63.0 78.0 0.03277810211166991
10.0 327.0 0.03774755500223652 0.7932203389830509 2.0 15.0 0.33223030703745715
10.0 332.0 0.04598189819068104 0.7932203389830508 2.0 15.0 0.05108435729123231
10.0 333.0 0.05611167917710849 0.7932203389830507 36.0 1.0 0.4118684184539058
10.0 340.0 0.03924012509420413 0.7932203389830509 76.0 41.0 0.27811571436906546
10.0 334.0 0.046602464694467174 0.7932203389830508 2.0 15.0 0.03330102480728759
10.0 330.0 0.055078226472110144 0.7932203389830508 2.0 15.0 0.04044172165402028
10.0 337.0 0.039965512279838404 0.7932203389830507 90.0 52.0 0.40559465185597704
10.0 337.0 0.04137815462960765 0.7932203389830508 63.0 78.0 0.32377306309346343
10.0 338.0 0.039965512279837016 0.7932203389830508 2.0 15.0 0.06783963315527239
10.0 339.0 0.042744136314977955 0.7932203389830509 2.0 15.0 0.055966407234910774
10.0 333.0 0.03012947260107371 0.7932203389830507 76.0 41.0 0.27478141583203897
10.0 333.0 0.03697868513435921 0.7932203389830508 76.0 41.0 0.2709776940166765
10.0 351.0 0.03286562615197772 0.7932203389830507 2.0 15.0 0.04512058551300815
10.0 343.0 0.06195141316100628 0.793220338983051 76.0 41.0 0.28224938856896803
10.0 354.0 0.05712643914017982 0.7932203389830508 36.0 1.0 0.3985242094663377
10.0 344.0 0.04900621116881689 0.7932203389830508 76.0 41.0 0.27277836592326715
10.0 337.0 0.038501073530848126 0.7932203389830509 90.0 52.0 0.40415961507086146
10.0 346.0 0.0651165176688085 0.7932203389830509 2.0 15.0 0.03868159703154637
10.0 347.0 0.05073433744778388 0.7932203389830508 76.0 41.0 0.27277836592326715
10.0 348.0 0.04471493545177314 0.7932203389830508 63.0 78.0 0.03868159703154637
10.0 376.0 0.03539086952172904 0.7932203389830509 2.0 15.0 0.32651644400635255
10.0 359.0 0.036193485600110376 0.7932203389830507 2.0 15.0 0.03868159703154637
10.0 376.0 0.04067796610169709 0.7932203389830508 36.0 1.0 0.397380892497813
10.0 370.0 0.061016949152542 0.7932203389830509 76.0 41.0 0.27277836592326715
10.0 368.0 0.07496048944913757 0.7932203389830509 2.0 15.0 0.03868159703154637
10.0 367.0 0.034569623820971486 0.7932203389830507 36.0 1.0 0.2850651570329497
10.0 371.0 0.05295084526038379 0.7932203389830509 76.0 41.0 0.40325282749551083
10.0 362.0 0.04341101177920808 0.7932203389830508 76.0 41.0 0.2953841219126293
10.0 415.0 0.04206669032539301 0.7932203389830508 76.0 41.0 0.27532383744770006
10.0 450.0 0.04471493545177314 0.7932203389830509 36.0 1.0 0.3866757225955784
10.0 358.0 0.04274413631498055 0.7932203389830507 2.0 15.0 0.03845641301147732
10.0 372.0 0.03456962382096667 0.7932203389830509 2.0 1.0 0.40548652887913794
10.0 346.0 0.04721487551588266 0.7932203389830507 36.0 15.0 0.03225434093850507
10.0 361.0 0.018254795956391408 0.7932203389830508 2.0 15.0 0.04556657107596725
10.0 360.0 0.05762711864406968 0.7932203389830507 2.0 15.0 0.07011796273391388
10.0 377.0 0.061951413161007184 0.7932203389830509 36.0 1.0 0.40087800008364416
10.0 362.0 0.04471493545177314 0.7932203389830509 76.0 41.0 0.27915934844746276
10.0 378.0 0.06511651766880934 0.7932203389830508 76.0 41.0 0.28168690337376767
10.0 349.0 0.03774755500223946 0.7932203389830508 2.0 15.0 0.07293158627660976
10.0 381.0 0.05712643914017982 0.7932203389830508 36.0 1.0 0.3947367937506493
10.0 366.0 0.035390869521732184 0.7932203389830508 2.0 15.0 0.026593489652135923
10.0 380.0 0.0591037144886865 0.7932203389830509 2.0 15.0 0.032553155159174216
10.0 368.0 0.0534906231798622 0.7932203389830509 76.0 41.0 0.28000872950589595
10.0 366.0 0.05958778247880523 0.7932203389830508 2.0 15.0 0.03494975107057588
10.0 353.0 0.06467384416386161 0.7932203389830508 36.0 1.0 0.4276158956902939
10.0 386.0 0.03106830979631423 0.7932203389830508 36.0 1.0 0.27277836592326715
10.0 371.0 0.0524055079109861 0.7932203389830507 76.0 41.0 0.397380892497813
10.0 388.0 0.04406779661017154 0.7932203389830508 76.0 41.0 0.27277836592326715
10.0 372.0 0.041378154629604966 0.7932203389830509 76.0 41.0 0.27277836592326715
10.0 372.0 0.04598189819068104 0.7932203389830508 36.0 1.0 0.397380892497813
10.0 381.0 0.04721487551588031 0.7932203389830507 2.0 15.0 0.03868159703154637
10.0 393.0 0.07061920900338971 0.7932203389830508 2.0 1.0 0.397380892497813
10.0 376.0 0.055597354124937784 0.7932203389830509 36.0 15.0 0.03868159703154637
10.0 395.0 0.049006211168820285 0.7932203389830507 2.0 15.0 0.03868159703154637
10.0 357.0 0.0495889452146737 0.7932203389830507 2.0 15.0 0.06783963315527239
10.0 376.0 0.036193485600105775 0.7932203389830509 76.0 15.0 0.037015432195162346
10.0 378.0 0.024910065180848328 0.7932203389830508 2.0 41.0 0.2807067926588812
10.0 398.0 0.05762711864406775 0.7932203389830508 76.0 41.0 0.2894590092601272
10.0 382.0 0.060067949649726594 0.7932203389830509 76.0 41.0 0.2879003972663676
10.0 396.0 0.042066690325395645 0.7932203389830508 76.0 41.0 0.02664175488889414
10.0 361.0 0.0281580469929397 0.7932203389830509 2.0 15.0 0.3007605640107514
10.0 382.0 0.05559735412493978 0.7932203389830507 36.0 1.0 0.39735325254783843
10.0 382.0 0.033728387698535 0.7932203389830507 36.0 1.0 0.4257667803880479
10.0 384.0 0.0561116791771075 0.7932203389830509 36.0 1.0 0.049004769840074784
10.0 384.0 0.061016949152542 0.7932203389830509 36.0 15.0 0.3948290429670914
10.0 364.0 0.04274413631498185 0.7932203389830507 2.0 15.0 0.03672310007521456
10.0 364.0 0.037747555002240925 0.7932203389830508 36.0 1.0 0.39710883071270425
10.0 386.0 0.03850107353085101 0.7932203389830508 36.0 15.0 0.05950635065611809
10.0 387.0 0.03774755500223799 0.7932203389830508 76.0 41.0 0.29086202403328454
10.0 388.0 0.055078226472110144 0.7932203389830508 76.0 41.0 0.25978959222770964
10.0 425.0 0.0440677966101665 0.793220338983051 36.0 1.0 0.4073614132304591
10.0 372.0 0.01976593862659389 0.7932203389830509 36.0 1.0 0.4360241019919045
10.0 420.0 0.04341101177920552 0.7932203389830509 76.0 41.0 0.2650590211401542
10.0 399.0 0.04958894521467034 0.7932203389830509 2.0 15.0 0.03541818035342996
10.0 400.0 0.042744136314977955 0.7932203389830509 76.0 41.0 0.2712751604859657
10.0 400.0 0.03539086952173375 0.7932203389830507 76.0 15.0 0.03868159703154637
10.0 381.0 0.06980088231177763 0.7932203389830507 2.0 41.0 0.27915934844746276
10.0 387.0 0.02491006518084387 0.7932203389830509 76.0 41.0 0.27915934844746276
10.0 395.0 0.04900621116881802 0.7932203389830508 76.0 41.0 0.27915934844746276
10.0 396.0 0.039965512279838404 0.7932203389830507 76.0 41.0 0.03868159703154637
10.0 396.0 0.061485956431246415 0.7932203389830507 2.0 15.0 0.27915934844746276
10.0 419.0 0.036193485600105775 0.7932203389830509 36.0 1.0 0.397380892497813
10.0 371.0 0.05611167917710948 0.7932203389830508 36.0 1.0 0.4276158956902939
10.0 400.0 0.03924012509420555 0.7932203389830508 36.0 1.0 0.03868159703154637
10.0 399.0 0.05507822647210813 0.7932203389830509 36.0 15.0 0.397380892497813
So, you can see that the accuracy is still constant. All parameters to be optimized also seem to be evaluated correctly. Is this behavior really OK, or do you have any other suggestions about what might have gone wrong?
Regards,
Chris
Hi Chris,
first of all:
What you posted here is not your model, it's your process. Models are the results of learning algorithms: trees, rule sets, and so on.
The most basic model is called a "default model". It always predicts the same class, namely the most frequent class in the training set, independently of the values of the current example.
All learning algorithms can produce models equivalent to this default model. For example, this is the case if a tree consists of only one node. Another example would be an SVM hyperplane lying far outside the data itself.
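To make this concrete, here is a minimal sketch in Python with scikit-learn and synthetic data (not RapidMiner, but the effect is the same): when the split constraints can never be satisfied, the tree stays a single root node and its accuracy freezes at the majority class share.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(590, 5))               # synthetic attributes
y = (rng.random(590) < 0.2).astype(int)     # roughly 80/20 class imbalance

# A leaf-size constraint larger than the data set makes every split
# illegal, so the tree collapses into a default model (root node only).
stump = DecisionTreeClassifier(min_samples_leaf=1000).fit(X, y)
print(stump.get_depth())                    # 0: no split was ever made
print(stump.score(X, y))                    # equals the majority class share
print(max(np.bincount(y)) / len(y))         # the same number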
The learning process, and hence the building of the models, is usually controlled by parameters. That's why you are optimizing these parameters: you want to optimize the learning performance.
BUT not every distinct parameter value leads to a different model; many values produce the same result. If the model changes once a parameter value exceeds 6, raising it further to 7, 8, 1000 or more does not necessarily change it again.
For example, the maximal depth of a tree does not matter if tree generation never reaches this limit. So 100, 100000, or 782379821412 will not result in a different tree than 15 if the maximal depth ever reached during learning is 15.
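Again as a small sketch (Python with scikit-learn, same caveat as above): every depth cap above the depth the tree grows to naturally yields exactly the same tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)

# Depth the tree reaches when its depth is not limited at all.
natural = DecisionTreeClassifier(random_state=0).fit(X, y).get_depth()
print("natural depth:", natural)

# Caps above the natural depth stop changing the model.
for cap in (2, 5, natural, natural + 10, 10000):
    tree = DecisionTreeClassifier(max_depth=cap, random_state=0).fit(X, y)
    print(cap, tree.get_depth(), tree.score(X, y))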
Hope this gives you some ideas.
Greetings,
Sebastian
Hi Sebastian,
thank you for your detailed answer. This was very helpful. I didn't know that the "maximal" parameters
are just upper bounds for the parameter values. My assumption was that RapidMiner always tries to
learn a model that actually uses the given values, i.e. that RM learns a model with exactly the
maximal (or minimal) parameter values specified.
So, parameter optimization requires some deeper knowledge of the model to be optimized, right? I mean,
when "arbitrary" parameter values are evaluated, it might happen, as in my case, that many combinations
don't make sense since they create the same model, so their evaluation is wasted time. Can you give me
any advice on how parameter optimization should be performed effectively? Does it make sense to use only
small values for "maximal" parameters, since they are more likely to lead to different results? Or was my
approach OK, so that I can infer from the results that the learned DecisionTree is not sensitive to my
learning data, i.e. it will mostly end up at the same accuracy?
Regards,
Chris