Stacking: probabilities instead of label?
spitfire_ch
Hi,
I was experimenting with stacking and noticed that the only output the base learners provide to the "stacking model learner" is the final label. Don't most base learners contain more information than just the final label? More specifically, couldn't one pass probabilities instead of the final label?
E.g., if the chosen leaf in a decision tree contains 3 positive and 2 negative cases, pass 3/5 instead of the label P. That way, each "guess" by the base models would automatically be weighted. If model 1 is sure about a result while the others are not (and predict a different outcome), then the prediction of model 1 would be favored.
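To make the idea concrete, here is a minimal sketch in Python with scikit-learn (just an illustration from my side, not the AI Studio operator): StackingClassifier with stack_method="predict_proba" passes the base models' class probabilities to the meta-learner instead of the hard labels.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("nb", GaussianNB()),
]

# stack_method="predict_proba" feeds the leaf fractions (e.g. 3/5 instead of "P")
# to the meta-learner, so a confident base model implicitly gets more weight.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
)
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))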
Best regards
Hanspeter
Accepted answers
IngoRM
Hi,
I fully agree that passing the confidences in addition to, or even instead of, the predictions could definitely improve the quality of the complete model. I suppose the original paper only passed the predictions and we probably stuck to that description. In order not to break compatibility and to allow those different options, I would suggest adding a new parameter which lets you choose between "predictions only", "confidences only", and "predictions and confidences".
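Just to sketch how those three options could look (the parameter values and the helper below are only illustrative, not the actual implementation): the meta-learner's input is built from out-of-fold base-model outputs, either as hard labels, as class probabilities, or as both side by side.

import numpy as np
from sklearn.model_selection import cross_val_predict

def make_meta_features(base_learners, X, y, mode="predictions and confidences"):
    # Build the meta-learner's input from out-of-fold base-model outputs.
    blocks = []
    for model in base_learners:
        if mode in ("predictions only", "predictions and confidences"):
            # hard labels, as stacking passes them today
            preds = cross_val_predict(model, X, y, cv=5)
            blocks.append(preds.reshape(-1, 1))
        if mode in ("confidences only", "predictions and confidences"):
            # class probabilities, one column per class
            probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")
            blocks.append(probs)
    return np.hstack(blocks)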
Thanks for sending this in. Cheers,
Ingo
spitfire_ch
Hi Ingo,
thanks for your reply. Making this optional totally makes sense. That would also make it possible to investigate directly whether predictions vs. confidences make a difference, and to do some more tweaking of the model during development.
Cheers,
Hanspeter