Classification accuracy (stacking)

User: "keesloutebaan"
New Altair Community Member
Hey there,

I am currently working on a polynominal (multi-class) classification project. The goal is to reach the highest possible accuracy.
I found that the 'Deep Learning' and 'Gradient Boosted Trees' operators work really well.
Now I want to find out whether stacking can improve the performance. However, every combination I have tried so far makes the performance drop.
Can someone tell me whether there are any important rules to take into account when stacking? When is it helpful, and what settings does it require?
Thanks a lot

    User: "BalazsBaranyRM"
    New Altair Community Member
    Accepted Answer
    Hi,

the idea behind ensemble methods like stacking is that they improve the performance of imperfect learners. But they can also produce more complex, overfitted models.

    Both GBT and to some extent Deep Learning are already complex ensemble models.

    Stacking could only improve upon them if they had some systematic bias or error source, if their errors were different from each other, and if the stacking model could identify the right base model for most of the cases where they disagree.

    If any of these assumptions is not true, as is likely in your case, stacking or another model combination won't improve the result.
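    To make the idea concrete, here is a minimal sketch of stacking two strong base learners, analogous to the Gradient Boosted Trees and Deep Learning operators you mention. This is an assumption-laden illustration in scikit-learn, not RapidMiner, and uses synthetic data; the point is that the meta-learner is trained on out-of-fold predictions of the base models, and the stacked score is not guaranteed to beat the best base model.

    ```python
    # Hedged sketch (sklearn used for illustration; your operators are RapidMiner's):
    # stack a gradient-boosted trees model and a small neural network,
    # then compare cross-validated accuracy of the base model vs. the stack.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    # Synthetic multi-class ("polynominal") data, stand-in for your dataset.
    X, y = make_classification(
        n_samples=300, n_classes=3, n_informative=6, random_state=0
    )

    gbt = GradientBoostingClassifier(random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)

    # The meta-learner (logistic regression) is fit on out-of-fold
    # predictions of the base models, which limits (but does not
    # eliminate) the overfitting risk described above.
    stack = StackingClassifier(
        estimators=[("gbt", gbt), ("mlp", mlp)],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=3,
    )

    gbt_acc = cross_val_score(gbt, X, y, cv=3).mean()
    stack_acc = cross_val_score(stack, X, y, cv=3).mean()
    print(f"GBT alone: {gbt_acc:.3f}  stacked: {stack_acc:.3f}")
    ```

    If the two base models make largely the same errors, `stack_acc` will typically sit at or below `gbt_acc`, which matches the behaviour you observed.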

    Regards,

    Balázs