Variable importance in deep learning and what to do with it?

User: "fstarsinic"
What does one learn from variable importance?
What might one change in the model based on what it shows?

If you see variables at the top that seem "very important" but that you know are not, does that make them candidates for attribute removal, weight reduction, or something else?

Example: I have a few categorical attributes that are hierarchical. If the top-level parent category shows high importance, does it really need to be there at all when the lower-level categories are the ones that actually tell the story? It seems to be telling me I can get rid of that feature/attribute, and that the model may be relying too much on the upper-level category to make predictions.

Yes, I know I should try removing it to see what happens, but in general, how should variable importance be interpreted?
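
One way to run the "remove it and see" check more systematically is to compare permutation importance with a drop-column retrain. The sketch below is a minimal, tool-agnostic Python example; the column names ("parent_category", "child_category") and the synthetic data are hypothetical stand-ins for the hierarchical attributes described above, not anything from the original model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
child = rng.integers(0, 8, n)        # fine-grained category (codes 0-7)
parent = child // 4                  # coarse parent derived from the child
noise = rng.normal(size=n)
y = (child >= 4).astype(int)         # label really depends on the child level

X = pd.DataFrame({"parent_category": parent,
                  "child_category": child,
                  "noise": noise})
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# 1) Permutation importance: shuffle one column at a time and measure
#    how much held-out accuracy drops.
pi = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(X.columns, pi.importances_mean):
    print(f"{name:16s} permutation importance: {imp:.3f}")

# 2) Drop-column check: retrain without the suspect attribute and see
#    whether held-out performance actually changes.
full_acc = accuracy_score(y_te, model.predict(X_te))
reduced = GradientBoostingClassifier(random_state=0).fit(
    X_tr.drop(columns=["parent_category"]), y_tr)
reduced_acc = accuracy_score(
    y_te, reduced.predict(X_te.drop(columns=["parent_category"])))
print(f"accuracy with parent_category:    {full_acc:.3f}")
print(f"accuracy without parent_category: {reduced_acc:.3f}")
```

If dropping the parent category barely moves the held-out score (as it should here, since the child column already carries the signal), the high importance score mostly reflects redundancy with the lower-level attributes rather than information the model would lose.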


