"Neural networks" with interpretability
ruser
While going through some literature on the use of neural networks for data mining, I understood that they lack interpretability.
That means we cannot interpret how, and which, input attributes have influenced the output attribute (for numeric prediction, for example).
With other algorithms we have a clear set of rules, trees, or formulas; neural networks seem to face this specific problem.
Is that correct?
I can see there is some research on neuro-fuzzy methods to solve that problem.
What I would like to know is RapidMiner's support for such operators. Can we use the Neural Networks operator and still have the capability of interpreting the results? How do we achieve that with RapidMiner?
Kindly elaborate with some details. Thanks!
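To illustrate the contrast the question draws, here is a minimal sketch (a hypothetical toy example, unrelated to any RapidMiner operator; all names and numbers are invented): a decision tree's logic can be read directly as rules, while a neural net, even a single-unit one, exposes only numeric weights.

```python
import math

# Hypothetical toy task: approve a loan from (income, debt).
# A decision tree's logic reads directly as rules:
def tree_predict(income, debt):
    if income > 50_000:          # Rule 1: high income ...
        return debt <= 40_000    # ... approved unless debt is very high
    return debt <= 10_000        # Rule 2: low income needs low debt

# A comparable neural unit is just numbers; the weights (invented
# here) carry no readable rule, and real nets with hidden layers
# make attributing the output to inputs even harder.
def nn_predict(income, debt, w=(0.00008, -0.00009, 1.2)):
    z = w[0] * income + w[1] * debt + w[2]     # one linear unit
    return 1.0 / (1.0 + math.exp(-z)) > 0.5    # sigmoid threshold
```

Both functions can implement the same decision surface; only the tree tells you *why* a case was approved.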
Find more posts tagged with
AI Studio
Deep Learning + Neural Nets
land
Hi,
thanks for the invitation to elaborate on some details, but in fact there aren't any: RapidMiner provides state-of-the-art neural nets, but none of the fuzzy kind. I personally doubt that this would solve the interpretability problem at all; it seems to me like painting the walls with waterproof paint because you forgot to build a roof on top of your house. Why not simply build a roof instead, i.e. apply something like an SVM or a linear regression with generated features? I don't want to start a religious war between fans of neural nets and followers of the SVM; it is simply my experience that a neural net has never performed better than one of the other algorithms.
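To make the "linear regression with generated features" suggestion concrete, here is a minimal sketch in plain Python (an illustration with invented data, not RapidMiner's implementation): we generate a squared feature and fit ordinary least squares via the normal equations, after which the coefficients are directly readable.

```python
def fit_two_features(xs, ys):
    """Least squares for y ≈ a*f1 + b*f2 with generated features
    f1 = x and f2 = x**2, solved via the 2x2 normal equations."""
    s11 = sum(x * x for x in xs)                 # Σ f1*f1
    s12 = sum(x ** 3 for x in xs)                # Σ f1*f2
    s22 = sum(x ** 4 for x in xs)                # Σ f2*f2
    t1 = sum(x * y for x, y in zip(xs, ys))      # Σ f1*y
    t2 = sum(x * x * y for x, y in zip(xs, ys))  # Σ f2*y
    det = s11 * s22 - s12 * s12                  # system determinant
    a = (t1 * s22 - t2 * s12) / det              # Cramer's rule
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# Synthetic target with known structure: y = 2*x + 0.5*x**2
xs = [1, 2, 3, 4, 5]
ys = [2 * x + 0.5 * x * x for x in xs]
a, b = fit_two_features(xs, ys)   # recovers a = 2.0, b = 0.5
```

The fitted `a` and `b` are the model: each coefficient states exactly how its feature influences the prediction, which is the interpretability that neural-net weight matrices do not give you.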
Greetings,
Sebastian