haddock wrote: Actually I'm not convinced that the deductive and inductive reasoning mesh so easily. RM is most definitely an induction tool, and as it says in the good book ( http://en.wikipedia.org/wiki/Inductive_reasoning ) "Inductive reasoning is deductively invalid", hence my dilemma!
The First Certainty Principle: C ~ 1/K; certainty is inversely proportional to knowledge. A person who really understands data and analysis will understand all the pitfalls and limitations, and hence be constantly caveating what they say. Somebody who is simple, straightforward, and 100% certain usually has no idea what they are talking about.
Back to the topic: As stated before, I would analyze the properties of the system to find out what has happened (as I still believe the classes have been mixed up internally). However, if we assume that the systems are black boxes, the problem is more complicated (and very fascinating).
I'm back in France now, so bonjour tout le monde (hello everyone)!
I can't help feeling that it is interesting that the same learning process, on the same attribute premises, produces between 30% and 65% accuracy, when 50% is the default expectation in binomial classification. After all, in a two-class problem a model that scores 35% becomes a 65% model once its predictions are flipped. Perhaps an optimiser should maximise the accuracy distance from 50%, and a model applier should have a negation parameter? A sketch of that idea follows below.
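Just to make the arithmetic concrete, here is a minimal sketch of what such a negation parameter might look like. It uses Python with scikit-learn rather than RM, and a toy dataset stands in for the original attribute set, so all the names here are my own assumptions, not the thread's actual setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy two-class data standing in for the original attribute set (assumption).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

learner = DecisionTreeClassifier(max_depth=2, random_state=0)

# Estimate the accuracy of the black-box learner by cross-validation.
scores = cross_val_score(learner, X, y, cv=10, scoring="accuracy")

# The proposed "negation parameter": flip predictions when the learner
# lands below the 50% default. Flipping turns accuracy a into 1 - a,
# so a 35% model becomes a 65% model.
negate = scores.mean() < 0.5

model = learner.fit(X, y)
predictions = model.predict(X)     # 0/1 labels
if negate:
    predictions = 1 - predictions  # the negation switch
```

An optimiser maximising |accuracy - 50%| then amounts to training as usual and letting this switch reclaim the "worse than chance" models.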
So: if your black-box learning algorithm is "stable", i.e. shows low variance across all results of, e.g., a cross-validation, the value of the negation parameter should always be the same, and so I would use this negation switch on my application set. If this is not true, i.e. the ratio of true/false votes for the parameter does not point clearly in one direction (95%, 99%, how much safety do you want?), I would not use such a switch.
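To sketch that stability test in the same hypothetical Python setup as above (the fold-wise vote and the 95% safety threshold are my own assumptions about how one might operationalise it):

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
learner = DecisionTreeClassifier(max_depth=2, random_state=0)

# Let every cross-validation fold vote on the negation parameter.
votes = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    fold_model = clone(learner).fit(X[train_idx], y[train_idx])
    fold_acc = fold_model.score(X[test_idx], y[test_idx])
    votes.append(fold_acc < 0.5)  # this fold says: negate

ratio = np.mean(votes)            # fraction of folds voting "negate"
safety = 0.95                     # how much safety do you want?

if ratio >= safety:
    negate = True                 # stable: flip on the application set
elif ratio <= 1 - safety:
    negate = False                # stable: leave predictions alone
else:
    negate = None                 # unstable learner: don't use the switch
```

With only ten folds, of course, a 95% threshold effectively demands a unanimous vote, which matches the spirit of only trusting the switch when the results all point the same way.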