Will the same set of variables perform best for all classifying methods?

theWay New Altair Community Member
edited November 2024 in Community Q&A
Suppose that out of a large set S of attributes describing the target variable M, there exists a subset Z of S that optimizes performance for, say, a decision tree model. Does that mean Z will also optimize performance for other techniques, such as k-NN, Bayes, or SVM?

Answers

  • haddock New Altair Community Member
    Hi there,

    The short answer is no.

    Here's a longer one. Classifiers have different attribute footprints: some accept numerical attributes, while others can only handle binominal labels, and so on. The common ground is rather small, and for good reason: from a toolkit point of view you need only one tool for each job, so functionality overlap gains little. That means your attribute subset Z for decision trees might not even work with other operators; for example, a polynominal label works brilliantly with ID3, but cannot be fed directly into SVM validation.
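    And even when the attribute types are compatible, different learners weight and combine attributes differently, so a subset tuned for one model need not be optimal for another. As a minimal sketch of this outside RapidMiner (assuming scikit-learn is available; the dataset and the choice of five features below are illustrative assumptions, not anything from this thread), running the same greedy forward selection wrapped around two different classifiers will usually pick different subsets:

    ```python
    # Illustrative sketch: wrapper-style feature selection is tied to the model
    # it wraps. Dataset and n_features_to_select=5 are arbitrary assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                        ("k-NN", KNeighborsClassifier())]:
        # Greedy forward selection of 5 attributes, scored by 5-fold cross-validation.
        sfs = SequentialFeatureSelector(model, n_features_to_select=5, cv=5)
        sfs.fit(X, y)
        print(name, "selected attribute indices:", sfs.get_support(indices=True))
    ```

    The two printed index sets typically overlap only partially, which is exactly why wrapper-style selection is normally rerun per learner rather than reused across them.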

    Best wishes

    H