
Minority Classes in Classification

User: "TobiasMalbrecht"
New Altair Community Member
Updated by Jocelyn
New message posted in the sourceforge forum at http://sourceforge.net/forum/forum.php?thread_id=2092429&forum_id=390413:


Hi -

I'm a newbie on the list, so apologies if this has been dealt with before.

Is there a way to oversample minority classes (or undersample majority data) so that a dataset is balanced before using a learner?

Thanks,

- Mark
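The random oversampling Mark asks about can be sketched independently of RapidMiner. This is a minimal plain-Python illustration (the function name and signature are my own, not any tool's API): minority-class examples are duplicated at random until every class matches the size of the largest class.

```python
import random

def oversample_minority(examples, labels, seed=0):
    """Randomly duplicate examples of smaller classes until every
    class has as many examples as the largest class."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_label.values())
    balanced_x, balanced_y = [], []
    for y, xs in by_label.items():
        # Draw extra copies (with replacement) to reach the target size.
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            balanced_x.append(x)
            balanced_y.append(y)
    return balanced_x, balanced_y
```

Duplicating examples with replacement is the simplest balancing scheme; it changes nothing about the feature values, only the class proportions the learner sees.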

    User: "TobiasMalbrecht"
    New Altair Community Member
    OP
    Hi Mark,

    a sampling algorithm that generates a fixed label distribution through sampling has not yet been implemented in RapidMiner. However, we recently implemented an operator [tt]EqualLabelWeighting[/tt], which has roughly the same effect: it generates example weights and sets them so that all label values (classes) are equally weighted in the example set. Of course, the subsequent learner has to be capable of using example weights; otherwise the equal label weighting is ignored.

    Hope that helps!
    Regards,
    Tobias
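For readers outside RapidMiner, the weighting effect Tobias describes can be sketched in plain Python (the function name `equal_label_weights` is illustrative, not RapidMiner's API): each example is weighted inversely to its class frequency, so every class contributes the same total weight.

```python
from collections import Counter

def equal_label_weights(labels):
    """Return one weight per example such that each class's weights
    sum to the same total (n_examples / n_classes)."""
    counts = Counter(labels)
    n = len(labels)
    n_classes = len(counts)
    return [n / (n_classes * counts[y]) for y in labels]
```

As Tobias notes, such weights only help if the downstream learner actually consumes them; a learner that ignores example weights sees the original imbalanced distribution.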
    User: "keith"
    New Altair Community Member
    Hi,

    I was searching for an answer on how to oversample an underrepresented portion of the data, and came across this previous question on the same topic.

    I wanted to see if there have been any new features in RM 4.2 that enable oversampling. If not, is it possible to somehow use the WEKA function "weka.filters.supervised.instance.Resample", which appears to do it?

    Thanks,
    Keith
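The complementary approach, undersampling the majority class, can also be sketched in a few lines. This is a rough plain-Python illustration of the idea (not Weka's actual Resample implementation, whose behavior is controlled by options such as its bias toward a uniform class distribution): majority-class examples are randomly dropped until every class matches the smallest class.

```python
import random

def undersample_majority(examples, labels, seed=0):
    """Randomly discard examples of larger classes until every class
    has as many examples as the smallest class."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    target = min(len(xs) for xs in by_label.values())
    balanced_x, balanced_y = [], []
    for y, xs in by_label.items():
        # Sample without replacement down to the target size.
        for x in rng.sample(xs, target):
            balanced_x.append(x)
            balanced_y.append(y)
    return balanced_x, balanced_y
```

Undersampling keeps the dataset small but throws away majority-class information, which is why weighting or oversampling is often preferred when data is scarce.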
    User: "IngoRM"
    New Altair Community Member
    Hi Keith,

    besides the already mentioned EqualLabelWeighting, there are no new sampling operators for over- and undersampling, sorry. Since basically all learning schemes in RM support weighted examples, and methods like threshold variation (in the postprocessing group) and cost-sensitive learning are also supported, I don't actually miss those methods. But anyway: I will add them to our todo list. Of course, you are also free to extend RM with this functionality yourself.

    Cheers,
    Ingo
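The threshold variation Ingo mentions can be illustrated with a small sketch (names here are my own, not RapidMiner's operators): instead of resampling the data, the decision threshold on the minority class's predicted probability is lowered below the default 0.5, so that the rare class is predicted more readily.

```python
def predict_with_threshold(minority_probs, threshold):
    """Predict the minority class whenever its estimated probability
    exceeds `threshold`; a threshold below 0.5 compensates for the
    learner's bias toward the majority class."""
    return ["minority" if p >= threshold else "majority"
            for p in minority_probs]
```

This leaves the training data untouched and instead adjusts the model's output, which is why it pairs naturally with cost-sensitive evaluation.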