Minority Classes in Classification
TobiasMalbrecht
New message posted in the sourceforge forum at http://sourceforge.net/forum/forum.php?thread_id=2092429&forum_id=390413 :
Hi -
I'm a newbie on the list, so apologies if this has been dealt with before.
Is there a way to oversample minority classes (or undersample the majority class) so that a dataset is balanced before using a learner?
Thanks,
- Mark
Tagged: AI Studio, Classification
TobiasMalbrecht
Hi Mark,
a sampling algorithm that generates a fixed label distribution through sampling has not yet been implemented in RapidMiner. However, we recently implemented an operator [tt]EqualLabelWeighting[/tt] which has roughly the same effect: it generates example weights and sets them so that all label values (classes) are equally weighted in the example set. Of course, the subsequent learner has to be capable of using example weights; otherwise the equal label weighting is ignored.
Hope that helps!
Regards,
Tobias
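The weighting idea Tobias describes can be sketched in a few lines of Python (a minimal illustration of the effect, not RapidMiner code; the function name `equal_label_weights` is made up here): each example gets a weight inversely proportional to its class frequency, so every class carries the same total weight.

```python
from collections import Counter

def equal_label_weights(labels):
    """Give each example a weight inversely proportional to its class
    frequency, so every class carries the same total weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    # each class should contribute n / k total weight in the example set
    return [n / (k * counts[y]) for y in labels]

weights = equal_label_weights(["a", "a", "a", "b"])
# each "a" example gets weight 2/3; the single "b" example gets 2.0,
# so both classes sum to the same total weight of 2.0
```

A weight-aware learner then treats the rare class as seriously as the common one without any examples being duplicated or discarded.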
keith
Hi,
I was searching for an answer on how to oversample an underrepresented portion of the data, and came across this previous question on the same topic.
I wanted to see if there are any new features in RM 4.2 that enable oversampling. If not, is it possible to somehow use the WEKA function "weka.filters.supervised.instance.Resample", which appears to do it?
Thanks,
Keith
IngoRM
Hi Keith,
besides the already mentioned EqualLabelWeighting, there are no new sampling operators for over- and undersampling, sorry. Since basically all learning schemes in RM support weighted examples, and since methods like threshold variation (in the postprocessing group) and cost-sensitive learning are also supported, I don't really miss those methods. But anyway: I will add them to our todo list. Of course, you are also free to extend RM with this functionality yourself.
Cheers,
Ingo
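For readers who do want to balance the data itself rather than use weights, random oversampling is straightforward to do by hand. The sketch below (plain Python, assuming in-memory lists; the function name `random_oversample` is illustrative, not a RapidMiner or WEKA API) duplicates randomly chosen minority-class examples until every class matches the largest one.

```python
import random
from collections import defaultdict

def random_oversample(examples, labels, seed=0):
    """Duplicate randomly chosen minority-class examples until every
    class is as large as the biggest one."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(examples, labels):
        by_class[y].append(x)
    target = max(len(xs) for xs in by_class.values())
    out = []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        out.extend((x, y) for x in xs + extra)
    return out

balanced = random_oversample([1, 2, 3, 4], ["a", "a", "a", "b"])
# both classes now have 3 examples each
```

Undersampling is the mirror image: sample each class down to the size of the smallest one instead. Note that naive oversampling should be applied only to the training split, never before a train/test split, or duplicated examples leak into the test set.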