Combined SMOTE Operator
darkphoenix_isa
Hi there, I'm still new to RapidMiner and exploring it. I'm currently working on a project that involves an imbalanced dataset. According to some research papers, combining SMOTE with a selection algorithm can work well for imbalanced problems. I already found the SMOTE operator in RapidMiner, but I couldn't find selection algorithms like Tomek Links or ENN.
Are there RapidMiner operators for those?
Accepted answers
YYH
Thanks for sharing. Unfortunately, RapidMiner does not have SMOTE combined with Tomek Links or Edited Nearest Neighbours. You may want to integrate the imbalanced-learn library via the "Execute Python" operator.
https://docs.rapidminer.com/latest/studio/operators/utility/scripting/execute_python.html
All comments
darkphoenix_isa
Thank you very much for your response. I'll explore this solution.
MartinLiebig
Hi
@darkphoenix_isa
,
I am the author of the operator. Could you point me to some references showing the advantages? Maybe we can add it to the operator.
BR,
Martin
darkphoenix_isa
Dear Mr. Martin,
Thank you for your attention. I got the reference for my problem from this paper:
https://www.sciencedirect.com/science/article/pii/S0925231215015908
jacobcybulski
The main difference is that SMOTE oversamples the minority class, while Tomek Links undersamples the majority class. It would be great to have both.
MartinLiebig
@jacobcybulski
do you maybe know how this relates to Kennard-Stone Sampling?
BR,
Martin
jacobcybulski
@mschmitz
I am not an expert on this; however, my understanding is that the KS algorithm aims to find two representative samples of a data set, e.g. for training and testing, by finding close pairs of data points and allocating each point of a pair to a separate partition. TL, in contrast, finds close pairs consisting of one minority-class and one majority-class point, and then drops the majority-class points from those pairs. As a result, we get a sample that is both better balanced and better separated.
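The Tomek-link rule described above can be sketched in plain Python: two points form a Tomek link if they are mutual nearest neighbours with different labels, and the majority-class member of each such pair is dropped. This is an illustrative stand-alone sketch, not RapidMiner or imbalanced-learn code; the function names are hypothetical.

```python
from math import dist

def nearest(i, points):
    # Index of the nearest other point (Euclidean distance).
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: dist(points[i], points[j]))

def tomek_majority_indices(points, labels, majority):
    # Indices of majority-class points that sit in a Tomek link,
    # i.e. candidates for removal when undersampling.
    drop = set()
    for i in range(len(points)):
        j = nearest(i, points)
        # Mutual nearest neighbours with different labels form a Tomek link.
        if nearest(j, points) == i and labels[i] != labels[j]:
            drop.add(i if labels[i] == majority else j)
    return drop
```

Removing the returned indices from the majority class is what "drops the majority-class points from those pairs" means in practice; a real implementation would use a k-d tree or `sklearn.neighbors` instead of the quadratic scan shown here.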
MartinLiebig
This makes a lot of sense, thanks
@jacobcybulski