deep learning
Answers
-
@varunm1 @sgenzer
It works, but cross-validation doesn't show accuracy or kappa.
Please help me to solve it.
Thank you
-
Please look at the process.
-
@varunm1
About "getting a label with a single sample": this is possible when predicting cancer, because each cancer cell is unique among cells.
-
@varunm1 @sgenzer @mschmitz
Please look at the screenshot; it doesn't calculate kappa or accuracy.
Please help me to solve that.
-
Hi @varunm1,
I will try it now.
-
@varunm1
Yes, it works, thank you.
I did all the points that you told me about the data, but the results are funny: some of the algorithms' results changed for the better and some did not.
Anyway, thank you very much, my kind friend.
-
One last thing: cross-validation results can be lower (worse) than your random-split results. The reason is that it trains and tests on all of the data and averages the performance, so if there is some bad data, that will reduce the score. For more details, read about cross-validation and you will get to know it. But cross-validation will give you reliable results.
Thanks
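A minimal sketch of this point, using scikit-learn in place of RapidMiner (the toy dataset and decision-tree learner are illustrative assumptions, not anything used in the thread):
```python
# Sketch: one random split vs. 10-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # toy stand-in for the poster's data

# One random 80/20 split: the score depends on which rows land in the test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
split_acc = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold cross-validation: every row is tested exactly once and the reported
# score is the average over all folds -- often lower than a lucky single split,
# but more reliable.
cv_scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)

print(f"single split accuracy: {split_acc:.3f}")
print(f"10-fold CV accuracy:   {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```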
-
Hi @mbs, I'd recommend this from the Academy: https://academy.rapidminer.com/learn/video/validating-a-model
You could also try searching the community... there is a rather well-written article on this topic from March 4th.
-
Search for answers in this community or the Academy. Finally, Google is your best friend. Try searching until you find something you can understand, because we cannot know which resource is best for you. Read different things and you will learn easily. As our time is limited, we recommend you try hard first and then ask us questions if you have any. This is the way we learn as well.
-
@varunm1
According to this link:
https://community.rapidminer.com/discussion/54621/cross-validation-and-its-outputs-in-rm-studio
because of the 2000 rows in my Excel file (large data), split data works better for me than cross-validation.
During testing I found that if I combine 3 or 4 algorithms and use cross-validation, the result is better than with split data.
Regards,
mbs
-
Yep, you can select whatever works in your case. If you ask me, 2000 samples is a normal amount of data; I cross-validate data with 100,000 samples to get confident results. Again, this might be subjective. Getting good performance and getting reliable performance are two different things. Try different things and see what is good for your thesis.
-
@varunm1
Thank you for all the points that you mentioned.
With your perfect suggestions my thesis doesn't have any problems, and I'm sure that I will pass it easily.
Regards,
mbs
-
> For reason 2, you need to start from smaller networks and then build more complex networks based on the data, testing performance as you go. There is no use in building networks with more hidden layers when a simple neural network can achieve your task.
> For reason 3, use AUC values as the performance metric instead of accuracy.
> Reason 2: Complex algorithms sometimes overfit (it depends on the data). A deep learning algorithm is one that has more hidden layers. In my statement, I am saying to train and test a model with a single hidden layer first, and then note the performance parameters like accuracy, kappa, etc. Then you can build another model with more hidden layers and compare the performances. If your simple model gives the best performance, there is no need to use a complex model with multiple hidden layers.
@varunm1
These are your suggestions, but I couldn't understand them, and they are important. So please make an example with them and share your XML.
Thank you very much
mbs
-
Sorry @mbs, I am swamped working on a paper, but I can explain it to you. I know you got confused by "smaller network" and "complex network". A neural network can have multiple layers, so a simple neural network, in my view, is one with a single hidden layer and a few neurons in it. If you increase the number of hidden layers, with different numbers of neurons and different activation functions, the network becomes more complex. You can build models with different numbers of layers using the neural network operator, the deep learning operator, or the deep learning extension in RapidMiner. I recommend you get a general understanding of neural networks and deep learning (deep neural networks) and then try the relevant operators in RapidMiner.
If you have any specific question or need clarification I can help in that case, but building models takes time. I recommend you watch videos and tutorials from RapidMiner or any other source that helps you understand easily.
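A minimal sketch of the "simple network first, then a deeper one" idea, with scikit-learn's MLPClassifier standing in for RapidMiner's neural network operators and AUC as the metric; the dataset and layer sizes are illustrative assumptions, not from the thread:
```python
# Sketch: cross-validated AUC for a simple vs. a deeper neural network.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # toy stand-in for the poster's data

# "Simple" network: one hidden layer with a few neurons.
simple = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
# "Complex" network: more hidden layers and more neurons per layer.
deeper = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=2000, random_state=0))

# AUC rather than accuracy as the performance metric ("reason 3" above).
for name, model in [("1 hidden layer ", simple), ("3 hidden layers", deeper)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
# If the simple model's AUC is as good as the deeper one's, there is
# no need for the extra hidden layers.
```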
-
Thank you again.
-
Hi @varunm1,
Following your previous help, please tell me: how can I use more than one algorithm, combine them, and then use cross-validation, without using a group model?
According to the points that @varunm1 made, if we have data with a label we don't need to separate the dataset into training and testing ourselves. Also, RM with cross-validation is able to separate it automatically into train and test parts, and for the testing part it will not use the label the way the training part does.
Are these points correct?
Thank you
-
Hello @mbs,
Which models are you trying to combine? I am not sure there is a way to combine models without the group model operator.
Yes, the cross-validation operator will divide your data into multiple subsets that are used for training and testing the algorithms. And yes, testing is done without labels, as the trained model tries to predict the output for each given sample.
Thanks
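A minimal sketch of what such a cross-validation operator does internally, with scikit-learn's KFold standing in for RapidMiner (the dataset and k-NN learner are illustrative assumptions): the data is split into folds, a model is trained on the remaining folds, and the held-out fold is predicted without its labels.
```python
# Sketch: what a cross-validation operator does under the hood.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # toy stand-in for the poster's data

for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    # Train on 4 of the 5 subsets...
    model = KNeighborsClassifier().fit(X[train_idx], y[train_idx])
    # ...then predict the held-out subset from its attributes only; the
    # labels y[test_idx] are used afterwards purely to score the predictions.
    preds = model.predict(X[test_idx])
    acc = (preds == y[test_idx]).mean()
    print(f"fold {fold}: test size = {len(test_idx)}, accuracy = {acc:.3f}")
```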
-
@varunm1
Thank you for your great answer again.
The algorithms are:
1. deep learning
2. j48
3. random forest
4. knn
5. gradient boosted tree
6. neural network
7. svm
Thank you for the time that you spend on my questions.
-
@mbs
Are you trying to combine all of these models into a single model, or are you trying to get the cross-validation performance of each model separately?
I have never tried combining this many models into a single model. You can try using group models, but I am not sure how that works.
-
@varunm1
Their results are perfect: their accuracy is around 99.5%. This is "ensemble learning".
Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance (bagging), decrease bias (boosting), or improve predictions (stacking).
Please look at this link:
https://en.wikipedia.org/wiki/Ensemble_learning
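A minimal sketch of stacking, using scikit-learn equivalents of a few of the learners listed above (random forest, gradient boosted trees, k-NN, SVM); RapidMiner itself would do this with its own operators, so everything below is an illustrative assumption:
```python
# Sketch: stacking several of the listed learners into one model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # toy stand-in for the poster's data

base_learners = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("gbt", GradientBoostingClassifier(random_state=0)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]
# Stacking: a meta-learner combines the base models' predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))

scores = cross_val_score(stack, X, y, cv=5)
print(f"stacked ensemble CV accuracy: {scores.mean():.3f}")
```
If both the cross-validated score and a separate hold-out score stay high, a figure like 99.5% becomes more believable, which is exactly the caution raised in the next reply.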
-
Yep, ensemble models work, but you should be careful when analyzing very high performance. For this, you need to set aside some data for testing after the model has been trained and tested using cross-validation. If the performance on this held-out dataset is also good, then your model might be good.
PS: Cross-validation reduces overfitting, but complex models tend to overfit even in cross-validation, so we should be careful when analyzing very good results.
-
> Do you mean that I have to separate the dataset into training and testing?
-
Sorry, don't get confused. What I am describing is a validation process we do when we observe high performances, like 99 percent accuracy. We split the dataset in a 0.8 to 0.2 ratio and cross-validate the 0.8 portion of the dataset, and then we connect the model output of the cross-validation operator to test the 0.2 portion. Now we have a performance from cross-validation and from the hold-out (0.2) dataset as well.
If you think this is confusing, you can go with your current results.
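A minimal sketch of this hold-out check in scikit-learn (the dataset and random-forest learner are illustrative assumptions): cross-validate the 0.8 portion, then score the untouched 0.2 portion once.
```python
# Sketch: cross-validate 80% of the data, then test once on the untouched 20%.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)  # toy stand-in for the poster's data

# Set aside 20% that cross-validation never sees.
X_cv, X_hold, y_cv, y_hold = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=0)
cv_acc = cross_val_score(model, X_cv, y_cv, cv=10).mean()

# Refit on the full 80% and score the hold-out set exactly once.
hold_acc = model.fit(X_cv, y_cv).score(X_hold, y_hold)

print(f"cross-validation accuracy (0.8 portion): {cv_acc:.3f}")
print(f"hold-out accuracy (0.2 portion):         {hold_acc:.3f}")
```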