update model with xval
Hi everyone 
I'm currently trying to predict sentiment within text. For that I use k-NN, which takes a classified (labeled) dataset to create a model. Then I apply the model to unclassified data. When I validate the model with X-Validation I achieve an accuracy somewhere between 60 and 70%. Not so bad for now!
But how can I train (update) the model from the X-Validation mistakes?
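For reference, the workflow described above could look roughly like this outside of RapidMiner. This is only a minimal sketch, assuming scikit-learn, TF-IDF features, and a small hypothetical toy dataset; the actual process in the thread is built from RapidMiner operators.

```python
# Sketch (not the original RapidMiner process): k-NN sentiment classification
# validated with k-fold cross-validation, analogous to X-Validation + Performance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical labeled (classified) data; replace with your own dataset.
texts = ["great product", "love it", "fantastic experience", "pretty good",
         "terrible service", "worst ever", "awful quality", "really bad"]
labels = ["positive"] * 4 + ["negative"] * 4

# Text features fed into a k-NN learner.
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))

# Cross-validated accuracy; a real dataset would use more folds (e.g. cv=10).
scores = cross_val_score(model, texts, labels, cv=2, scoring="accuracy")
print("accuracy: %.2f (+/- %.2f)" % (scores.mean(), scores.std()))
```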

Let's say I usually train a model by providing a classified (labeled) dataset to a learner (k-NN). As a result I get a model which I can use to predict an unclassified dataset.
When I use X-Validation in combination with the Performance operator, I get the accuracy of my model as a result. In other words, an estimate of how well the model works on my classified data, a self-test, right?
To calculate the performance, the X-Validation and Performance operators presumably compare the labeled data with the predicted data to obtain the accuracy. Wouldn't the next step be to learn from (update the model with) the wrongly predicted data?
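To make the comparison concrete, here is a rough sketch of how one could surface exactly those wrongly predicted examples, again assuming scikit-learn and the same hypothetical toy data rather than the RapidMiner operators: out-of-fold predictions are compared against the labels, and the disagreements are the "xval mistakes".

```python
# Sketch: find the examples the cross-validated model gets wrong.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

# Hypothetical labeled data (same toy set as in the first sketch).
texts = ["great product", "love it", "fantastic experience", "pretty good",
         "terrible service", "worst ever", "awful quality", "really bad"]
labels = np.array(["positive"] * 4 + ["negative"] * 4)

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))

# Out-of-fold predictions: each text is predicted by a model that did not see it.
pred = cross_val_predict(model, texts, labels, cv=2)

# Rows where prediction and label disagree are the cross-validation mistakes.
for i in np.flatnonzero(pred != labels):
    print(f"misclassified: {texts[i]!r} (label={labels[i]}, predicted={pred[i]})")
```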
Hey Steffen,
That's exactly what I was looking for. Unfortunately it takes some time to create and apply the model, but the accuracy went up from 70% to 76%.
What would be the right approach to keep training the model? Do we have to classify a new training set and update the model, or do we have to correct randomly selected wrongly predicted sentences and pass them to the Update Model operator?
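For illustration, since k-NN is an instance-based learner, "updating" it essentially means adding newly corrected or newly labeled sentences to the stored training examples and refitting. A rough sketch of that idea, again assuming scikit-learn and hypothetical corrected examples, not the RapidMiner Update Model operator itself:

```python
# Sketch: "update" a k-NN model by extending the training set with corrected
# examples and retraining (k-NN simply memorizes the enlarged training set).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Original labeled training data (hypothetical).
train_texts = ["great product", "love it", "terrible service", "worst ever"]
train_labels = ["positive", "positive", "negative", "negative"]

# Hypothetical sentences the model predicted wrongly and a human re-labeled.
corrected_texts = ["not bad at all", "could be better"]
corrected_labels = ["positive", "negative"]

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))

# "Update" = refit on the enlarged, corrected training set.
model.fit(train_texts + corrected_texts, train_labels + corrected_labels)
print(model.predict(["really love this", "awful experience"]))
```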
Cheers Philippe
Ciao Sebastian