Why do I have different values of class recall for the same data sets?
I tried Auto Model on my data set, let's call it DS1. The results were quite good. When I used the same approach on the same data set, but with the data expressed as codes instead of whole words, the results were dramatically lower. In both cases, I set the attributes to the polynominal type. Could you explain to me why this happens and what I should change? Thank you!
Hi Sarah,
it just does not make sense to me. If I replace the words with numbers, treat them as polynominal attributes, and use the same algorithm, the results should be the same.
For example, I obtained different results even when I used the same data sheet: once with the German punctuation, and once after replacing the special characters with basic ones. The Auto Model results were different.
@Barborka
Auto Model uses split validation: 60% of the data goes to the training part and 40% to the test part. So the result depends on your data and on how that split turns out (the sketch after this post illustrates the effect).
I hope this helps
Sara
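A quick way to see the split effect outside RapidMiner is a small scikit-learn sketch (hypothetical synthetic data, not DS1): the same algorithm on the same data can give different per-class recall when only the random 60/40 split changes.

```python
# Hypothetical scikit-learn sketch (not RapidMiner itself): the same data and
# the same algorithm give different class recall when the 60/40 split changes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Synthetic, slightly imbalanced two-class data set (stands in for DS1).
X, y = make_classification(n_samples=300, n_features=10,
                           weights=[0.7, 0.3], random_state=0)

for seed in (1, 2, 3):
    # 60% training / 40% test, like Auto Model's split validation,
    # but drawn differently each time.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=seed)
    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    rec = recall_score(y_te, model.predict(X_te), average=None)
    print(f"split seed {seed}: per-class recall = {rec.round(2)}")
```

Each run trains the identical model type, yet the per-class recall values differ because the test part contains different examples every time.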
Hello
From my understanding, there are a few possible reasons:
1. It may be due to the cross-validation or the data split.
2. It can be caused by the data (an imbalanced data set with different numbers of examples per label); see the sketch after this post.
3. Different algorithms can change the result.
I hope this helps
Sara
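To illustrate point 2, here is a small scikit-learn sketch (assumed synthetic data, not the original DS1): with an imbalanced label distribution, a plain random 60/40 split can put a different mix of minority-class examples into the test part than a stratified split does, and the minority-class recall moves with it.

```python
# Sketch with assumed synthetic data: compare a plain random 60/40 split with
# a stratified one on an imbalanced two-class problem and look at per-class recall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Roughly 90% / 10% class distribution.
X, y = make_classification(n_samples=300, n_features=10,
                           weights=[0.9, 0.1], random_state=0)

for name, strat in (("plain", None), ("stratified", y)):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=7, stratify=strat)
    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    rec = recall_score(y_te, model.predict(X_te), average=None)
    print(f"{name:10s} split: test label counts = {np.bincount(y_te)}, "
          f"per-class recall = {rec.round(2)}")
```

The stratified split keeps the label proportions the same in the training and test parts, which is one way to make the recall of the rare class less dependent on how the split happens to fall.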