Performance evaluation in the recommender extension
Hello,
I am using the recommender extension and there is one thing I don't get. In item recommendation, the operator gives a ranking of unseen movies for a user (unlike rating prediction, which predicts a numeric rating). In this case, how can one evaluate the quality of the estimator? Does it make sense to use the split operator, for instance? There is no label, no benchmark!
Best,
Bernardo
Right, this is inherent to unsupervised machine learning: there is no pre-defined performance metric. You could run some diagnostics, such as looking at the variety of top-ranked movies across users, but strictly speaking that is not a performance metric. Another option is to wait for users to actually watch and rate the recommended movies, and then compare those ratings with the rankings originally generated. But collecting that data will take some time.
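To make that second suggestion concrete, here is a minimal plain-Python sketch (not a RapidMiner process) of scoring the original top-N list against ratings collected later. The movie names, the ratings, and the relevance threshold of 4 are all invented for illustration:

```python
def precision_at_k(ranked_items, relevant_items, k):
    """Fraction of the top-k recommended items the user actually liked."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    return hits / k

# Ranking originally produced for one user (hypothetical).
recommended = ["Movie A", "Movie B", "Movie C", "Movie D", "Movie E"]

# Ratings the user gave after watching (hypothetical, 1-5 scale).
later_ratings = {"Movie B": 5, "Movie D": 2, "Movie E": 4}

# Treat ratings >= 4 as "relevant" -- an assumed threshold.
relevant = {movie for movie, r in later_ratings.items() if r >= 4}

print(precision_at_k(recommended, relevant, k=3))  # 1 hit in top 3 -> 0.333...
```

Precision@k is just one choice here; rank-aware metrics such as NDCG weight hits near the top of the list more heavily, which may fit a recommendation use case better.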