
Performance evaluation in the recommender extension

User: "bernardo_pagnon"
New Altair Community Member
Hello,

I am using the recommender extension and there is one thing I don't get. In item recommendation, the operator produces a ranked list of unseen movies for each user (unlike rating prediction, which outputs a numeric rating). In that case, how can one evaluate the quality of the estimator? Does it make sense to use the split operator, for instance? There is no label and no benchmark!

Best,
Bernardo 


    User: "Telcontar120"
    New Altair Community Member
    Accepted Answer
    Right, this is inherent to unsupervised machine learning: there is no pre-defined performance metric. You could run some diagnostics, such as looking at the variety of top-rated movies across users, but strictly speaking that is not a performance metric. Another option is to wait for users to watch and rate the recommended movies, and then compare those ratings with the rankings originally generated. But that will take some time, since you first have to collect the relevant data.
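
    If you have some historical interactions to work with, you can also approximate this offline: hide a portion of each user's known items, regenerate the top-N list from the remainder, and count how many hidden items come back. Here is a minimal sketch of that idea in Python (the `interactions` dict and the `recommend` function are placeholders for your own pipeline, not operators from the extension):

    ```python
    import random

    def precision_at_k(interactions, recommend, k=10, holdout_frac=0.2, seed=42):
        """For each user, hide a fraction of known items, then check how many
        of the hidden items reappear in the top-k recommendations.

        interactions: {user: iterable of items the user interacted with} (placeholder)
        recommend: function (user, visible_items, k) -> list of k items (placeholder)
        """
        rng = random.Random(seed)
        scores = []
        for user, items in interactions.items():
            items = list(items)
            if len(items) < 2:
                continue  # need at least one item to keep and one to hide
            rng.shuffle(items)
            n_hidden = max(1, int(len(items) * holdout_frac))
            hidden = set(items[:n_hidden])
            visible = set(items[n_hidden:])
            top_k = recommend(user, visible, k)  # rank items using only 'visible'
            hits = len(set(top_k) & hidden)
            scores.append(hits / min(k, len(hidden)))
        return sum(scores) / len(scores) if scores else 0.0
    ```

    This is essentially what a split-based validation would mean for ranking: the "label" is whether a held-out item reappears near the top of the generated list.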
    User: "bernardo_pagnon"
    New Altair Community Member
    OP
    Hello Brian,

    Thank you for your reply. I agree with everything you said; validating an unsupervised ML method is usually tricky. I am doing exactly what you suggested: looking at the different top-rated movies and trying to gain some insight.
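
    In case it is useful to anyone else, that variety check is easy to script outside RapidMiner once the top-N lists are exported. A minimal sketch in Python (`top_n_lists` is a placeholder for the exported results, keyed by user):

    ```python
    from collections import Counter

    def coverage_and_concentration(top_n_lists, catalog_size):
        """top_n_lists: {user: [recommended items]} (placeholder for exported results).
        Returns the share of the catalog ever recommended, plus the single most
        frequently recommended item and the fraction of users who receive it."""
        counts = Counter(item for items in top_n_lists.values() for item in items)
        coverage = len(counts) / catalog_size
        top_item, top_freq = counts.most_common(1)[0]
        concentration = top_freq / len(top_n_lists)
        return coverage, top_item, concentration
    ```

    Low coverage or high concentration on one item is a quick hint that the recommender is pushing the same popular movies to everyone.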

    Best,
    Bernardo