
Compare predicted results from deep learning to actual values in the validation set

User: "bsegal"
New Altair Community Member
Updated by Jocelyn

I am a beginner, so I apologize in advance if this is obvious, but the folks on the online chat suggested I post here!

I am trying to train a deep neural network to make a binary prediction ("hard" vs. "easy") from a set of real-valued parameters and a couple of nominal parameters. I read the labelled training set in from Excel and used a Set Role block to mark the "answer" attribute, called "class", as the label. Then I passed the data to the Deep Learning block, took the trained model, and fed it to an Apply Model block along with an unlabelled validation set as input. I wired both outputs to the results on the far right. What I get is the assigned predictions in a new column ("Prediction(class)", where "class" was the label).

What I need to do now is see how well the model did by comparing the actual values to the predictions. Because the validation set is unlabelled, the actual values are not present in that Excel file. I do have them, of course, in the original data, but I had removed them to make the validation set unlabelled. So basically, I want to evaluate the performance of the prediction.

My wiring and output data are attached.
Thanks so much!
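For readers who want to do the comparison itself outside RapidMiner: below is a minimal Python sketch. It assumes the Apply Model output has been exported to a CSV and that the withheld labels are still in the original Excel file, in the same row order. The "class" and "Prediction(class)" column names come from the description above; the file names are hypothetical.

```python
# Minimal sketch, not the thread's actual setup: file names are assumptions,
# and rows must be in the same order in both files.
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

# The "class" labels that were removed to make the validation set unlabelled
actual = pd.read_excel("original_data.xlsx")["class"]

# The Apply Model output, exported to CSV from the results view
predicted = pd.read_csv("predictions.csv")["Prediction(class)"]

print("Accuracy:", accuracy_score(actual, predicted))
print(confusion_matrix(actual, predicted, labels=["hard", "easy"]))
```

Inside RapidMiner itself, the usual route is to keep the label on the validation set and wire the Apply Model output into a Performance operator, which computes the same counts directly.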

User: "bsegal"
New Altair Community Member
OP
Accepted Answer

OK thanks, I will run these for now. We do have a bunch more data, though it isn't "enriched" in difficult (vs. easy) cases like the original sets, which were derived after the fact to yield an exact 50/50 split. The new data set is prospective and has only ~10% difficult cases, but it does have several hundred rows and is growing. I'll likely be back for help with the DL!
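One caution worth a sketch here: at ~10% difficult cases, plain accuracy becomes misleading, since always predicting "easy" already scores about 90%. Per-class metrics are more informative. This assumes, hypothetically, that the predictions and actual labels for the prospective set have been exported together to a CSV; the file and column names are placeholders.

```python
# Sketch: per-class metrics for an imbalanced (~10% "hard") data set.
# File and column names are hypothetical.
import pandas as pd
from sklearn.metrics import classification_report, balanced_accuracy_score

df = pd.read_csv("prospective_predictions.csv")
actual = df["class"]
predicted = df["Prediction(class)"]

# Precision and recall per class show how the rare "hard" cases fare,
# which overall accuracy hides at this level of imbalance.
print(classification_report(actual, predicted, labels=["hard", "easy"]))
print("Balanced accuracy:", balanced_accuracy_score(actual, predicted))
```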