Texture of Images neural network

DocMusher New Altair Community Member
edited November 5 in Community Q&A
Hello all,
We are analyzing medical images of chronic wounds. All images are color calibrated in the same color space, so we can compare them quantitatively (www.woundontology.com). One of our projects uses the MAZDA application (http://www.eletel.p.lodz.pl/mazda/) to analyze texture, which gives 3100 features per image. The only feature used in clinical practice is wound size, which can be measured as an ROI.

We would like to find a relationship between texture and wound size and, if possible, a predictive value of texture for wound size. The process starts with a wound of a given size (considered as 100%) and its 3100 texture features. Whenever a next image is taken of this wound, the size is measured again (say 70%) together with its new 3100 features. This can be repeated for different wounds. The ultimate goal would be to predict wound size from the texture analysis of previous images. How could this be developed using a neural network? Any other suggestions?
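To make the setup concrete, here is a minimal sketch in Python/NumPy (not part of the MAZDA or RapidMiner workflow; the function name and the assumed snapshot layout are illustrative only) of how consecutive snapshots of one wound could be paired into training examples, each combining the current image's texture features with the wound size measured on the next image:

```python
import numpy as np

def make_training_pairs(snapshots):
    """Turn an ordered list of snapshots of one wound into training examples.

    Each snapshot is assumed to be a dict with:
      "features": the 3100 texture features of that image (1-D array)
      "size":     the measured wound size (e.g. as % of the first image)

    Each example pairs the texture features of image t with the wound
    size measured on image t+1, which is what we want to predict.
    """
    X, y = [], []
    for current, following in zip(snapshots[:-1], snapshots[1:]):
        X.append(current["features"])
        y.append(following["size"])
    return np.array(X), np.array(y)

# Example with two wounds, three snapshots each (random placeholder features):
rng = np.random.default_rng(0)
wound_a = [{"features": rng.normal(size=3100), "size": s} for s in (100.0, 70.0, 55.0)]
wound_b = [{"features": rng.normal(size=3100), "size": s} for s in (100.0, 85.0, 40.0)]

X_parts, y_parts = zip(*(make_training_pairs(w) for w in (wound_a, wound_b)))
X, y = np.vstack(X_parts), np.concatenate(y_parts)
print(X.shape, y.shape)   # (4, 3100) (4,)
```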

Thanks

Sven Van Poucke
svanpoucke[at]gmail.com

Answers

  • land New Altair Community Member
    Hi Sven,
    I think the easiest approach would be to put the 3100 features (called attributes within RapidMiner) into one table, with the wound size of the following snapshot as the label. Each learner would then try to predict the next wound size from the 3100 attributes. So each snapshot would become one example to learn from.
    This simple setup is probably worth a try before developing more complex setups, such as treating the wound snapshots as a multivariate time series...
    Once you have put everything into the table, you can try out which learner suits your data best. You cannot decide beforehand whether a neural network will do the job best; you will have to compare learners using cross-validation (see the sketch below this reply for one way to do that).

    Greetings,
      Sebastian
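A minimal sketch of such a comparison, using Python/scikit-learn rather than RapidMiner (the choice of learners, their parameters, and the placeholder data are illustrative assumptions only):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# X: one row per snapshot with its 3100 texture features;
# y: wound size measured on the *next* snapshot of the same wound.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3100))        # placeholder data
y = rng.uniform(10, 100, size=60)      # placeholder labels

learners = {
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(50,),
                                                 max_iter=2000, random_state=0)),
    "random forest": RandomForestRegressor(random_state=0),
    "ridge regression": make_pipeline(StandardScaler(), Ridge()),
}

# 5-fold cross-validation, scored by mean absolute error in wound size.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in learners.items():
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.1f} +/- {scores.std():.1f}")
```

With repeated snapshots per wound, it would be safer to split the folds by wound (for example with scikit-learn's GroupKFold) so that images of the same wound never appear in both the training and the test data.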