I have 100K text records, each with a problem description. I want to achieve two things:
1) Get a count of "similar"-looking problem descriptions (how can I do this?).
2) The main roadblock is that the process takes forever to run.
Steps:
Data import --> Select Attributes --> Process Documents to Data (tokenize, stop words, n-grams) --> K-Means/DBSCAN
How can I optimize this to run faster?
I can make Process Documents to Data run faster, but the clustering alone takes more than a day.
I am using the Community version 7.5. Please suggest how I can decrease the run time. Also, would the Enterprise version solve the problem?