
"Text Mining - Basic Question"

User: "ssk1974"
New Altair Community Member
Hi,

I have 42 text files; each one is a chapter from a novel. I am trying to visualize the most frequently occurring words in each chapter. How should I approach this?

Also, what other interesting problems could I solve with this dataset?

Thank you for the time and help.

    Hello ssk,

    You would simply read the files using a Read Document operator and then process them. In the end it comes down to the Tokenize operator, which splits the text into words so their frequencies can be counted.

    My friend and colleague Marius pointed me to this tutorial for text mining: http://vancouverdata.blogspot.de/2010/11/text-analytics-with-rapidminer-loading.html. It should really help you.
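
    (For anyone who wants to prototype the same read-tokenize-count idea outside RapidMiner, here is a minimal Python sketch; the chapters/ folder name and the top-10 cutoff are assumptions, not something from the thread.)

```python
# Minimal sketch: read each chapter file, split it into lowercase word
# tokens, and count how often each word occurs. The "chapters/" folder
# name and the top-10 cutoff are assumptions for illustration only.
import re
from collections import Counter
from pathlib import Path

for chapter in sorted(Path("chapters").glob("*.txt")):
    text = chapter.read_text(encoding="utf-8", errors="ignore")
    tokens = re.findall(r"[a-z']+", text.lower())   # crude tokenizer
    counts = Counter(tokens)
    print(chapter.name, counts.most_common(10))     # ten most frequent words
```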

    If you have further questions, just post them here :)

    Best,

    Martin
    User: "ssk1974"
    New Altair Community Member
    OP
    Hi Martin,

    Thank you for the help.

    I have another question: I want to create a word cloud and show the most frequently used words in it. How can I do that? The list of words from my 4 sets of documents has 10,000 words in it.

    Thank you for the time and help.
    You mean a word cloud where the size of each word represents its frequency?

    If so, I am not aware of any built-in functionality. I guess it could be done with a JavaScript API.
    I did a quick Google search.

    I think it would be easy to use https://github.com/jasondavies/d3-cloud on RapidMiner Server. Is this what you meant?
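
    (As a rough alternative to wiring up d3-cloud, a frequency-sized cloud can also be drawn with the third-party Python wordcloud package; the frequencies below are placeholders, not data from this thread.)

```python
# Sketch of one way to render a frequency-sized word cloud without
# RapidMiner Server: the third-party "wordcloud" package
# (pip install wordcloud) draws an image from a {word: frequency} dict.
from wordcloud import WordCloud

frequencies = {"whale": 120, "ship": 85, "sea": 64, "captain": 40}  # placeholder data

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("chapter_wordcloud.png")  # writes the rendered cloud to disk
```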
    User: "ssk1974"
    New Altair Community Member
    OP
    Thanks Martin. I need to upload a file to tagcrowd.com; let me do that.
    I've never used that website. Can't you just save your words as a TSV and upload it? Or should it happen automatically?
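
    (If the site accepts an uploaded word list, one way to produce the TSV Martin mentions is a short Python snippet; the counts dict here is a placeholder standing in for the real word frequencies.)

```python
# Sketch: write the word list as a tab-separated file, assuming "counts"
# is a {word: frequency} dict like the one built from the chapter files.
import csv

counts = {"whale": 120, "ship": 85, "sea": 64}  # placeholder frequencies

with open("word_frequencies.tsv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(["word", "frequency"])
    for word, freq in sorted(counts.items(), key=lambda kv: -kv[1]):
        writer.writerow([word, freq])
```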