"Error with Process Documents from Data when attempting text analysis"
Joanna_Arnold
New Altair Community Member
Hello,
I am very new to RapidMiner, so please forgive any basic ignorance on my part. I am trying to do some basic text processing on an Excel spreadsheet of tweets, but I cannot get the "Process Documents from Data" operator to work correctly.
I watched the video tutorial, but I am still having problems.
What I did:
1) Read Excel -> Nominal to Text, then clicked Run. (This seems to work: in the results I see my spreadsheet with the tweets.)
But when I add "Process Documents from Data", it shows a warning that the example set should contain at least one text attribute, and when I run the process, all I get is an empty page.
I have attached a screenshot of the process with the error message as well as my excel data.
Thank you in advance for any help!
Best Answer
-
I just tried this on your data and did not see any problems. Below is the XML of the process I built on your data. Read here how to use it: https://community.rapidminer.com/discussion/51369
<?xml version="1.0" encoding="UTF-8"?><process version="9.2.000">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="9.2.000" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="UTF-8"/>
    <process expanded="true">
      <operator activated="true" class="read_excel" compatibility="9.2.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="34">
        <parameter key="excel_file" value="C:\Users\IngoMierswa\Desktop\FoodForRM.xlsx"/>
        <parameter key="sheet_selection" value="sheet number"/>
        <parameter key="sheet_number" value="1"/>
        <parameter key="imported_cell_range" value="A1"/>
        <parameter key="encoding" value="UTF-8"/>
        <parameter key="first_row_as_names" value="true"/>
        <list key="annotations"/>
        <parameter key="date_format" value=""/>
        <parameter key="time_zone" value="SYSTEM"/>
        <parameter key="locale" value="English (United States)"/>
        <parameter key="read_all_values_as_polynominal" value="false"/>
        <list key="data_set_meta_data_information">
          <parameter key="0" value="Tweets.true.polynominal.attribute"/>
        </list>
        <parameter key="read_not_matching_values_as_missings" value="false"/>
        <parameter key="datamanagement" value="double_array"/>
        <parameter key="data_management" value="auto"/>
      </operator>
      <operator activated="true" class="nominal_to_text" compatibility="9.2.000" expanded="true" height="82" name="Nominal to Text" width="90" x="179" y="34">
        <parameter key="attribute_filter_type" value="all"/>
        <parameter key="attribute" value=""/>
        <parameter key="attributes" value=""/>
        <parameter key="use_except_expression" value="false"/>
        <parameter key="value_type" value="nominal"/>
        <parameter key="use_value_type_exception" value="false"/>
        <parameter key="except_value_type" value="file_path"/>
        <parameter key="block_type" value="single_value"/>
        <parameter key="use_block_type_exception" value="false"/>
        <parameter key="except_block_type" value="single_value"/>
        <parameter key="invert_selection" value="false"/>
        <parameter key="include_special_attributes" value="false"/>
      </operator>
      <operator activated="true" class="text:process_document_from_data" compatibility="8.1.000" expanded="true" height="82" name="Process Documents from Data" width="90" x="313" y="34">
        <parameter key="create_word_vector" value="true"/>
        <parameter key="vector_creation" value="TF-IDF"/>
        <parameter key="add_meta_information" value="true"/>
        <parameter key="keep_text" value="false"/>
        <parameter key="prune_method" value="none"/>
        <parameter key="prune_below_percent" value="3.0"/>
        <parameter key="prune_above_percent" value="30.0"/>
        <parameter key="prune_below_rank" value="0.05"/>
        <parameter key="prune_above_rank" value="0.95"/>
        <parameter key="datamanagement" value="double_sparse_array"/>
        <parameter key="data_management" value="auto"/>
        <parameter key="select_attributes_and_weights" value="false"/>
        <list key="specify_weights"/>
        <process expanded="true">
          <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize" width="90" x="45" y="34">
            <parameter key="mode" value="non letters"/>
            <parameter key="characters" value=".:"/>
            <parameter key="language" value="English"/>
            <parameter key="max_token_length" value="3"/>
          </operator>
          <connect from_port="document" to_op="Tokenize" to_port="document"/>
          <connect from_op="Tokenize" from_port="document" to_port="document 1"/>
          <portSpacing port="source_document" spacing="0"/>
          <portSpacing port="sink_document 1" spacing="0"/>
          <portSpacing port="sink_document 2" spacing="0"/>
        </process>
      </operator>
      <connect from_op="Read Excel" from_port="output" to_op="Nominal to Text" to_port="example set input"/>
      <connect from_op="Nominal to Text" from_port="example set output" to_op="Process Documents from Data" to_port="example set"/>
      <connect from_op="Process Documents from Data" from_port="example set" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process>
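As an aside, if you want a feel for what the "Process Documents from Data" operator computes when vector_creation is set to TF-IDF, here is a minimal pure-Python sketch. The tweet texts are made up for illustration, and this is only the textbook TF-IDF formula, not how RapidMiner implements it internally:

```python
import math
import re
from collections import Counter

# Toy stand-in for the "Tweets" column (hypothetical data).
tweets = ["I love pizza", "pizza and pasta", "salad is great"]

# Split on non-letter characters, roughly like the Tokenize
# operator's "non letters" mode (lowercased for simplicity).
docs = [re.findall(r"[a-z]+", t.lower()) for t in tweets]

n_docs = len(docs)
# Document frequency: in how many documents each term appears.
df = Counter(term for doc in docs for term in set(doc))

def tfidf(doc):
    """Return a {term: tf * idf} dict for one tokenized document."""
    tf = Counter(doc)
    return {term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()}

# One sparse word vector per tweet, analogous to the operator's output rows.
vectors = [tfidf(doc) for doc in docs]
```

Each row of the example set that the operator produces corresponds to one such vector, with one attribute per distinct token in the corpus.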
Please import this XML and check for differences against your process - I am sure you will spot the reason why it failed in your case. Hope this helps,
Ingo
Answers
-
-
Thank you so much! This process fixed the issue. I really appreciate it.