Deep Learning Extension problem with modified Word2Vec classification

mmarag (New Altair Community Member)
edited November 5 in Community Q&A

Hello, and congratulations on the good work implementing DeepLearning4j in RM.

I have tried to slightly change the ready-made process "Classification of IMDB reviews using Word2Vec" to classify BBC RSS feeds instead.

 

I have this error:

  Exception: java.lang.NullPointerException
  Message: null
  Stack trace:
    com.rapidminer.example.Example.getNominalValue(Example.java:97)
    com.rapidminer.extension.deeplearning.tools.LabeledTextProvider.nextSentence(LabeledTextProvider.java:45)
    org.deeplearning4j.iterator.CnnSentenceDataSetIterator.preLoadTokens(CnnSentenceDataSetIterator.java:211)
    org.deeplearning4j.iterator.CnnSentenceDataSetIterator.hasNext(CnnSentenceDataSetIterator.java:201)
    com.rapidminer.extension.deeplearning.ioobjects.TensorIOObject.<init>(TensorIOObject.java:62)
    com.rapidminer.extension.deeplearning.operators.WordEmbeddingOperator.doWork(WordEmbeddingOperator.java:118)
    com.rapidminer.operator.Operator.execute(Operator.java:1025)
    com.rapidminer.operator.execution.SimpleUnitExecutor.execute(SimpleUnitExecutor.java:77)
    com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:812)
    com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:807)
    java.security.AccessController.doPrivileged(Native Method)
    com.rapidminer.operator.ExecutionUnit.execute(ExecutionUnit.java:807)
    com.rapidminer.operator.OperatorChain.doWork(OperatorChain.java:428)
    com.rapidminer.operator.Operator.execute(Operator.java:1025)
    com.rapidminer.Process.execute(Process.java:1322)
    com.rapidminer.Process.run(Process.java:1297)
    com.rapidminer.Process.run(Process.java:1183)
    com.rapidminer.Process.run(Process.java:1136)
    com.rapidminer.Process.run(Process.java:1131)
    com.rapidminer.Process.run(Process.java:1121)
    com.rapidminer.gui.ProcessThread.run(ProcessThread.java:65)

My process XML looks like this:

 

<?xml version="1.0" encoding="UTF-8"?><process version="9.0.002">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="9.0.002" expanded="true" name="Process" origin="GENERATED_SAMPLE">
    <process expanded="true">
      <operator activated="true" class="open_file" compatibility="9.0.002" expanded="true" height="68" name="Open File" origin="GENERATED_SAMPLE" width="90" x="45" y="748">
        <parameter key="filename" value="C:\Users\mmara\Downloads\GoogleNews-vectors-negative300.bin.gz"/>
      </operator>
      <operator activated="true" class="multiply" compatibility="9.0.002" expanded="true" height="103" name="Multiply" origin="GENERATED_SAMPLE" width="90" x="179" y="646"/>
      <operator activated="true" class="web:read_rss" compatibility="7.3.000" expanded="true" height="68" name="Read RSS Feed" width="90" x="45" y="34">
        <parameter key="url" value="http://feeds.bbci.co.uk/news/business/rss.xml"/>
      </operator>
      <operator activated="true" class="select_attributes" compatibility="9.0.002" expanded="true" height="82" name="Select Attributes" width="90" x="45" y="136">
        <parameter key="attribute_filter_type" value="single"/>
        <parameter key="attribute" value="Content"/>
      </operator>
      <operator activated="true" class="generate_attributes" compatibility="9.0.002" expanded="true" height="82" name="Generate Attributes" width="90" x="179" y="34">
        <list key="function_descriptions">
          <parameter key="class" value="&quot;business&quot;"/>
        </list>
      </operator>
      <operator activated="true" class="set_role" compatibility="9.0.002" expanded="true" height="82" name="Set Role" width="90" x="179" y="136">
        <parameter key="attribute_name" value="class"/>
        <parameter key="target_role" value="label"/>
        <list key="set_additional_roles"/>
      </operator>
      <operator activated="true" class="web:read_rss" compatibility="7.3.000" expanded="true" height="68" name="Read RSS Feed (2)" width="90" x="45" y="340">
        <parameter key="url" value="http://feeds.bbci.co.uk/news/technology/rss.xml"/>
      </operator>
      <operator activated="true" class="select_attributes" compatibility="9.0.002" expanded="true" height="82" name="Select Attributes (2)" width="90" x="45" y="442">
        <parameter key="attribute_filter_type" value="single"/>
        <parameter key="attribute" value="Content"/>
      </operator>
      <operator activated="true" class="generate_attributes" compatibility="9.0.002" expanded="true" height="82" name="Generate Attributes (2)" width="90" x="246" y="340">
        <list key="function_descriptions">
          <parameter key="class" value="&quot;technology&quot;"/>
        </list>
      </operator>
      <operator activated="true" class="set_role" compatibility="9.0.002" expanded="true" height="82" name="Set Role (2)" width="90" x="179" y="493">
        <parameter key="attribute_name" value="class"/>
        <parameter key="target_role" value="label"/>
        <list key="set_additional_roles"/>
      </operator>
      <operator activated="true" class="union" compatibility="9.0.002" expanded="true" height="82" name="Union" width="90" x="112" y="238"/>
      <operator activated="true" breakpoints="after" class="text_to_nominal" compatibility="9.0.002" expanded="true" height="82" name="Text to Nominal" width="90" x="246" y="238">
        <parameter key="attribute_filter_type" value="single"/>
        <parameter key="attribute" value="Content"/>
        <parameter key="include_special_attributes" value="true"/>
      </operator>
      <operator activated="true" class="split_data" compatibility="9.0.002" expanded="true" height="103" name="Split Data" origin="GENERATED_SAMPLE" width="90" x="380" y="289">
        <enumeration key="partitions">
          <parameter key="ratio" value="0.8"/>
          <parameter key="ratio" value="0.2"/>
        </enumeration>
        <parameter key="sampling_type" value="shuffled sampling"/>
      </operator>
      <operator activated="true" class="deeplearning:dl4j_word_embedding" compatibility="0.8.000" expanded="true" height="82" name="Text to Numbers using Word2Vec" origin="GENERATED_SAMPLE" width="90" x="380" y="85">
        <parameter key="text_attribute" value="Content"/>
        <parameter key="label_attribute" value="class"/>
        <parameter key="max._sentence_length" value="10"/>
        <description align="center" color="transparent" colored="false" width="126">Convert training sentences to numbers.</description>
      </operator>
      <operator activated="true" class="deeplearning:dl4j_word_embedding" compatibility="0.8.000" expanded="true" height="82" name="Text to Numbers using Word2Vec (2)" origin="GENERATED_SAMPLE" width="90" x="447" y="493">
        <parameter key="text_attribute" value="Content"/>
        <parameter key="label_attribute" value="class"/>
        <parameter key="max._sentence_length" value="10"/>
        <description align="center" color="transparent" colored="false" width="126">Convert testing sentences to numbers.</description>
      </operator>
      <operator activated="true" class="deeplearning:dl4j_tensor_sequential_neural_network" compatibility="0.8.000" expanded="true" height="103" name="Deep Learning on Tensors" origin="GENERATED_SAMPLE" width="90" x="514" y="85">
        <parameter key="use_miniBatch" value="true"/>
        <parameter key="updater" value="Nesterovs"/>
        <parameter key="learning_rate" value="0.1"/>
        <parameter key="infer_input_shape" value="false"/>
        <parameter key="network_type" value="Convolutional"/>
        <parameter key="height" value="10"/>
        <parameter key="width" value="300"/>
        <parameter key="depth" value="1"/>
        <process expanded="true">
          <operator activated="true" class="deeplearning:dl4j_convolutional_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Convolutional Layer" origin="GENERATED_SAMPLE" width="90" x="179" y="34">
            <parameter key="kernel_size" value="2.2"/>
            <parameter key="stride_size" value="1.1"/>
            <parameter key="layer_name" value="conv"/>
            <description align="center" color="transparent" colored="false" width="126">3, 300 Kernel --&amp;gt; 3 regular kernel size; 300 number of dimensions from Googles word2vec model</description>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_global_pooling_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Global Pooling Layer" origin="GENERATED_SAMPLE" width="90" x="380" y="34"/>
          <operator activated="true" class="deeplearning:dl4j_dense_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Dense Layer" origin="GENERATED_SAMPLE" width="90" x="581" y="34">
            <parameter key="number_of_neurons" value="2"/>
            <parameter key="activation_function" value="Softmax"/>
            <description align="center" color="transparent" colored="false" width="126">2 classes --&amp;gt; 2 neurons with softmax</description>
          </operator>
          <connect from_port="layerArchitecture" to_op="Add Convolutional Layer" to_port="layerArchitecture"/>
          <connect from_op="Add Convolutional Layer" from_port="layerArchitecture" to_op="Add Global Pooling Layer" to_port="layerArchitecture"/>
          <connect from_op="Add Global Pooling Layer" from_port="layerArchitecture" to_op="Add Dense Layer" to_port="layerArchitecture"/>
          <connect from_op="Add Dense Layer" from_port="layerArchitecture" to_port="layerArchitecture"/>
          <portSpacing port="source_layerArchitecture" spacing="0"/>
          <portSpacing port="sink_layerArchitecture" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="deeplearning:dl4j_apply_tensor_model" compatibility="0.8.000" expanded="true" height="82" name="Apply Model on Tensor" origin="GENERATED_SAMPLE" width="90" x="648" y="187"/>
      <operator activated="true" class="performance_binominal_classification" compatibility="9.0.002" expanded="true" height="82" name="Performance" origin="GENERATED_SAMPLE" width="90" x="782" y="187"/>
      <connect from_op="Open File" from_port="file" to_op="Multiply" to_port="input"/>
      <connect from_op="Multiply" from_port="output 1" to_op="Text to Numbers using Word2Vec (2)" to_port="file with word2vec model"/>
      <connect from_op="Multiply" from_port="output 2" to_op="Text to Numbers using Word2Vec" to_port="file with word2vec model"/>
      <connect from_op="Read RSS Feed" from_port="output" to_op="Select Attributes" to_port="example set input"/>
      <connect from_op="Select Attributes" from_port="example set output" to_op="Generate Attributes" to_port="example set input"/>
      <connect from_op="Generate Attributes" from_port="example set output" to_op="Set Role" to_port="example set input"/>
      <connect from_op="Set Role" from_port="example set output" to_op="Union" to_port="example set 1"/>
      <connect from_op="Read RSS Feed (2)" from_port="output" to_op="Select Attributes (2)" to_port="example set input"/>
      <connect from_op="Select Attributes (2)" from_port="example set output" to_op="Generate Attributes (2)" to_port="example set input"/>
      <connect from_op="Generate Attributes (2)" from_port="example set output" to_op="Set Role (2)" to_port="example set input"/>
      <connect from_op="Set Role (2)" from_port="example set output" to_op="Union" to_port="example set 2"/>
      <connect from_op="Union" from_port="union" to_op="Text to Nominal" to_port="example set input"/>
      <connect from_op="Text to Nominal" from_port="example set output" to_op="Split Data" to_port="example set"/>
      <connect from_op="Split Data" from_port="partition 1" to_op="Text to Numbers using Word2Vec" to_port="example set"/>
      <connect from_op="Split Data" from_port="partition 2" to_op="Text to Numbers using Word2Vec (2)" to_port="example set"/>
      <connect from_op="Text to Numbers using Word2Vec" from_port="tensor" to_op="Deep Learning on Tensors" to_port="training set"/>
      <connect from_op="Text to Numbers using Word2Vec (2)" from_port="tensor" to_op="Apply Model on Tensor" to_port="unlabelled tensor"/>
      <connect from_op="Deep Learning on Tensors" from_port="model" to_op="Apply Model on Tensor" to_port="model"/>
      <connect from_op="Apply Model on Tensor" from_port="labeled data" to_op="Performance" to_port="labelled data"/>
      <connect from_op="Performance" from_port="performance" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process> 

 

 

I am using the 1.2GB Google file for the lexicon.

 

Regards

Manolis

Answers

  • sgenzer (Altair Employee)

    Hi @mmarag, that's great that you're testing our new DL extension. I'm pinging our DL guru @pschlunder for follow-up.

     

    Scott

     

  • mmarag (New Altair Community Member)

    thanks

     

    As a matter of fact, the process now causes RM to terminate :(

  • Pavithra_Rao (New Altair Community Member)

    Hi @mmarag,

     

    I am trying to reproduce this error on my Studio; would it be possible to share a sample of the data in GoogleNews-vectors-negative300.bin.gz?

     

    Thanks,

  • pschlunder (New Altair Community Member)

    Hi @mmarag,

    thanks for sharing your problem. I'll try to reproduce and share my findings with you later on.

     

    Regarding the Studio crash, did you set a manual maximum memory limit in the Studio settings or leave it at the default? And how much memory does the test machine have, if I may ask?

     

    @Pavithra_Rao please check the provided sample process "//Samples/Deep Learning/processes/06 Text classification using Word2Vec"; it contains a link to the word2vec model in question. The model itself is 1.65GB, hence the link for downloading it yourself.

     

    Regards,

    Philipp

  • mmarag (New Altair Community Member)

    Hello again,

    Thanks for trying to work on a solution. I have left this field (Maximum Amount of Memory) blank, as it originally was. Also, I am using an ASUS Zenbook with an i7 and 16GB of RAM. I will try to run it on a bigger machine as well.

     

    regards

  • mmarag (New Altair Community Member)

    Hello, 

    I ran the process on a huge 64GB dual-Xeon machine. The memory consumption was about 25GB just before reaching operator no. 16.

     

     

    mem.jpg (please see attached image of the memory consumption)

    When it ended, the results were not good...

    result.jpg (attached image of the results)

  • mmarag (New Altair Community Member)

    I was also using the Word2Vec extension for RM; it is very quick, but it does not produce tensors for later training a classifier. Maybe you could see an opportunity there to collaborate on that and make an operator that does the job.

     

    Keep up the good work!

  • mmarag (New Altair Community Member)

    Hello again,

     

    Did you manage to reproduce the error?

    regards

    Manolis

  • Maerkli (New Altair Community Member)

    Hello Manolis,

     

    How can I access the GoogleNews-vectors-negative300 file?

    Maerkli.

  • pschlunder (New Altair Community Member)

    Hi @mmarag,

    Sorry for the late reply. I'm already investigating tensor creation on the output of my colleague's Word2Vec extension. I'll share it here when it's done.

     

    For the time being, though, it might happen that the memory consumption is a bit higher. Regarding the net's performance, it would be good to know how you configured the Deep Learning operator and the network inside it. Maybe you can share the process XML.

     

    Regards,

    Philipp

  • pschlunder (New Altair Community Member)

    Hi @Maerkli,

     

    You can find a link to the model in the notes of sample process number 6. After installing the extension, the deep learning sample processes are inside the general //Samples/ folder of your RapidMiner Studio.

     

    Regards,

    Philipp

  • Maerkli (New Altair Community Member)

    Thank you, Philipp.

    Maerkli.

  • mmarag (New Altair Community Member)

    Hello again and thank you for your interest.

     

    Here is the XML of the DL operator:

    <operator activated="true" class="deeplearning:dl4j_tensor_sequential_neural_network" compatibility="0.8.000" expanded="true" height="103" name="Deep Learning on Tensors" origin="GENERATED_SAMPLE" width="90" x="514" y="85">
            <parameter key="use_miniBatch" value="true"/>
            <parameter key="updater" value="Nesterovs"/>
            <parameter key="learning_rate" value="0.1"/>
            <parameter key="infer_input_shape" value="false"/>
            <parameter key="network_type" value="Convolutional"/>
            <parameter key="height" value="10"/>
            <parameter key="width" value="300"/>
            <parameter key="depth" value="1"/>
            <process expanded="true">
              <operator activated="true" class="deeplearning:dl4j_convolutional_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Convolutional Layer" origin="GENERATED_SAMPLE" width="90" x="179" y="34">
                <parameter key="kernel_size" value="2.2"/>
                <parameter key="stride_size" value="1.1"/>
                <parameter key="layer_name" value="conv"/>
                <description align="center" color="transparent" colored="false" width="126">3, 300 Kernel --&amp;gt; 3 regular kernel size; 300 number of dimensions from Googles word2vec model</description>
              </operator>
              <operator activated="true" class="deeplearning:dl4j_global_pooling_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Global Pooling Layer" origin="GENERATED_SAMPLE" width="90" x="380" y="34"/>
              <operator activated="true" class="deeplearning:dl4j_dense_layer" compatibility="0.8.000" expanded="true" height="68" name="Add Dense Layer" origin="GENERATED_SAMPLE" width="90" x="581" y="34">
                <parameter key="number_of_neurons" value="2"/>
                <parameter key="activation_function" value="Softmax"/>
                <description align="center" color="transparent" colored="false" width="126">2 classes --&amp;gt; 2 neurons with softmax</description>
              </operator>
              <connect from_port="layerArchitecture" to_op="Add Convolutional Layer" to_port="layerArchitecture"/>
              <connect from_op="Add Convolutional Layer" from_port="layerArchitecture" to_op="Add Global Pooling Layer" to_port="layerArchitecture"/>
              <connect from_op="Add Global Pooling Layer" from_port="layerArchitecture" to_op="Add Dense Layer" to_port="layerArchitecture"/>
              <connect from_op="Add Dense Layer" from_port="layerArchitecture" to_port="layerArchitecture"/>
              <portSpacing port="source_layerArchitecture" spacing="0"/>
              <portSpacing port="sink_layerArchitecture" spacing="0"/>
            </process>
          </operator>

  • pschlunder (New Altair Community Member)

    Hi @mmarag,

    Somehow the number of activation maps is missing from the process. How many activation maps did you set for the conv layer?

     

    A learning rate of 0.1 is a bit too high in many scenarios. Please check the sample process explaining the usage of the history plot to find a better learning rate, e.g. starting with 0.0001 and increasing the number of epochs.
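
    As a rough sketch, the adjusted settings could look like this in the process XML (the learning_rate key is taken from your posted process; the epochs and activation-maps keys are only sketched here and might be named slightly differently in the operator parameters):

    <operator activated="true" class="deeplearning:dl4j_tensor_sequential_neural_network" compatibility="0.8.000" expanded="true" name="Deep Learning on Tensors">
      <!-- start with a much smaller learning rate and compensate with more training epochs -->
      <parameter key="learning_rate" value="0.0001"/>
      <!-- assumed key name for the number of training epochs -->
      <parameter key="epochs" value="50"/>
    </operator>

    and, inside the network, set the number of activation maps explicitly on the convolutional layer:

    <operator activated="true" class="deeplearning:dl4j_convolutional_layer" compatibility="0.8.000" expanded="true" name="Add Convolutional Layer">
      <!-- assumed key name; the number of activation maps (feature maps) should not be left empty -->
      <parameter key="number_of_activation_maps" value="100"/>
      <parameter key="kernel_size" value="2.2"/>
      <parameter key="stride_size" value="1.1"/>
      <parameter key="layer_name" value="conv"/>
    </operator>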

     

    Regards,

    Philipp

  • mmarag (New Altair Community Member)

    Hello,

     

    I used the exact process from the samples folder, //Samples/Deep Learning/processes/06 Text classification using Word2Vec, only that instead of loading the movie review dataset I used RSS feeds.

     

    Nevertheless, even with a learning rate of ε = 0.01, the process crashes once it exceeds 20GB of RAM.

     

    regards

  • Maerkli (New Altair Community Member)

    I don't know if I understand the problem correctly, but it seems to work. I apply the XML file. As I cannot open C:\Users\mmara\Downloads\GoogleNews-vectors-negative300.bin.gz, I deactivate the Open File operator and start the execution. In the Results view, an ExampleSet (115 examples, 1 special attribute, 1 regular attribute) is available. Am I doing something wrong?

    Maerkli

  • lbookman (New Altair Community Member)

    Hi

    I tried running the original sample process in the deep learning folder: Text classification using Word2Vec. I have installed the deep learning extension. Here is the link to the file GoogleNews-vectors-negative300.bin.gz that you need to download for the Open File operator:

    https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit

     

    I got the following error from the log file:

    Error during meta data transformation: java.lang.RuntimeException: Cannot clone
    Number of Examples: 375
    2 attributes: text, label
    Label: label: label (nominal in = {negative, positive}; may contain missing values)
    Dimensions: (375, 1, 10, -1)
    Note: A dimension of -1 represents a dimension that can't be estimated at the current point of the process.