
Help using DL4J

User: "pblack476"
New Altair Community Member
Updated by Jocelyn
As an exercise I have this process:

<?xml version="1.0" encoding="UTF-8"?><process version="9.4.001">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="9.4.001" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="SYSTEM"/>
    <process expanded="true">
      <operator activated="true" class="retrieve" compatibility="9.4.001" expanded="true" height="68" name="Retrieve XPCM11.SA -DAILY - clean 5 daysignal" width="90" x="45" y="34">
        <parameter key="repository_entry" value="//Local Repository/data/XPCM11.SA -DAILY - clean 5 daysignal"/>
      </operator>
      <operator activated="true" class="subprocess" compatibility="9.4.001" expanded="true" height="82" name="AFE" width="90" x="179" y="34">
        <process expanded="true">
          <operator activated="true" class="set_role" compatibility="9.4.001" expanded="true" height="82" name="Set Role" width="90" x="45" y="34">
            <parameter key="attribute_name" value="SIG CHANGE"/>
            <parameter key="target_role" value="label"/>
            <list key="set_additional_roles">
              <parameter key="Date" value="id"/>
            </list>
          </operator>
          <operator activated="true" class="normalize" compatibility="9.4.001" expanded="true" height="103" name="Normalize" width="90" x="179" y="34">
            <parameter key="return_preprocessing_model" value="false"/>
            <parameter key="create_view" value="false"/>
            <parameter key="attribute_filter_type" value="value_type"/>
            <parameter key="attribute" value=""/>
            <parameter key="attributes" value=""/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="numeric"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="real"/>
            <parameter key="block_type" value="value_series"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="value_series_end"/>
            <parameter key="invert_selection" value="false"/>
            <parameter key="include_special_attributes" value="false"/>
            <parameter key="method" value="Z-transformation"/>
            <parameter key="min" value="0.0"/>
            <parameter key="max" value="1.0"/>
            <parameter key="allow_negative_values" value="false"/>
          </operator>
          <operator activated="true" class="multiply" compatibility="9.4.001" expanded="true" height="103" name="Multiply" width="90" x="313" y="34"/>
          <operator activated="true" class="model_simulator:automatic_feature_engineering" compatibility="9.4.001" expanded="true" height="103" name="Automatic Feature Engineering" width="90" x="514" y="187">
            <parameter key="mode" value="feature selection and generation"/>
            <parameter key="balance for accuracy" value="1.0"/>
            <parameter key="show progress dialog" value="true"/>
            <parameter key="use_local_random_seed" value="false"/>
            <parameter key="local_random_seed" value="1992"/>
            <parameter key="use optimization heuristics" value="true"/>
            <parameter key="maximum generations" value="30"/>
            <parameter key="population size" value="10"/>
            <parameter key="use multi-starts" value="true"/>
            <parameter key="number of multi-starts" value="5"/>
            <parameter key="generations until multi-start" value="10"/>
            <parameter key="use time limit" value="false"/>
            <parameter key="time limit in seconds" value="60"/>
            <parameter key="use subset for generation" value="false"/>
            <parameter key="maximum function complexity" value="10"/>
            <parameter key="use_plus" value="true"/>
            <parameter key="use_diff" value="true"/>
            <parameter key="use_mult" value="true"/>
            <parameter key="use_div" value="true"/>
            <parameter key="reciprocal_value" value="true"/>
            <parameter key="use_square_roots" value="true"/>
            <parameter key="use_exp" value="true"/>
            <parameter key="use_log" value="true"/>
            <parameter key="use_absolute_values" value="true"/>
            <parameter key="use_sgn" value="true"/>
            <parameter key="use_min" value="true"/>
            <parameter key="use_max" value="true"/>
            <process expanded="true">
              <operator activated="true" class="concurrency:cross_validation" compatibility="9.4.001" expanded="true" height="145" name="Cross Validation" width="90" x="45" y="34">
                <parameter key="split_on_batch_attribute" value="false"/>
                <parameter key="leave_one_out" value="false"/>
                <parameter key="number_of_folds" value="10"/>
                <parameter key="sampling_type" value="automatic"/>
                <parameter key="use_local_random_seed" value="true"/>
                <parameter key="local_random_seed" value="1992"/>
                <parameter key="enable_parallel_execution" value="true"/>
                <process expanded="true">
                  <operator activated="true" class="deeplearning:dl4j_sequential_neural_network" compatibility="0.9.001" expanded="true" height="103" name="Deep Learning" width="90" x="112" y="34">
                    <parameter key="loss_function" value="Negative Log Likelihood (Classification)"/>
                    <parameter key="epochs" value="100"/>
                    <parameter key="use_miniBatch" value="false"/>
                    <parameter key="batch_size" value="32"/>
                    <parameter key="updater" value="RMSProp"/>
                    <parameter key="learning_rate" value="0.01"/>
                    <parameter key="momentum" value="0.9"/>
                    <parameter key="rho" value="0.95"/>
                    <parameter key="epsilon" value="1.0E-6"/>
                    <parameter key="beta1" value="0.9"/>
                    <parameter key="beta2" value="0.999"/>
                    <parameter key="RMSdecay" value="0.95"/>
                    <parameter key="weight_initialization" value="Normal"/>
                    <parameter key="bias_initialization" value="0.0"/>
                    <parameter key="use_regularization" value="false"/>
                    <parameter key="l1_strength" value="0.1"/>
                    <parameter key="l2_strength" value="0.1"/>
                    <parameter key="optimization_method" value="Stochastic Gradient Descent"/>
                    <parameter key="backpropagation" value="Standard"/>
                    <parameter key="backpropagation_length" value="50"/>
                    <parameter key="infer_input_shape" value="true"/>
                    <parameter key="network_type" value="Simple Neural Network"/>
                    <parameter key="log_each_epoch" value="true"/>
                    <parameter key="epochs_per_log" value="10"/>
                    <parameter key="use_local_random_seed" value="false"/>
                    <parameter key="local_random_seed" value="1992"/>
                    <process expanded="true">
                      <operator activated="true" class="deeplearning:dl4j_lstm_layer" compatibility="0.9.001" expanded="true" height="68" name="Add LSTM Layer (3)" width="90" x="112" y="34">
                        <parameter key="neurons" value="8"/>
                        <parameter key="gate_activation" value="TanH"/>
                        <parameter key="forget_gate_bias_initialization" value="1.0"/>
                      </operator>
                      <operator activated="true" class="deeplearning:dl4j_dense_layer" compatibility="0.9.001" expanded="true" height="68" name="Add Fully-Connected Layer (3)" width="90" x="246" y="34">
                        <parameter key="number_of_neurons" value="3"/>
                        <parameter key="activation_function" value="Softmax"/>
                        <parameter key="use_dropout" value="false"/>
                        <parameter key="dropout_rate" value="0.25"/>
                        <parameter key="overwrite_networks_weight_initialization" value="false"/>
                        <parameter key="weight_initialization" value="Normal"/>
                        <parameter key="overwrite_networks_bias_initialization" value="false"/>
                        <parameter key="bias_initialization" value="0.0"/>
                      </operator>
                      <connect from_port="layerArchitecture" to_op="Add LSTM Layer (3)" to_port="layerArchitecture"/>
                      <connect from_op="Add LSTM Layer (3)" from_port="layerArchitecture" to_op="Add Fully-Connected Layer (3)" to_port="layerArchitecture"/>
                      <connect from_op="Add Fully-Connected Layer (3)" from_port="layerArchitecture" to_port="layerArchitecture"/>
                      <portSpacing port="source_layerArchitecture" spacing="0"/>
                      <portSpacing port="sink_layerArchitecture" spacing="0"/>
                    </process>
                  </operator>
                  <connect from_port="training set" to_op="Deep Learning" to_port="training set"/>
                  <connect from_op="Deep Learning" from_port="model" to_port="model"/>
                  <portSpacing port="source_training set" spacing="0"/>
                  <portSpacing port="sink_model" spacing="0"/>
                  <portSpacing port="sink_through 1" spacing="0"/>
                </process>
                <process expanded="true">
                  <operator activated="true" class="apply_model" compatibility="9.4.001" expanded="true" height="82" name="Apply Model" width="90" x="112" y="34">
                    <list key="application_parameters"/>
                    <parameter key="create_view" value="false"/>
                  </operator>
                  <operator activated="true" class="performance_classification" compatibility="9.4.001" expanded="true" height="82" name="Performance (2)" width="90" x="246" y="34">
                    <parameter key="main_criterion" value="classification_error"/>
                    <parameter key="accuracy" value="false"/>
                    <parameter key="classification_error" value="true"/>
                    <parameter key="kappa" value="true"/>
                    <parameter key="weighted_mean_recall" value="true"/>
                    <parameter key="weighted_mean_precision" value="true"/>
                    <parameter key="spearman_rho" value="true"/>
                    <parameter key="kendall_tau" value="true"/>
                    <parameter key="absolute_error" value="true"/>
                    <parameter key="relative_error" value="true"/>
                    <parameter key="relative_error_lenient" value="true"/>
                    <parameter key="relative_error_strict" value="true"/>
                    <parameter key="normalized_absolute_error" value="true"/>
                    <parameter key="root_mean_squared_error" value="true"/>
                    <parameter key="root_relative_squared_error" value="true"/>
                    <parameter key="squared_error" value="true"/>
                    <parameter key="correlation" value="true"/>
                    <parameter key="squared_correlation" value="true"/>
                    <parameter key="cross-entropy" value="false"/>
                    <parameter key="margin" value="false"/>
                    <parameter key="soft_margin_loss" value="false"/>
                    <parameter key="logistic_loss" value="false"/>
                    <parameter key="skip_undefined_labels" value="true"/>
                    <parameter key="use_example_weights" value="true"/>
                    <list key="class_weights"/>
                  </operator>
                  <connect from_port="model" to_op="Apply Model" to_port="model"/>
                  <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
                  <connect from_op="Apply Model" from_port="labelled data" to_op="Performance (2)" to_port="labelled data"/>
                  <connect from_op="Performance (2)" from_port="performance" to_port="performance 1"/>
                  <portSpacing port="source_model" spacing="0"/>
                  <portSpacing port="source_test set" spacing="0"/>
                  <portSpacing port="source_through 1" spacing="0"/>
                  <portSpacing port="sink_test set results" spacing="0"/>
                  <portSpacing port="sink_performance 1" spacing="0"/>
                  <portSpacing port="sink_performance 2" spacing="0"/>
                </process>
              </operator>
              <connect from_port="example set source" to_op="Cross Validation" to_port="example set"/>
              <connect from_op="Cross Validation" from_port="performance 1" to_port="performance sink"/>
              <portSpacing port="source_example set source" spacing="0"/>
              <portSpacing port="sink_performance sink" spacing="0"/>
            </process>
          </operator>
          <operator activated="true" class="model_simulator:apply_feature_set" compatibility="9.4.001" expanded="true" height="82" name="Apply Feature Set" width="90" x="715" y="34">
            <parameter key="handle missings" value="true"/>
            <parameter key="keep originals" value="false"/>
            <parameter key="originals special role" value="true"/>
            <parameter key="recreate missing attributes" value="true"/>
          </operator>
          <connect from_port="in 1" to_op="Set Role" to_port="example set input"/>
          <connect from_op="Set Role" from_port="example set output" to_op="Normalize" to_port="example set input"/>
          <connect from_op="Normalize" from_port="example set output" to_op="Multiply" to_port="input"/>
          <connect from_op="Multiply" from_port="output 1" to_op="Apply Feature Set" to_port="example set"/>
          <connect from_op="Multiply" from_port="output 2" to_op="Automatic Feature Engineering" to_port="example set in"/>
          <connect from_op="Automatic Feature Engineering" from_port="feature set" to_op="Apply Feature Set" to_port="feature set"/>
          <connect from_op="Apply Feature Set" from_port="example set" to_port="out 1"/>
          <portSpacing port="source_in 1" spacing="0"/>
          <portSpacing port="source_in 2" spacing="0"/>
          <portSpacing port="sink_out 1" spacing="0"/>
          <portSpacing port="sink_out 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="split_data" compatibility="9.4.001" expanded="true" height="103" name="Split Data" width="90" x="313" y="34">
        <enumeration key="partitions">
          <parameter key="ratio" value="0.9"/>
          <parameter key="ratio" value="0.1"/>
        </enumeration>
        <parameter key="sampling_type" value="automatic"/>
        <parameter key="use_local_random_seed" value="false"/>
        <parameter key="local_random_seed" value="1992"/>
      </operator>
      <operator activated="true" class="deeplearning:dl4j_sequential_neural_network" compatibility="0.9.001" expanded="true" height="103" name="Deep Learning (2)" width="90" x="514" y="34">
        <parameter key="loss_function" value="Multiclass Cross Entropy (Classification)"/>
        <parameter key="epochs" value="100"/>
        <parameter key="use_miniBatch" value="false"/>
        <parameter key="batch_size" value="32"/>
        <parameter key="updater" value="Adam"/>
        <parameter key="learning_rate" value="0.01"/>
        <parameter key="momentum" value="0.9"/>
        <parameter key="rho" value="0.95"/>
        <parameter key="epsilon" value="1.0E-6"/>
        <parameter key="beta1" value="0.9"/>
        <parameter key="beta2" value="0.999"/>
        <parameter key="RMSdecay" value="0.95"/>
        <parameter key="weight_initialization" value="Normal"/>
        <parameter key="bias_initialization" value="0.0"/>
        <parameter key="use_regularization" value="false"/>
        <parameter key="l1_strength" value="0.1"/>
        <parameter key="l2_strength" value="0.1"/>
        <parameter key="optimization_method" value="Stochastic Gradient Descent"/>
        <parameter key="backpropagation" value="Standard"/>
        <parameter key="backpropagation_length" value="50"/>
        <parameter key="infer_input_shape" value="true"/>
        <parameter key="network_type" value="Simple Neural Network"/>
        <parameter key="log_each_epoch" value="true"/>
        <parameter key="epochs_per_log" value="10"/>
        <parameter key="use_local_random_seed" value="false"/>
        <parameter key="local_random_seed" value="1992"/>
        <process expanded="true">
          <operator activated="true" class="deeplearning:dl4j_lstm_layer" compatibility="0.9.001" expanded="true" height="68" name="Add LSTM Layer" width="90" x="112" y="34">
            <parameter key="neurons" value="8"/>
            <parameter key="gate_activation" value="ReLU (Rectified Linear Unit)"/>
            <parameter key="forget_gate_bias_initialization" value="1.0"/>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_dense_layer" compatibility="0.9.001" expanded="true" height="68" name="Add Fully-Connected Layer" width="90" x="246" y="34">
            <parameter key="number_of_neurons" value="3"/>
            <parameter key="activation_function" value="Softmax"/>
            <parameter key="use_dropout" value="false"/>
            <parameter key="dropout_rate" value="0.25"/>
            <parameter key="overwrite_networks_weight_initialization" value="false"/>
            <parameter key="weight_initialization" value="Normal"/>
            <parameter key="overwrite_networks_bias_initialization" value="false"/>
            <parameter key="bias_initialization" value="0.0"/>
          </operator>
          <connect from_port="layerArchitecture" to_op="Add LSTM Layer" to_port="layerArchitecture"/>
          <connect from_op="Add LSTM Layer" from_port="layerArchitecture" to_op="Add Fully-Connected Layer" to_port="layerArchitecture"/>
          <connect from_op="Add Fully-Connected Layer" from_port="layerArchitecture" to_port="layerArchitecture"/>
          <portSpacing port="source_layerArchitecture" spacing="0"/>
          <portSpacing port="sink_layerArchitecture" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="apply_model" compatibility="9.4.001" expanded="true" height="82" name="Apply Model (2)" width="90" x="514" y="238">
        <list key="application_parameters"/>
        <parameter key="create_view" value="false"/>
      </operator>
      <operator activated="true" class="multiply" compatibility="9.4.001" expanded="true" height="103" name="Multiply (2)" width="90" x="648" y="238"/>
      <operator activated="true" class="performance_classification" compatibility="9.4.001" expanded="true" height="82" name="Performance" width="90" x="782" y="289">
        <parameter key="main_criterion" value="classification_error"/>
        <parameter key="accuracy" value="true"/>
        <parameter key="classification_error" value="true"/>
        <parameter key="kappa" value="true"/>
        <parameter key="weighted_mean_recall" value="true"/>
        <parameter key="weighted_mean_precision" value="true"/>
        <parameter key="spearman_rho" value="true"/>
        <parameter key="kendall_tau" value="true"/>
        <parameter key="absolute_error" value="true"/>
        <parameter key="relative_error" value="true"/>
        <parameter key="relative_error_lenient" value="true"/>
        <parameter key="relative_error_strict" value="true"/>
        <parameter key="normalized_absolute_error" value="true"/>
        <parameter key="root_mean_squared_error" value="true"/>
        <parameter key="root_relative_squared_error" value="true"/>
        <parameter key="squared_error" value="true"/>
        <parameter key="correlation" value="true"/>
        <parameter key="squared_correlation" value="true"/>
        <parameter key="cross-entropy" value="false"/>
        <parameter key="margin" value="false"/>
        <parameter key="soft_margin_loss" value="false"/>
        <parameter key="logistic_loss" value="false"/>
        <parameter key="skip_undefined_labels" value="true"/>
        <parameter key="use_example_weights" value="true"/>
        <list key="class_weights"/>
      </operator>
      <connect from_op="Retrieve XPCM11.SA -DAILY - clean 5 daysignal" from_port="output" to_op="AFE" to_port="in 1"/>
      <connect from_op="AFE" from_port="out 1" to_op="Split Data" to_port="example set"/>
      <connect from_op="Split Data" from_port="partition 1" to_op="Deep Learning (2)" to_port="training set"/>
      <connect from_op="Split Data" from_port="partition 2" to_op="Apply Model (2)" to_port="unlabelled data"/>
      <connect from_op="Deep Learning (2)" from_port="model" to_op="Apply Model (2)" to_port="model"/>
      <connect from_op="Apply Model (2)" from_port="labelled data" to_op="Multiply (2)" to_port="input"/>
      <connect from_op="Multiply (2)" from_port="output 1" to_port="result 2"/>
      <connect from_op="Multiply (2)" from_port="output 2" to_op="Performance" to_port="labelled data"/>
      <connect from_op="Performance" from_port="performance" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
      <portSpacing port="sink_result 3" spacing="0"/>
    </process>
  </operator>
</process>
The issue is that I cannot get it to run. I get an error saying "there seems to be nothing wrong with this process, but it failed to run". Activating debug mode, I get this:

Exception: java.lang.ArrayIndexOutOfBoundsException
Message: null
Stack trace:

  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
  java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
  com.rapidminer.studio.concurrency.internal.AbstractConcurrencyContext.collectResults(AbstractConcurrencyContext.java:206)
  com.rapidminer.studio.concurrency.internal.StudioConcurrencyContext.collectResults(StudioConcurrencyContext.java:33)
  com.rapidminer.studio.concurrency.internal.AbstractConcurrencyContext.call(AbstractConcurrencyContext.java:141)
  com.rapidminer.studio.concurrency.internal.StudioConcurrencyContext.call(StudioConcurrencyContext.java:33)
  com.rapidminer.Process.executeRootInPool(Process.java:1355)
  com.rapidminer.Process.execute(Process.java:1319)
  com.rapidminer.Process.run(Process.java:1291)
  com.rapidminer.Process.run(Process.java:1177)
  com.rapidminer.Process.run(Process.java:1130)
  com.rapidminer.Process.run(Process.java:1125)
  com.rapidminer.Process.run(Process.java:1115)
  com.rapidminer.gui.ProcessThread.run(ProcessThread.java:65)

Cause
Exception: java.lang.ArrayIndexOutOfBoundsException
Message: null
Stack trace:

  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
  java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
  java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
  com.rapidminer.studio.concurrency.internal.RecursiveWrapper.call(RecursiveWrapper.java:120)
  com.rapidminer.studio.concurrency.internal.AbstractConcurrencyContext.call(AbstractConcurrencyContext.java:135)
  com.rapidminer.studio.concurrency.internal.StudioConcurrencyContext.call(StudioConcurrencyContext.java:33)
  com.rapidminer.extension.concurrency.execution.BackgroundExecutionService.executeOperatorTasks(BackgroundExecutionService.java:401)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator.performParallelValidation(CrossValidationOperator.java:667)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator.doExampleSetWork(CrossValidationOperator.java:311)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator.doWork(CrossValidationOperator.java:243)
  com.rapidminer.operator.Operator.execute(Operator.java:1031)
  com.rapidminer.operator.execution.SimpleUnitExecutor.execute(SimpleUnitExecutor.java:77)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:812)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:807)
  java.security.AccessController.doPrivileged(Native Method)
  com.rapidminer.operator.ExecutionUnit.execute(ExecutionUnit.java:807)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.AutomaticFeatureEngineeringOperator.evaluate(AutomaticFeatureEngineeringOperator.java:403)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.AutomaticFeatureEngineeringOperator.access$200(AutomaticFeatureEngineeringOperator.java:79)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.AutomaticFeatureEngineeringOperator$1PerformanceCalculator.calculateError(AutomaticFeatureEngineeringOperator.java:270)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.optimization.AutomaticFeatureEngineering.evaluate(AutomaticFeatureEngineering.java:278)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.optimization.AutomaticFeatureEngineering.run(AutomaticFeatureEngineering.java:198)
  com.rapidminer.extension.modelsimulator.operator.feature_engineering.AutomaticFeatureEngineeringOperator.doWork(AutomaticFeatureEngineeringOperator.java:337)
  com.rapidminer.operator.Operator.execute(Operator.java:1031)
  com.rapidminer.operator.execution.SimpleUnitExecutor.execute(SimpleUnitExecutor.java:77)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:812)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:807)
  java.security.AccessController.doPrivileged(Native Method)
  com.rapidminer.operator.ExecutionUnit.execute(ExecutionUnit.java:807)
  com.rapidminer.operator.OperatorChain.doWork(OperatorChain.java:423)
  com.rapidminer.operator.SimpleOperatorChain.doWork(SimpleOperatorChain.java:99)
  com.rapidminer.operator.Operator.execute(Operator.java:1031)
  com.rapidminer.operator.execution.SimpleUnitExecutor.execute(SimpleUnitExecutor.java:77)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:812)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:807)
  java.security.AccessController.doPrivileged(Native Method)
  com.rapidminer.operator.ExecutionUnit.execute(ExecutionUnit.java:807)
  com.rapidminer.operator.OperatorChain.doWork(OperatorChain.java:423)
  com.rapidminer.operator.Operator.execute(Operator.java:1031)
  com.rapidminer.Process.executeRoot(Process.java:1378)
  com.rapidminer.Process.lambda$executeRootInPool$5(Process.java:1357)
  com.rapidminer.studio.concurrency.internal.AbstractConcurrencyContext$AdaptedCallable.exec(AbstractConcurrencyContext.java:328)
  java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
  java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
  java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
  java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

Cause
Exception: java.lang.ArrayIndexOutOfBoundsException
Message: 1
Stack trace:

  com.rapidminer.extension.deeplearning.ioobjects.DeepLearningModel.performPrediction(DeepLearningModel.java:159)
  com.rapidminer.operator.learner.PredictionModel.apply(PredictionModel.java:116)
  com.rapidminer.operator.ModelApplier.doWork(ModelApplier.java:134)
  com.rapidminer.operator.Operator.execute(Operator.java:1031)
  com.rapidminer.operator.execution.SimpleUnitExecutor.execute(SimpleUnitExecutor.java:77)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:812)
  com.rapidminer.operator.ExecutionUnit$2.run(ExecutionUnit.java:807)
  java.security.AccessController.doPrivileged(Native Method)
  com.rapidminer.operator.ExecutionUnit.execute(ExecutionUnit.java:807)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator.test(CrossValidationOperator.java:800)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator.access$300(CrossValidationOperator.java:77)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator$8.call(CrossValidationOperator.java:658)
  com.rapidminer.extension.concurrency.operator.validation.CrossValidationOperator$8.call(CrossValidationOperator.java:643)
  com.rapidminer.extension.concurrency.execution.BackgroundExecutionService$ExecutionCallable.call(BackgroundExecutionService.java:365)
  com.rapidminer.studio.concurrency.internal.RecursiveWrapper.compute(RecursiveWrapper.java:88)
  java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
  java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
  java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
  java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
  java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
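The innermost cause is the interesting one: an ArrayIndexOutOfBoundsException with index 1 inside DeepLearningModel.performPrediction. As a hypothetical illustration only (not the extension's actual code), this is the failure mode you get when prediction output is indexed by a class index the output array does not actually have; the class names and shapes below are made up:

```python
import numpy as np

# Hypothetical sketch of the failure mode behind
# "ArrayIndexOutOfBoundsException: 1": the scoring loop walks over the
# label's class values, but the network only produced one output score.
class_values = ["UP", "DOWN", "FLAT"]  # classes present in the label
output = np.array([0.7])               # model emitted a score for 1 class

try:
    confidences = {cls: float(output[i]) for i, cls in enumerate(class_values)}
except IndexError as exc:
    # Fails at index 1, mirroring the Java exception's message "1"
    print("IndexError:", exc)
```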
I have tried tweaking everything I could, to no effect. If anyone has anything to contribute, I would appreciate it.
Here is the dataset used as well.

    User: "varunm1"
    New Altair Community Member
    Accepted Answer
    After some research on the net, it seems that "the input of the LSTM is always a 3D array".
    Yes, I agree with this, but Keras has the ability to take defaults. For example, here is my Python code from about a year back:

    import keras
    from keras.models import Sequential
    from keras.layers import LSTM, Dense, Dropout

    def CreateLSTM():
        LSTM_model = Sequential()
        # input_shape=(9, 1): 9 timesteps of 1 feature each; Keras infers
        # the batch dimension, so only the per-example 2D shape is given.
        LSTM_model.add(LSTM(32, input_shape=(9, 1), return_sequences=True))
        LSTM_model.add(LSTM(32))
        LSTM_model.add(Dropout(0.2))
        LSTM_model.add(Dense(256, activation='relu'))
        LSTM_model.add(Dropout(0.5))
        # num_classes must be defined in the enclosing scope
        LSTM_model.add(Dense(num_classes, activation='softmax'))
        LSTM_model.compile(loss=keras.losses.binary_crossentropy,
                           optimizer=keras.optimizers.Adam(),
                           metrics=['accuracy'])
        LSTM_model.summary()
        return LSTM_model
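    Since LSTM layers consume 3D input, 2D tabular data has to be reshaped to (samples, timesteps, features) first. A minimal sketch with made-up shapes that match the input_shape=(9, 1) above:

```python
import numpy as np

# 2D tabular data: 100 examples, 9 numeric attributes (made-up shapes)
X = np.random.rand(100, 9)

# LSTM layers expect 3D input: (samples, timesteps, features).
# Treating each attribute as one timestep with a single feature
# corresponds to input_shape=(9, 1) in the model definition.
X_3d = X.reshape(100, 9, 1)
print(X_3d.shape)  # (100, 9, 1)
```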

    @pblack476 after discussing with Philipp @pschlunder, the Deep Learning (Tensor) operator is the only way to go, as the 3D shapes required by LSTM come from tensors.

    User: "varunm1"
    New Altair Community Member
    Accepted Answer
    @pblack476

    The CV operator expects a "Model" object across the train/test barrier, but the DL (Tensor) operator outputs a DL-tensor-model object, so it cannot continue.
    Oh yes, this seems to be an issue, as the validation operators expect regular models. My only thought is to split the dataset into five folds manually and perform manual cross-validation: build separate subprocesses with different train and test sets, each training on four appended folds and testing on the remaining one, and finally average the performances.

    Not sure if there is another way.
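    The manual scheme described above can be sketched in plain Python (a hypothetical illustration of the fold logic, not RapidMiner operators; `train_and_score` is a placeholder for whatever trains on the appended folds and scores the held-out one):

```python
import numpy as np

def manual_cross_validation(X, y, train_and_score, k=5):
    """Emulate k-fold CV by hand: train on k-1 appended folds,
    score on the held-out fold, and average the fold scores."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        # Append the other k-1 folds into one training set
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores))

# Usage with a dummy scorer that just reports the test-fold fraction
X = np.arange(100).reshape(100, 1)
y = np.zeros(100)
avg = manual_cross_validation(
    X, y, lambda Xtr, ytr, Xte, yte: len(Xte) / (len(Xtr) + len(Xte)))
print(avg)
```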
    User: "varunm1"
    New Altair Community Member
    Accepted Answer
    Updated by varunm1
    @pblack476 I don't think so; it's a regular deep network with multiple fully-connected layers and a high degree of customization.