Hi,
I am currently trying to understand the output of the W-MultilayerPerceptron operator. Consider a toy model with no hidden layers; its output might look like this:
Linear Node 0
    Inputs    Weights
    Threshold    0.4052907755005098
    Attrib O3    -0.2617907901506467
    Attrib NO2    -0.05083306647141619
    Attrib Altitude    -0.14881316186685326
    Attrib z    0.35660878655615114
    Attrib sza_rad    -0.44846864905805994
Class
    Input
    Node 0
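For reference, my reading of this output is that a hidden-layer-free MLP is just an affine function of the normalized inputs. A minimal sketch of how I would evaluate "Linear Node 0" by hand, assuming (this is an assumption, not confirmed) that the operator min-max normalizes every attribute to [-1, 1] before it reaches the network:

```python
# Sketch: evaluating the "Linear Node 0" above by hand.
# ASSUMPTION: the MLP operator min-max normalizes each attribute to
# [-1, 1] using the min/max observed in the training data.

def normalize(x, lo, hi):
    """Assumed Weka-style min-max scaling to [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Weights copied from the model output above.
threshold = 0.4052907755005098
weights = {
    "O3":       -0.2617907901506467,
    "NO2":      -0.05083306647141619,
    "Altitude": -0.14881316186685326,
    "z":         0.35660878655615114,
    "sza_rad":  -0.44846864905805994,
}

def linear_node(sample, ranges):
    """sample: raw attribute values by name.
    ranges: per-attribute (min, max) from the training data
    (hypothetical here). Returns the node output, which would
    still live in the normalized label space."""
    out = threshold
    for name, w in weights.items():
        out += w * normalize(sample[name], *ranges[name])
    return out
```

With this reading, any raw input whose attributes all sit at the midpoint of their training ranges would map to the threshold value itself, since every normalized input is then zero.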
From my understanding this should be equivalent to a linear regression. So I trained a LinearRegression model on the same input data, using the predictions of the above "MLP" as the label (to rule out differences in the fitting algorithm). The results show that this model indeed reproduces the "MLP" predictions perfectly; its coefficients, however, are completely different:
- 0.0000070221 * O3
- 0.0000717637 * NO2
- 0.0004435178 * Altitude
+ 0.0003188475 * z
- 0.0040543204 * SZA*pi/180.
+ 0.0145570907
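If my normalization hypothesis is right, the two sets of coefficients should differ only by per-attribute scale factors: substituting x' = 2(x - min)/(max - min) - 1 into the MLP's affine form gives a raw-space slope of w_norm * 2/(max - min), times a label denormalization factor if the class is rescaled as well. A hedged sketch of that conversion (all min/max values would come from the training data; the [-1, 1] conventions are my assumption):

```python
# Sketch: mapping a weight learned on [-1,1]-normalized inputs back to
# a raw-space regression coefficient. The scaling conventions below are
# ASSUMPTIONS about what the operator does, not confirmed behaviour.

def raw_slope(w_norm, attr_min, attr_max,
              label_min=0.0, label_max=1.0, label_normalized=False):
    """Chain rule through the assumed min-max scalings:
    input:  x' = 2*(x - attr_min)/(attr_max - attr_min) - 1
    label:  optionally also mapped to [-1, 1], so the inverse mapping
            contributes a factor (label_max - label_min)/2."""
    slope = w_norm * 2.0 / (attr_max - attr_min)
    if label_normalized:
        slope *= (label_max - label_min) / 2.0
    return slope
```

For example, a normalized weight of -0.26 on an attribute ranging over [10, 60] would correspond to a raw-space coefficient of -0.26 * 2/50 = -0.0104 (before any label denormalization), which is the kind of rescaling that could explain the discrepancy above.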
I assume this is because of the normalization the MLP operator performs internally. So here is my question: if I want to implement the above "MLP" in my own code, how must I pre-process my input data and post-process the model's output?
Thanks in advance for your replies