Is it possible to download the inputs selected by Auto Model and their corresponding parameters?
varunm1
New Altair Community Member
Hello,
I am working with Auto Model on my data set with 77 attributes. I am trying to get all the details of the attribute (column) analysis done by Auto Model (Correlation, ID-ness, Stability and Missing Values). Is it possible to download the data shown by Auto Model to Excel or any other file format?
One more question: what is the "?" in the ID-ness column in Auto Model?
Thanks,
Varun
Best Answer
Hi @varunm1,
Martin is right, we currently do not have any way to export those numbers, since there is no operator for this.
> One more question: what is the "?" in the ID-ness column in Auto Model?
ID-ness is only calculated for nominal columns and integer columns (real numbers are rarely used as IDs anyway). The rationale for this is that most real-valued columns would otherwise be (falsely) identified as IDs, since it is very likely that they show 100% ID-ness.
We could of course still calculate the ID-ness nevertheless, but we found in some usability tests that people are then confused by the inconsistency that some columns with 100% ID-ness (the nominal ones) are excluded by Auto Model while others (the real-valued ones) are not. So we decided not to calculate ID-ness for real-valued columns at all, to avoid that confusion.
(...and yes, this confusion can still happen for integer columns, but there a 100% ID-ness actually IS more frequently a real ID, and people just accept that handling without questioning it.)
Hope this helps,
Ingo
Answers
Hi @varunm1,
Have a look at Extract Statistics in the Operator Toolbox. It gives you all the statistics of the normal Statistics view. You can join this with Weight by Correlation. Then you already have two of them.
BR,
Martin
Hi, I would like to add my 5 cents here, as I would also like to have the ability to access the details about variable quality used in Auto Model. Here's my current use case:
- I have a dataset with 450+ attributes.
- I start with Auto Model just to get a feeling for how the data is structured and what modelling capabilities are there 'out of the box'.
- Auto Model checks the input quality metrics, which results in removing around 300+ 'bad' attributes, so I am left with a diminished dataset containing only quality attributes.
- From here, I would like to continue with the diminished dataset and perform further feature selection outside Auto Model.
- Ideally I would like to have an operator which would detect all attributes with ID-ness, stability and correlation above certain configurable thresholds, so I can execute this in the scope of a separate modelling process (not within Auto Model) and also have access to all the quality metrics of the attributes (see the sketch after this list for a rough illustration).
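There is no such operator as far as this thread goes, but a minimal pandas sketch of that kind of check could look like the following. Everything here is an assumption for illustration: the column names, the thresholds, and the exact way the measures are computed are not Auto Model's actual implementation.

```python
import pandas as pd

# Toy data; in practice this would be your own dataset with a label column.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],                  # ID-like: every value distinct
    "country":     ["DE", "DE", "DE", "DE", "DE"],   # stable: a single constant value
    "age":         [23, 35, 41, 35, 23],
    "label":       [0, 1, 0, 1, 1],
})

label = "label"
quality = []
for col in df.columns.drop(label):
    s = df[col]
    n = len(s)
    id_ness   = s.nunique(dropna=True) / n                # fraction of distinct values
    stability = s.value_counts(dropna=True).iloc[0] / n   # share of the most frequent value
    missing   = s.isna().mean()                           # fraction of missing cells
    # absolute correlation with the label, only where it is defined
    corr = abs(s.corr(df[label])) if pd.api.types.is_numeric_dtype(s) else float("nan")
    quality.append({"attribute": col, "id_ness": id_ness, "stability": stability,
                    "missing": missing, "correlation": corr})

quality = pd.DataFrame(quality)
print(quality)

# Configurable thresholds (assumed values, tune for your data)
bad = quality[(quality.id_ness > 0.95) | (quality.stability > 0.95) |
              (quality.missing > 0.5) | (quality.correlation > 0.95)]
kept = df.drop(columns=bad.attribute.tolist())
```

The resulting `quality` table plays the role of the "attribute quality" view in Auto Model, and the last two lines are the threshold-based filter described above.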
I am sure you could. Well, here is your open mic!
Indeed. Matter of fact, here's what I'm going to do: I'm going to "open" the Product Feedback and Product Ideas categories for new discussions for all moderators (which is all unicorns + RapidMiner people, etc.). So @kypexin, you can skip a few steps and simply post at will. The only thing I ask is that you please also "tag" your posts as either "Bug Report" or "Feature Request" when you do. I will post the direct links in the Unicorn Stable in a couple of minutes.
Scott
Hi,
I have a similar problem, but with 1000 attributes. I have checked the underlying process of Auto Model, and it simply filters the bad attributes out by name using a Select Attributes operator (in Process -> Preprocessing -> Remove Columns?). This feels a lot like a black box to me...
I hope we get the operator and the fixed Auto Model process soon!
Edit: A workaround is to use the Auto Model process and only keep the preprocessing part.
Regards,
Sebastian
Sorry, but...
> This feels a lot like a black box to me...
How is this a black box? We show you the table, you select the set of attributes you want to use with checkmarks, we select them. It can't be less black-boxy, in my opinion.
Hi @varunm1,
I think the main difference is that ID-ness means that all values are distinctly different (like 1, 2, 3, ... or mostly all-different nominal values), while stability means that nearly all values are the same. Similar concepts, but not exactly the same.
Cheers,
Jan
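To make the two measures concrete, here is a tiny Python sketch. These are plausible definitions for illustration only, not Auto Model's actual formulas.

```python
from collections import Counter

def id_ness(values):
    """Fraction of rows with a distinct value: 1.0 means every value is unique (ID-like)."""
    return len(set(values)) / len(values)

def stability(values):
    """Fraction of rows covered by the single most frequent value: 1.0 means constant."""
    return Counter(values).most_common(1)[0][1] / len(values)

ids   = [101, 102, 103, 104]       # every value different
const = ["DE", "DE", "DE", "DE"]   # every value the same
print(id_ness(ids), stability(ids))      # 1.0 0.25
print(id_ness(const), stability(const))  # 0.25 1.0
```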
Thanks, Jan. Yep, they are quite similar but not the same. I am trying to understand how both of them are helpful in Auto Model when selecting attributes (we can set a condition based on one of these), or is there any other use for this?
Hi @IngoRM,
I meant that when I wanted to see what Auto Model was doing in the background, there was nothing in the process that pointed out how these quality measures were being calculated. We can discuss whether it is a black box or not, but I think we can agree that this is not desired.
Regards,
Sebastian
Ok, let's indeed not get into semantics here. The issue is that we cannot actually do this via operators alone, at least not in all cases. The reason is that the ultimate decision about what to include has been made by the user, not RapidMiner. Auto Model only shows recommendations (the traffic lights); the user decides whether to follow them or not. Especially in the case where the user wants to keep a column which would have been recommended for removal, I do not see an easy way to achieve this via operators (of course it is possible by keeping all original columns in an extra data set, using an operator that selects based on the recommendations, selecting the ones which should be kept despite the recommendation from the original set, and joining them back together, but, you know... that does not really seem justified here...). Hope this better explains why we show the recommendations, the user makes the selection, and we simply apply the selection.
One improvement I could think of is to add the reasons for the recommendation in the annotation of the Select Attributes operator, at least for all attributes where the user followed the recommendation. But that again would not explain the cases where the user does not, or where the user has a different reason for (de-)selecting...
To be honest, I personally think it would be best to keep this as is, and if this is important, you can always annotate yourself. With the upcoming deployment offering of RapidMiner, there will also be more ways of adding annotations to models which could be used for that...
Hope this helps,
Ingo
Interesting discussion about the black box topic, but let's not lose track of the original suggestion here, which still has merit in my view.
Regardless of the selection method used (and I can see the arguments for leaving it the way it is in Auto Model), the current Auto Model process calculates a value for each attribute for 5 quantities: correlation, ID-ness, stability, missing, and text-ness (that's a new one!).
It would be nice to have an operator which generated these same values inside any process and provided the results as a dataset. You could then use that operator to create filtering/weighting/selection rules of your own choosing based on whatever threshold values you wanted. Currently you can do that for things like missing value percentage or correlation (because there are operators that can be used to calculate those) but not for the others (as far as I know). So there is still a gap in the capabilities of Auto Model vs. non-Auto-Model processes.
@Telcontar120 That I fully agree with! We have it on our list anyway; I just wanted to manage expectations that this likely won't change the processes generated by Auto Model, that's all.
> I am trying to understand how both of them (ID & Stability) are helpful in Auto Model when selecting attributes (we can set a condition based on one of these) or is there any other use for this?
Well, in general this kind of thing (constant columns or ID-like columns) is something I pay attention to, not just when building models but when I work with data in general (e.g. for creating visualizations). But for modeling, it just makes a ton of sense to exclude them, since it will make your models faster and likely better. Hence the recommendations in the Select Inputs step.
A bit more detail: ID-like attributes are typically not helpful because you cannot really generalize from these columns, i.e. there is nothing to learn if all (categorical) values are different. And they can be really problematic for entropy-based learners like decision trees. Stable columns typically do not hurt much (a little bit for distance-based learners, but...), they are just not necessary and slow things down.
Hope this helps,
Ingo
Sorry if my question was not clear. Doesn't 0 percent stability mean 100 percent ID-ness? So what I am thinking is: are both of these measures necessary, or is having the stability measure enough for the model, so that we can decide based on that alone?
Ah, sorry, now I get you.
> Doesn't 0 percent stability mean 100 percent ID?
Yes, but you can also have, let's say, 50% stability and still 100% ID-ness (e.g. with a data set of two rows with two different values). So since "stability = 100% - ID-ness" is not always true, we simply show both values and base the recommendation on each of them individually...
Hope that helps,
Ingo
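The two-row example works out like this (just an illustration of the arithmetic, using the plausible definitions sketched earlier, not Auto Model's code):

```python
col = ["A", "B"]                                              # two rows, two different values
id_ness   = len(set(col)) / len(col)                          # 2 distinct / 2 rows = 1.0 (100%)
stability = max(col.count(v) for v in set(col)) / len(col)    # most frequent value covers 1 of 2 rows = 0.5 (50%)
print(id_ness, stability)                                     # so stability is not simply 100% - ID-ness
```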
Great thread! What's the basis of the new quality measure "text-ness"? Thanks!
Hi,
Text-ness (can somebody please suggest a better name?) consists of three things:
- the ID-ness
- the fraction of cells in the column which have more than one token (word)
- the average length of the cells in the column, relative to 75
Those three values are then averaged to get the text-ness. So if the values in the column are all different, all consist of multiple tokens, and are all 75 characters or longer, you would get a text-ness of 1.
While certainly not perfect, this simple heuristic does relatively well at identifying which columns are texts vs. regular nominal columns.
Hope this helps,
Ingo
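For anyone who wants to reproduce something like this outside Auto Model, here is a rough Python sketch of the heuristic as described above. Only the three components and the 75-character reference come from the post; the whitespace tokenization, the capping at 1, and the column handling are assumptions, not the actual implementation.

```python
import pandas as pd

def text_ness(col: pd.Series) -> float:
    """Average of three simple signals: ID-ness, fraction of multi-token cells,
    and average cell length relative to 75 characters (capped at 1)."""
    s = col.dropna().astype(str)
    n = len(s)
    if n == 0:
        return 0.0
    id_ness     = s.nunique() / n
    multi_token = (s.str.split().str.len() > 1).mean()   # cells with more than one word
    rel_length  = min(s.str.len().mean() / 75.0, 1.0)    # assuming 75 chars counts as "long"
    return (id_ness + multi_token + rel_length) / 3.0

# Example: a free-text column scores high, a category-like column scores low.
reviews    = pd.Series(["great product, works as advertised and arrived quickly",
                        "terrible support, would not recommend this to anyone at all",
                        "average quality but very good value for the price overall"])
categories = pd.Series(["red", "blue", "red", "green"])
print(text_ness(reviews))     # close to 1
print(text_ness(categories))  # well below 1
```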