- Support Vector Clustering (here it is!): the output is not recognised as either a flat cluster model or a hierarchical one
- KernelKMeans: the same problem; moreover, the "neural" choice is not available under "choose the kernel type"
- FlattenClusterModel: when "performance?" is set to true, checking the experiment's syntax does not recognize the "performance vector" it produces
I have been running one process for 24 hours now. I had reduced the maximum number of iterations to 1000 and increased the memory to 2 GB, but I still haven't got any results. I should be able to go to 3 GB, since I have a 64-bit machine with 4 GB of RAM. Probably time to get more memory!
Is it possible to subset the data into smaller chunks, then do the clustering on each chunk and combine the final clustering results?
Regards,
Vijay
Hi,
besides increasing the total amount of memory available to RM, you probably also have to increase the memory defined by the kernel_cache parameter. However, since memory prices are rather low at the moment, simply increasing the total amount of memory is probably the easiest option if you have a 64-bit system anyway.
Cheers,
Ingo
Is it possible to subset the data into smaller chunks, then do the clustering on each chunk and combine the final clustering results?
In principle, yes. You could, for example, use the cross validation operator (with a dummy learner) for sampling, placing an ExampleSetWriter with the macro option %{a} in its filename to build k disjoint parts of your data. Then apply the clustering to each part individually and merge the results with the ExampleSetMerge operator. It might, however, be necessary to remap the cluster labels appropriately before merging.
Cheers,
Ingo
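As an illustration of the chunk-and-merge approach described above, here is a minimal sketch outside RapidMiner, assuming numpy and scikit-learn and using plain KMeans as a stand-in for the kernel-based clusterer; the chunk count and data are made up for the example.

# Cluster disjoint chunks separately, then stack the labeled chunks back
# together (the counterpart of the ExampleSetMerge step in the thread).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(20000, 10))       # stand-in for the large example set
n_chunks, n_clusters = 4, 5

chunks = np.array_split(data, n_chunks)   # disjoint parts of the data
labeled_parts = []
for i, chunk in enumerate(chunks):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=i).fit_predict(chunk)
    labeled_parts.append(np.column_stack([chunk, labels]))

merged = np.vstack(labeled_parts)
# Caveat: cluster ids are only consistent within a chunk; they still have to be
# remapped onto a common labeling before the merged result is meaningful.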
Yes, that's the problem. Right now we don't have an operator for that, but it would probably be a good idea to write a general operator which maps each group onto the best-matching group of another attribute, based on the data points in those groups. This could also be useful for cluster evaluation, by comparing the clusters that were found with predefined groups.
Cheers,
Ingo
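The "best matching group" mapping discussed in the previous reply can be sketched with a contingency table plus an optimal one-to-one assignment. The snippet below is only an illustration (the function name best_matching_map and the toy arrays are invented for the example; it uses scipy's linear_sum_assignment), not an existing RapidMiner operator.

# Map each found cluster to the reference group it best matches, by maximizing
# the total overlap between clusters and groups (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_matching_map(cluster_ids, group_ids):
    clusters = np.unique(cluster_ids)
    groups = np.unique(group_ids)
    # contingency[i, j] = number of points in cluster i that fall into group j
    contingency = np.zeros((len(clusters), len(groups)), dtype=int)
    for ci, c in enumerate(clusters):
        for gi, g in enumerate(groups):
            contingency[ci, gi] = np.sum((cluster_ids == c) & (group_ids == g))
    row, col = linear_sum_assignment(-contingency)   # maximize total overlap
    return {clusters[r]: groups[c] for r, c in zip(row, col)}

# Example: remap the labels of one clustering onto a reference labeling.
found = np.array([0, 0, 1, 1, 2, 2, 2])
reference = np.array([2, 2, 0, 0, 1, 1, 1])
mapping = best_matching_map(found, reference)        # {0: 2, 1: 0, 2: 1}
remapped = np.array([mapping[c] for c in found])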
Hello Vijay, Hello Tobias, Hello Ingo,
Working with RM 4.2...
I have a problem with support vector clustering, actually with these three operators:
<operator name="analyse" class="OperatorChain" expanded="yes">
<operator name="EvolutionaryParameterOptimization" class="EvolutionaryParameterOptimization" expanded="yes">
<list key="parameters">
<parameter key="KernelKMeans.kernel_degree" value="[0.0;2.147483647E9]"/>
<parameter key="KernelKMeans.k" value="[2.0;2.147483647E9]"/>
</list>
<operator name="KernelKMeans" class="KernelKMeans">
<parameter key="add_cluster_attribute" value="false"/>
<parameter key="kernel_type" value="KernelPolynomial"/>
</operator>
<operator name="ItemDistributionEvaluator" class="ItemDistributionEvaluator">
<parameter key="keep_flat_cluster_model" value="false"/>
<parameter key="measure" value="SumOfSquares"/>
</operator>
</operator>
</operator>
Can you reproduce these behaviours?
Cheers,
Jean-Charles.
You have at least two options:
- Reduce the maximum number of iterations from 100000 to a smaller value, say 1000. This might, of course, affect the quality of the output.
- Increase the size of the kernel_cache (for 20000 examples you would need about 3 GB of memory to cache the full kernel matrix). Try larger values and increase the amount of memory available to RapidMiner if necessary / possible. This should lead to a great speed-up without losing quality.
Cheers,
Ingo
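As a quick sanity check of the 3 GB figure above, here is a minimal Python calculation, assuming one 8-byte double is stored per pair of examples in the full kernel matrix:

# Memory needed to cache the full kernel matrix for n examples,
# at 8 bytes (one double) per kernel value.
n = 20000
bytes_needed = n * n * 8          # 3.2e9 bytes
print(bytes_needed / 2**30)       # ~2.98 GiB, i.e. roughly the 3 GB quoted above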