Optimizing speed of GBT model
I am using a gradient boosted tree model for a classification task. The features are mostly textual fields pulled from a Redshift database and treated as categorical features to predict a class label for each row. Do you have any general tips or tricks for making the model run faster without losing prediction quality? Should I play around with different tree count/depth settings or other configuration? Right now the full read-train-predict-update-database cycle takes around 1 hour for 10,000 rows; if that could be cut in half it would be amazing.
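To make the setup concrete, here is a stripped-down sketch of the kind of pipeline I mean. The column names, the toy DataFrame, and the use of scikit-learn's `HistGradientBoostingClassifier` are stand-ins rather than my actual code; the point is just the stages (encode, train, predict) and the knobs (number of trees, depth, early stopping) I'm asking about, plus timing each stage to see where the hour actually goes.

```python
import time
import numpy as np
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.ensemble import HistGradientBoostingClassifier

def timed(label, fn, *args, **kwargs):
    """Run fn and print how long it took, to find the slow stage."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.1f}s")
    return result

# Toy stand-in for the rows pulled from Redshift (hypothetical columns).
df = pd.DataFrame({
    "field_a": ["red", "blue", "green", "blue"] * 2500,
    "field_b": ["x", "y", "x", "z"] * 2500,
    "label":   [0, 1, 0, 1] * 2500,
})
cat_cols = ["field_a", "field_b"]   # hypothetical categorical text columns
X, y = df[cat_cols], df["label"]

# Encode text categories as integers once, up front, instead of one-hot
# encoding thousands of dummy columns. Unknown categories at predict time
# become NaN, which the model treats as missing.
encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=np.nan)
X_enc = timed("encode", encoder.fit_transform, X)

# Histogram-based boosting is usually much faster than exact tree building
# and can treat integer-coded columns as categorical natively. Note: each
# categorical column must have at most max_bins (default 255) distinct
# values for this to work; higher-cardinality fields need other handling.
model = HistGradientBoostingClassifier(
    max_iter=100,         # number of boosting rounds -- lower this first if training dominates
    max_depth=6,          # shallower trees train faster, often with little accuracy loss
    learning_rate=0.1,
    categorical_features=[True] * len(cat_cols),
    early_stopping=True,  # stop adding trees once the validation score plateaus
)
timed("train", model.fit, X_enc, y)
preds = timed("predict", model.predict, X_enc)
```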