Friday, February 17, 2012

What's the best machine learning algorithm?

Correct answer: “it depends”.

Next best answer: Random Forests.

A Random Forest is a machine learning procedure that trains a large number of individual decision trees and aggregates their predictions. It works for generic classification and regression problems; is robust to different variable input types, missing data, and outliers; has been shown to perform extremely well across large classes of data; and scales reasonably well computationally (it's also map-reducible). Perhaps best of all, it requires little tuning to get good results. Robustness and ease of use don't get the appreciation they deserve in machine learning (not the way buzzwordy names do, anyway), and it's hard to beat tree ensembles, and Random Forests in particular, on these dimensions.

Random forests work by generating a large number (typically hundreds) of decision trees in a specific random way, such that each tree is de-correlated from the others. Each decision tree on its own is a low-bias, high-variance estimator; averaging many of them leaves the bias untouched, but because the trees are relatively uncorrelated, much of the variance cancels out, so the final prediction has low bias AND low variance. Magic. The trick is getting trees trained on the same dataset to be uncorrelated. This is accomplished by training each tree on a bootstrap sample (a random sample drawn with replacement) of the data points, and by evaluating only a randomly sampled subset of the features at each node of each tree.
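To make that recipe concrete, here's a toy from-scratch sketch (my own illustration, not code from this post), leaning on scikit-learn's DecisionTreeClassifier for the base trees:

```python
# Toy random forest: bootstrap each tree's training set, and let each
# split consider only a random subset of features. Assumes X and y are
# numpy arrays and class labels are non-negative integers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ToyRandomForest:
    def __init__(self, n_trees=100, seed=0):
        self.n_trees = n_trees
        self.rng = np.random.RandomState(seed)
        self.trees = []

    def fit(self, X, y):
        n = X.shape[0]
        for _ in range(self.n_trees):
            # Bootstrap: draw n rows with replacement for this tree.
            idx = self.rng.randint(0, n, n)
            # max_features='sqrt' makes the tree evaluate a fresh random
            # subset of features at every split -- the de-correlation trick.
            tree = DecisionTreeClassifier(max_features='sqrt',
                                          random_state=self.rng.randint(1 << 30))
            self.trees.append(tree.fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        # Aggregate: majority vote across the trees' predictions.
        votes = np.array([t.predict(X) for t in self.trees]).astype(int)
        return np.array([np.bincount(votes[:, i]).argmax()
                         for i in range(votes.shape[1])])
```

Real implementations add conveniences on top of this (out-of-bag error estimates, variable importance), but the core is just bootstrapping plus per-split feature sampling plus voting.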

Put simply: if you have a machine learning problem and you don't know what to use, use random forests. Here, in table form (courtesy of Hastie, Tibshirani and Friedman), is why:

[Table from The Elements of Statistical Learning comparing the characteristics of learning methods (trees, neural nets, SVMs, kNN, etc.) on robustness, predictive power, interpretability, and more.]
Random forests inherit most of the good attributes of "Trees" in the above chart, but add state-of-the-art predictive power. Their main drawbacks are interpretability (something most other highly predictive algorithms do even worse on) and computational cost at prediction time: if you need real-time predictions in production, it can be hard to justify evaluating hundreds or thousands of trees per prediction.

If you are interested in playing around, grab the randomForest R package.
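Or, if you'd rather stay in Python, scikit-learn ships an implementation too. A minimal sketch (the dataset and parameter choices here are just illustrative, not from the original post):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# Essentially no tuning: pick a forest size and go.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```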

I recently heard the president of Kaggle, Jeremy Howard, mention that Random Forests seem to show up in a disproportionate number of winning entries in their data mining competitions. Cross-validation, I call that.


More reading:
  • A Comparison of Decision Tree Ensemble Creation Techniques
  • An Empirical Comparison of Supervised Learning Algorithms

Wednesday, February 15, 2012

Recurrent -- A python library for natural language parsing of recurring events

For a project I'm working on, I needed the ability to turn a natural language phrase like "every other saturday starting next month" into iCalendar-standard RRULEs. I couldn't find a Python library that implemented this, so I built one. Check it out on GitHub.


Here are some example input phrases and output recurrence rules:


  • 'on weekdays' => 'RRULE:BYDAY=MO,TU,WE,TH,FR;INTERVAL=1;FREQ=WEEKLY'
  • 'daily starting march 3rd until april 5th' => 'DTSTART:20120303\nRRULE:FREQ=DAILY;INTERVAL=1;UNTIL=20120405'
  • 'the first and third friday of every month' => 'RRULE:BYDAY=1FR,3FR;INTERVAL=1;FREQ=MONTHLY'
  • 'once a year on the fourth thursday in november' => 'RRULE:BYMONTH=11;BYDAY=4TH;INTERVAL=1;FREQ=YEARLY'
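If you want to kick the tires, here's a minimal usage sketch (the RecurringEvent class and method names reflect my reading of the repo's README, so treat them as assumptions rather than a stable API):

```python
import datetime
from recurrent import RecurringEvent

# Parse relative to a fixed "now" so relative phrases like
# "next month" resolve deterministically.
r = RecurringEvent(now_date=datetime.datetime(2012, 2, 15))
r.parse('every other saturday starting next month')
print(r.get_RFC_rrule())
```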
It's currently an alpha release, so please submit any issues you find.