Random thoughts on the Super Bowl and statistics

Being an avid fan of the NFL, I read a lot of articles about the upcoming big game, such as this one: http://sports.espn.go.com/nfl/playoffs07/news/story?id=3208171

This one caught my eye because of the similarity to how we do our work.  They ran 10,000 simulations of the Patriots vs. Giants to come up with a prediction for who would win.  Why run 10,000?  Why not just one simulation?  Well, lots of fluky things can happen just by chance in football, so a single simulation might not be very predictive.  In fact, there are so many possibilities that even 10 or 100 runs might not capture the diversity (although I bet 1,000 might be OK).
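To give a feel for why one run isn't enough, here's a tiny sketch in Python.  It's purely a toy model of my own (a coin flip with a made-up "true" win probability, not ESPN's actual simulator): with only a handful of simulated games the estimate bounces around a lot, and it only settles down once you run thousands of them.

```python
import random

# Toy model (my assumption, not ESPN's simulator): each simulated game is a
# coin flip in which the Patriots win with some fixed "true" probability.
TRUE_PATRIOTS_WIN_PROB = 0.6  # hypothetical value, chosen only for illustration

def estimate_win_prob(n_simulations, rng):
    """Estimate the Patriots' win probability from n simulated games."""
    wins = sum(rng.random() < TRUE_PATRIOTS_WIN_PROB for _ in range(n_simulations))
    return wins / n_simulations

rng = random.Random(42)
for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>6} simulations -> estimated win prob = {estimate_win_prob(n, rng):.3f}")
```

A single simulation can only ever say 0% or 100%, and even 10 or 100 runs can land well off the underlying number just by luck; by 10,000 runs the estimate is quite close to it.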

This is very similar to our work, since how a protein folds (or misfolds) can't be captured in a single simulation either.  We often run 10,000 simulations to capture the diversity and complexity.  There is an element of luck here too: fluky things can happen in folding as well (a protein may fall into a trapped configuration, or fold correctly early, just by chance).
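In the same toy spirit, here's a sketch of why many folding trajectories matter.  This is entirely my own stand-in model with made-up numbers (a hypothetical trapping probability and folding time), not the actual FAH analysis: each run either gets stuck in a trap or folds after a random waiting time, and only with many runs do you see both the lucky fast folders and the unlucky trapped ones in sensible proportions.

```python
import random

# Toy kinetic model (my assumption, not the real FAH analysis): a trajectory
# either falls into a trap and never folds within the simulated window, or it
# folds after an exponentially distributed waiting time.
TRAP_PROB = 0.3          # hypothetical chance a run gets stuck in a trap
MEAN_FOLD_TIME_NS = 500  # hypothetical mean folding time, in nanoseconds
WINDOW_NS = 1000         # length of each simulated trajectory

def run_trajectory(rng):
    """Return the folding time in ns, or None if the run stays trapped/unfolded."""
    if rng.random() < TRAP_PROB:
        return None
    t = rng.expovariate(1.0 / MEAN_FOLD_TIME_NS)
    return t if t <= WINDOW_NS else None

rng = random.Random(7)
for n in (1, 10, 100, 10000):
    times = [run_trajectory(rng) for _ in range(n)]
    folded = [t for t in times if t is not None]
    frac = len(folded) / n
    mean_t = sum(folded) / len(folded) if folded else float("nan")
    print(f"{n:>6} runs -> folded fraction {frac:.2f}, mean folding time {mean_t:7.1f} ns")
```

With only one or ten runs you might see nothing but traps, or nothing but fast folders, and conclude the wrong thing; the averages only become trustworthy once you have thousands of independent trajectories.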

It’s neat how games (such as Madden 2008) are becoming more like full-fledged simulations, with a lot of real-world detail, while real-world simulations (such as the molecular dynamics we do in FAH) are running on game machines!