Wolfram|Alpha is a “computational knowledge engine” meant to make a large variety of databases available through simple text queries. The idea is that instead of digging through many databases, you query one big engine that can combine disparate results.
As with anything so ambitious, WA has collected plenty of criticism, especially over its opaque natural language interface. Why does “life expectancy of 30 yo US woman in 2002” work, but not “life expectancy of 30 yo US woman from 2002-2003”? Farhad Manjoo at Slate has a great article showing how straying from the example queries leads to the infamous “Wolfram|Alpha isn’t sure what to do with your input” response. SomethingAwful has put together a collection of ridiculous queries, including how much money BloodRayne made per millisecond.
The WA health examples demonstrate a number of problems that make me distrust some of the results. Their example of “Mayo Clinic, Olmsted Medical Center” is supposed to compare two large medical centers in Rochester, Minn. However, it actually compares the Mayo Clinic satellite in Jacksonville, FL, with Rochester. Even that apples-to-oranges comparison is hampered because WA has no data for Mayo in Jacksonville. Try finding data on Mount Sinai Hospital in Miami — the only Mount Sinai that WA admits to knowing is in New York City (and has no affiliation with the one in Miami).
WA says that for a 50-year-old with a resting HR of 60, the maximum HR is 180 and the target exercise HR is 132-156. However, the AHA says that the target rate is 50%-80% of the maximum, and that a 50-year-old’s maximum is 170. Maybe WA’s numbers are better, but it’s tough to tell without references or justification.
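WA doesn’t say which formula it uses, but its 132-156 range is reproduced exactly by the Karvonen (heart rate reserve) method at 60%-80% intensity with a max HR of 180 — a guess on my part, not anything WA documents. A quick sketch comparing that against the simple percent-of-max method:

```python
def karvonen_target(max_hr, resting_hr, intensity):
    """Karvonen target HR: resting + intensity * (max - resting)."""
    return resting_hr + intensity * (max_hr - resting_hr)

# Guess: WA's 132-156 for a 50-year-old with resting HR 60 matches
# Karvonen at 60%-80% intensity with a max HR of 180.
print(karvonen_target(180, 60, 0.6), karvonen_target(180, 60, 0.8))  # 132.0 156.0

# Percent-of-max method with max = 220 - age = 170 gives a very different zone:
print(170 * 0.5, 170 * 0.8)  # 85.0 136.0
```

The two methods disagree substantially, which is exactly why WA should say which one it uses.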
WA says it calculates heart disease risk based on the Framingham study, but I get different results. (Assuming LDL 111, HDL 54, BP 120/80, nonsmoker, not diabetic.) Using the male score sheet from Wilson et al., “Prediction of Coronary Heart Disease Using Risk Factor Categories,” Circulation 1998;97(18):1837-1847, I get 6%, versus WA’s 4.6%.
As the score sheets only return whole numbers, WA is likely using the Framingham Cox model discussed in the same paper. However, even using that I get 5.4%, a solid 0.8 percentage points more than WA’s result. (For sticklers, my work is after the “more.”)
I would encourage anyone who uses Wolfram|Alpha for medical decisions to run the numbers on their own. I would like WA to be more explicit about where its data comes from and how its results are derived.
See Appendix 1 of the Circulation paper for background.
Assuming LDL 111, HDL 54, BP 120/80, nonsmoker, not diabetic:
L_LDL = 0.04808 * 50 (the age term) + (-0.0471) (the term for an HDL of 54). (No additions for normal BP and LDL.) So L_LDL = 2.3569.
G_LDL is 3.00069 (this is the "Linear predictor at risk factor means.")
A = L_LDL - G_LDL = -0.64379
B = exp(A) = 0.525298
P = 1 - s(t)^B; s(t) = 0.90017 for the LDL model (men), so P = 0.053748, or 5.4%.
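The steps above can be run end-to-end in a few lines of Python. The coefficients are the ones used in my calculation (nominally from Appendix 1 of the Circulation paper); check them against the paper before relying on the result.

```python
import math

# Coefficients as used above (LDL model, men) -- verify against
# Appendix 1 of Wilson et al., Circulation 1998.
AGE_COEF = 0.04808   # age term, per year
HDL_TERM = -0.0471   # category term for an HDL of 54
G_LDL = 3.00069      # linear predictor at risk-factor means
S_T = 0.90017        # 10-year baseline survival, men

def framingham_10yr_risk(age, hdl_term=HDL_TERM):
    """10-year CHD risk from the Framingham LDL Cox model."""
    l_ldl = AGE_COEF * age + hdl_term  # no additions for normal BP and LDL
    b = math.exp(l_ldl - G_LDL)
    return 1 - S_T ** b

print(f"{framingham_10yr_risk(50):.4f}")  # 0.0537, i.e. 5.4%
```

This reproduces the 5.4% figure above, versus WA’s 4.6%.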