Just to clean up some language: First, epidemiology is not "the study of injuries and the associated pathology." Mr. Howell's definition remakes the field into something closer to clinical pathology, focused on the individual. Not unreasonable from an engineer who views things from an experimental perspective. But epi is the study of the distribution, causes, and control of disease in populations. Various textbooks may use terms like "patterns" instead of "distribution," and for a while now, injuries have been included under the rubric. But epi, and the biostatistics that accompany it, must study groups of people, often large ones.
The difference is significant, because if I am a member of a cohort that is at higher risk for heart disease, for instance, that doesn't mean I will inevitably get heart disease, or that some causal factor such as trans fats in my potato chips will be what fells me. Put another way, epidemiological risk is about the odds of something happening, and the forces behind those odds can be bewilderingly complex. A correlation that may (or may not) signal risk is not an individual cause. This is why a lot of expert nutritional advice, for example, gets reversed down the line. Eggs get you because they have high cholesterol, so cholesterol is the cause. Then, OK, eggs get some people, while others in the same group of egg eaters seem fine, so cholesterol sensitivity, on a genetic basis, is the cause. Then, well, maybe it isn't the cholesterol at all; maybe it's another substance in the yolk that gets metabolized into something inflammatory by some gut biota in some people. Or maybe it's both. Check back.
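To make the odds point concrete, here's a toy cohort calculation. The rates are invented for illustration, not real heart-disease figures: even a doubled relative risk leaves the large majority of the "high-risk" group disease-free.

```python
def disease_counts(n, baseline_rate, relative_risk):
    """Expected cases in an unexposed vs exposed cohort of size n each."""
    unexposed_cases = n * baseline_rate
    exposed_cases = n * baseline_rate * relative_risk
    return unexposed_cases, exposed_cases

n = 10_000
unexposed, exposed = disease_counts(n, baseline_rate=0.05, relative_risk=2.0)
# Exposed cohort: ~1,000 expected cases out of 10,000. The risk has
# doubled, yet 90% of the "high-risk" group never gets the disease,
# and nothing here says which individuals will be the unlucky ones.
print(f"unexposed cases: {unexposed:.0f} / {n}")
print(f"exposed cases:   {exposed:.0f} / {n}")
```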
So: Skis do not cause ACL injuries, then. Certain classes of falls increase the risk of ACL shear. For the most part, we are still unclear exactly why some people's ACLs shear in this kind of fall while others' don't. It may be individual variation in knee biomechanics, it may be slight differences in the exact fall, it may be other host factors, such as sex hormones that alter tendon and perhaps ligament function.
Then: Certain classes of skis may (or may not) increase the risk of those kinds of falls in certain groups of people. Get the difference? Shaped skis aren't like death and taxes. (Whatever "shaped" means; any ski ever made has a sidecut. Is 15 m the cutoff between happy knees and catastrophe? Any other ski attributes figure in?)
Second, Mr. Howell's statistical explanation sounds very sophisticated, but it approaches gobbledygook in places. The whole business of weighting, for instance, is fraught with problems of classification bias, even when purely numerical methods are deployed, and in any case you don't "weight" a frequency or a degree of severity unless you're just being totally arbitrary. In multiple regression, for example, you can use the standardized beta coefficients that accompany each independent variable (loosely, the slope of a line, back from high school graphing) as exploratory weights. But these approaches are completely dependent on which variables you put in, and the weights can shift dramatically if you swap one variable for another, or use two variables that explain similar things (a problem called collinearity). The old saw applies: garbage in, garbage out.
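Here's a minimal sketch of that instability, on synthetic data (not any real ski-injury dataset). The outcome is built exactly as 2·x1 + 1·x2, where x2 is nearly a copy of x1. Fit both predictors and x1's weight is 2; drop the collinear x2 and x1's weight jumps to roughly 3.

```python
def ols_two_predictors(x1, x2, y):
    """Coefficients of y ~ x1 + x2 via the centered normal equations."""
    n = len(y)
    mx1, mx2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((b - mx2) ** 2 for b in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - mx2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

def ols_one_predictor(x, y):
    """Slope of y ~ x alone: cov(x, y) / var(x)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

x1 = [float(i) for i in range(20)]
x2 = [v + 0.1 * (-1) ** i for i, v in enumerate(x1)]  # nearly identical to x1
y = [2 * a + b for a, b in zip(x1, x2)]               # built as 2*x1 + 1*x2

b1, b2 = ols_two_predictors(x1, x2, y)  # x1's weight is 2.0 here...
slope = ols_one_predictor(x1, y)        # ...but ~3.0 once x2 is dropped
```

Same data, same outcome, and x1's "importance" changes by 50% depending on what else is in the model. That's the collinearity trap.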
Third, Mr. Howell's opinions on which studies to trust, and why bigger samples are better, are misleading. A small, well-designed study may yield superior results to a big, poorly designed one. Case-control models are often useless for many problems; we worship them because we associate them with "House" and pop medical rehashes by journalists. Epidemiology rarely uses case-control designs because it's often impossible to define a real control group. The recent articles about how people who eat nuts have a lower risk of heart disease are a good example. Are the nuts then the cause? Well, the problem is that people who eat nuts also tend to differ in many other ways from people who don't, so it's virtually impossible to find a control group. Similarly, the incidence of ACL injury is not uniform across ages, sexes, skill levels, states of physical conditioning, and all those other host factors. So how do we construct a control group retrospectively, after the blown ACL? There are statistical approaches for probing causality in such situations, such as path analysis, but they're quite sensitive to the selection of variables, even to their order of entry. Figuring out how to enter variables, in fact, is more art than science. Typically, you try a whole slew of favorite algorithms and pick the one that seems to favor your hypothesis.
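The nut-eaters problem is easy to fake up in a few lines. All numbers below are invented: a hidden "healthy habits" factor drives both nut eating and heart disease, nuts themselves do nothing, and the crude nut-eaters-vs-everyone-else comparison still shows a big "benefit" that vanishes once you compare like with like.

```python
import random

random.seed(42)
people = []
for _ in range(50_000):
    healthy = random.random() < 0.5  # hidden host factor
    eats_nuts = random.random() < (0.7 if healthy else 0.2)
    disease = random.random() < (0.05 if healthy else 0.20)  # nuts irrelevant
    people.append((healthy, eats_nuts, disease))

def disease_rate(rows):
    rows = list(rows)
    return sum(d for _, _, d in rows) / len(rows)

# Crude comparison: nut eaters look protected...
crude_nuts = disease_rate(p for p in people if p[1])
crude_no_nuts = disease_rate(p for p in people if not p[1])

# ...but within a stratum of the hidden factor, the "effect" disappears.
strat_nuts = disease_rate(p for p in people if p[0] and p[1])
strat_no_nuts = disease_rate(p for p in people if p[0] and not p[1])
```

Real studies try to patch this with matching or adjustment, but you can only adjust for the confounders you thought to measure.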
Fourth, statistical "power" does not mean the same thing as saying "Oh, he's very powerful." Statistical power is the ability to detect small differences that are in fact real. It is associated with sample size, but also with a lot of other qualities of the sample. And it doesn't mean that the real difference has any biological meaning, just that the difference is there. Television ownership is highly correlated with height. A bigger sample of owners gives us more statistical power to discern how many televisions produce how much growth. So there's a real difference.
Only, uh, hmmm. How does that work biologically? Well, it turns out that there's a so-called hidden variable. Income lets us buy more TVs, and also more and better food for our kids. Oops. There goes that paper with the sample of 100,000 showing that TVs cause ACL injuries... (In fact, I'd bet off the top of my head that there's a correlation between hours of TV watched and risk of ACL injury.)
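On the power point, a back-of-envelope calculation (illustrative numbers, not from any actual study) shows how sample size manufactures "detectability" for an effect too small to matter: a difference of 5% of a standard deviation is nearly invisible at n = 100 per group and a near-certain "significant finding" at n = 50,000 per group.

```python
import math
from statistics import NormalDist

def power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test for a difference in means
    (ignores the negligible probability mass in the opposite tail)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - abs(delta) / se)

tiny_effect = 0.05  # 5% of a standard deviation: biologically negligible
low = power_two_sample(tiny_effect, sd=1.0, n_per_group=100)
high = power_two_sample(tiny_effect, sd=1.0, n_per_group=50_000)
# low is barely above the false-positive rate; high is essentially 1.
```

Power tells you whether you can see the difference, not whether the difference is worth seeing.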
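And the hidden-variable story itself takes about twenty lines to demonstrate. Everything here is invented for illustration: income drives both TV ownership and (via nutrition) height, TVs and height correlate strongly, and the correlation collapses once income is partialled out.

```python
import math
import random

random.seed(1)
n = 2000
income = [random.random() for _ in range(n)]
tvs = [inc + 0.2 * random.gauss(0, 1) for inc in income]
height = [inc + 0.2 * random.gauss(0, 1) for inc in income]

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def partial_corr(a, b, z):
    """Correlation of a and b after removing the linear effect of z."""
    rab, raz, rbz = corr(a, b), corr(a, z), corr(b, z)
    return (rab - raz * rbz) / math.sqrt((1 - raz ** 2) * (1 - rbz ** 2))

raw = corr(tvs, height)                      # spuriously large
controlled = partial_corr(tvs, height, income)  # collapses toward zero
```

Which is the whole point: the n = 100,000 study measures the correlation beautifully and says nothing about what caused it.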