ArmCare FX: The Latest Advancement in Baseball Performance Science

For 20 years, I have been reading research to advance throwing and hitting performance, and to be honest... I had been wasting my time! Like most people communicating findings from studies, I put a lot of stock in "statistically significant" data, thinking it was "practically significant" data. About five years ago, I changed my tune and started reporting the most critical metric in science in my publications, because I want my work to be relevant, meaningful, and applicable on a large scale. This metric is the Effect Size measure (ES), which can be calculated in several ways depending on two samples' means, standard deviations, and sample sizes.

EFFECT SIZE IS THE ONLY SCIENTIFIC FINDING THAT MATTERS IN INTERVENTION-BASED RESEARCH

In a nutshell, the ES calculation puts all studies on an even playing field because it normalizes the distance between two means. With enough people in an investigation, the chances of finding statistical significance are high even when the effect itself is small. When we say statistical significance, we mean that the hypothesis test for a difference between two means returned a probability value of less than 5%. Essentially, there was a 5% chance or less of a Type I Error: crediting the intervention for a difference that was really a random occurrence, a false positive.

I always find it interesting when studies produce p-values just above that cutoff, say p = 0.10. By that logic, there is a 90% chance the change occurred because of the intervention and not random effects. To me, that's pretty good, but in the research world, a study reporting a p-value over 0.05 essentially tells you there are no actual differences related to the intervention.

Now that we have statistical significance defined, let's get back to ES, sometimes expressed as the Cohen's d statistic, or simply the d statistic. Once you identify statistically significant data, meaning the probability of making a Type I Error is less than 5% (p < 0.05), it's time to see whether the findings are meaningful in reality. Sports medicine has long used ES calculations to determine the impact of treatments, and sports science must focus on this measure as well. In fact, if you try to publish in the Journal of Strength and Conditioning Research, your paper will not be accepted without reporting your ES values. This requirement is excellent news, because we cannot rely on statistically significant data with minimal effects, and unfortunately, those are the findings being professed throughout baseball.

EFFECT SIZE CALCULATIONS ARE THE SAME AS SCOUTING ON A 20-80 SCALE

I love ES calculations because they sit on a scale much like scouting's. Scouts use a 20-80 scale, with 50 being average. The effect size scale runs from 0.2 (small effects) through 0.5 (medium effects) to 0.8 (large effects). Some studies report calculations above 0.80, which is great to see; those are the hall of fame studies. A value that high means the normalized distance between the means is considerable. As a result, the difference caused by the intervention translates readily to the practical side of coaching, as the outcomes could be advantageous for most athletes.
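To make the statistics above concrete, here is a minimal sketch in Python of the kind of hypothesis test that produces a p-value. The velocity numbers are hypothetical; they simply stand in for a control group and an intervention group from a training study.

```python
import numpy as np
from scipy import stats

# Hypothetical throwing-velocity samples (mph) for a control group and
# a group that completed some training intervention
control = np.array([82.1, 83.4, 81.9, 84.0, 82.7, 83.1, 82.5, 83.8])
intervention = np.array([84.2, 85.1, 83.9, 86.0, 84.6, 85.3, 84.1, 85.7])

# Independent two-sample t-test; by convention, p < 0.05 is the cutoff
# for declaring the difference "statistically significant"
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```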
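Significance alone, though, says nothing about the size of the effect. The sketch below shows one common variant, the pooled-standard-deviation version of Cohen's d, built from exactly the ingredients mentioned earlier: the two samples' means, standard deviations, and sample sizes.

```python
import numpy as np

# Same hypothetical velocity samples as above
control = np.array([82.1, 83.4, 81.9, 84.0, 82.7, 83.1, 82.5, 83.8])
intervention = np.array([84.2, 85.1, 83.9, 86.0, 84.6, 85.3, 84.1, 85.7])

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of two samples."""
    n_a, n_b = len(a), len(b)
    # The pooled SD weights each sample's variance by its degrees of freedom,
    # which normalizes the distance between the two means
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

print(f"Cohen's d = {cohens_d(intervention, control):.2f}")
```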
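And to close the loop on the scouting analogy, a small hypothetical helper can slot a d value into Cohen's conventional bands, much the way a scout slots a tool onto the 20-80 scale. The scouting labels in the strings are just the analogy from this article, not a real conversion.

```python
def interpret_d(d):
    """Hypothetical helper: label an effect size with Cohen's conventional bands."""
    magnitude = abs(d)
    if magnitude < 0.2:
        return "trivial"
    if magnitude < 0.5:
        return "small"
    if magnitude < 0.8:
        return "medium (roughly a scout's 50: average)"
    return "large (the hall of fame studies)"

print(interpret_d(0.85))  # -> large (the hall of fame studies)
```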