A recent press release touted a product and claimed that research confirmed its benefits. In fact, it was only a preliminary, short-term trial with a small number of horses. It hadn’t been published, and the release didn’t give enough detail to interpret the results. To top it off, the condition the product was supposed to treat hadn’t improved; it simply didn’t worsen.
If you’re considering a product that’s supposed to be backed by research, the first thing you need to do is ask to see the actual research. Next, ask who paid for it. That’s usually the company, which isn’t necessarily bad, but it can lead to subtle bias in how the results are described.
Research that has been presented at a peer-reviewed meeting or formally published carries the most weight. Without too much effort, you can learn at least to get a “feel” for how good a research study is.
Start by reading the abstract and conclusions. These tell you what the results were and how they’re being interpreted. The more horses involved, the better, of course, although equine research is expensive, so groups are often small.
Look for the use of a control group: a group of horses of the same type that were not treated. In lieu of a control group, studies may use what’s called a switch-back design, in which two or more groups of horses each get a specific treatment and are then rotated to one of the other treatments so their responses can be compared.
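To make the rotation concrete, here is a minimal sketch of a two-period switch-back schedule, written in Python. The horse names and the “supplement”/“placebo” treatments are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a two-period switch-back (crossover) schedule.
# Horse names and treatments are hypothetical, for illustration only.
schedule = {
    "Period 1": {"Horse A": "supplement", "Horse B": "supplement",
                 "Horse C": "placebo",    "Horse D": "placebo"},
    "Period 2": {"Horse A": "placebo",    "Horse B": "placebo",
                 "Horse C": "supplement", "Horse D": "supplement"},
}

# Every horse receives both treatments, in opposite orders, so each
# animal serves as its own comparison across the two periods.
for period, assignments in schedule.items():
    for horse, treatment in assignments.items():
        print(f"{period}: {horse} receives {treatment}")
```

The point of the rotation is that each horse’s response to one treatment can be compared directly with its own response to the other, which helps cancel out individual differences between animals.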
It’s not good enough to simply state something like “there were fewer hoof cracks in the treatment group.” You need details. Did they have fewer hoof cracks or better feet to begin with? Did they all get the same diet, exercise and hoof care? Did they have the same environmental conditions? The next question: exactly how many more cracks did the untreated group have? Was the difference statistically significant?
To be a true scientific study, the results should be subjected to statistical analysis. Statistics are simply formulas for determining how likely it is that the results are real. An example is the commonly seen “P” value. P stands for probability, specifically the probability that the observed results occurred simply by chance. The smaller the P, the less likely the results were random and the more likely the treatment is responsible for them. P must be less than 0.05 for results to be considered statistically significant, meaning there is less than a 5% chance that the results seen were caused by chance.
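As a concrete illustration, here is a minimal Python sketch that computes a P value by comparing two groups with a standard t-test from the SciPy library. The hoof-crack counts are invented for demonstration and do not come from any actual study.

```python
# Illustrative only: these hoof-crack counts are made up to show how
# a P value is read; they are not from any real study.
from scipy.stats import ttest_ind

treated   = [2, 1, 3, 2, 1, 2]   # hypothetical cracks per treated horse
untreated = [4, 3, 5, 4, 3, 4]   # hypothetical cracks per untreated horse

result = ttest_ind(treated, untreated)
print(f"P = {result.pvalue:.4f}")

# P < 0.05 means under a 5% chance the observed difference arose by
# chance alone, so it would be called statistically significant.
if result.pvalue < 0.05:
    print("Statistically significant (P < 0.05)")
else:
    print("Not statistically significant (P >= 0.05)")
```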
Reporting statistical results is an area where the authors’ bias can appear. As noted above, if the P value is higher than 0.05, the results are not statistically significant. Period. If the goal was to show that something had no real effect, the authors will say just that: “the results were not statistically significant.”
However, if the authors were trying to show that something was true or had a real effect, they might instead say “there was a trend toward fewer hoof cracks in the treatment group” or “there were fewer hoof cracks in the treatment group, which approached statistical significance.” All of these statements technically say the same thing, but with different emphasis.
So, by all means look for products backed by research, but look beyond the banners to see the quality of the research.
Article by Eleanor Kellon, VMD, our Veterinary Editor.