
Research Project: Optimizing Photosynthesis for Global Change and Improved Yield

Location: Global Change and Photosynthesis Research

Title: To have value, comparisons of high-throughput phenotyping methods need statistical tests of bias and variance

Authors
item McGrath, Justin
item Siebers, Matthew
item Fu, Peng - Harrisburg University
item Long, Stephen - University of Illinois
item Bernacchi, Carl

Submitted to: Frontiers in Plant Science
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 12/20/2023
Publication Date: N/A
Citation: N/A

Interpretive Summary: New measurement techniques are constantly being developed. To test how well a new technique performs, it is often compared with current, commonly used techniques using experimental designs and statistical analyses tailored for method comparison. In the submitted manuscript we show that the most commonly used statistical analysis is flawed and can reject a new, more precise method in favor of an old, less precise method. We present alternative analyses that are simple and do not have this flaw. The audience interested in this work is broad, since method-comparison experiments are performed in all scientific fields. The manuscript describing the flawed statistical analysis has been cited more than 50,000 times and is one of the most highly cited manuscripts of all time. Thus, the improved analyses presented here could be valuable for many researchers.

Technical Abstract: Method comparison is a cornerstone of scientific improvement. Although Pearson's correlation coefficient (r) is commonly used to assess method quality, Bland and Altman demonstrated that it can be inappropriate and introduced the assessment of bias and limits of agreement (LOA). This approach is extremely common, and their manuscript is one of the top 50 cited papers of all time. However, the LOA approach is overly conservative and may result in falsely rejecting a better method. Here we show that using LOA leads to incorrect conclusions when testing new light detection and ranging (LiDAR) methods and when reanalyzing the original Bland and Altman data set. An alternative approach, statistically comparing the variances of the methods, requires repeated measurements of the same subject but avoids incorrect rejection of a better method. Variance comparison is arguably the most important component of method validation and, thus, when repeated measures are possible, comparison of variances should be used in preference to LOA. The statistical tests for comparing variances presented here are well established, easy to interpret, and ubiquitously available.
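
For readers who want to see the two analyses side by side, the sketch below (Python, using NumPy and SciPy) computes Bland-Altman bias and 95% limits of agreement for paired data, and compares pooled within-subject variances of two methods with a simple two-sided F-test given repeated measurements. This is an illustration of the general approach described in the abstract, not code from the manuscript; the function names, the balanced design (equal numbers of subjects and replicates for each method), and the choice of an F-test are assumptions.

    import numpy as np
    from scipy import stats

    def bland_altman_loa(a, b):
        # Bias and 95% limits of agreement (Bland & Altman) for paired data.
        diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    def compare_within_subject_variances(rep_a, rep_b):
        # F-test of H0: equal within-subject (repeatability) variance.
        # rep_a, rep_b: 2-D arrays of equal shape, rows = subjects,
        # columns = repeated measurements of the same subject by each method.
        var_a = np.var(rep_a, axis=1, ddof=1).mean()  # pooled within-subject variance, method A
        var_b = np.var(rep_b, axis=1, ddof=1).mean()  # pooled within-subject variance, method B
        n_subj, n_rep = np.shape(rep_a)
        df = n_subj * (n_rep - 1)                     # degrees of freedom of each pooled variance
        f_stat = var_a / var_b
        # Two-sided p-value under H0: variance ratio follows F(df, df)
        p = 2 * min(stats.f.cdf(f_stat, df, df), stats.f.sf(f_stat, df, df))
        return f_stat, p

    # Hypothetical example: method B is twice as noisy as method A.
    rng = np.random.default_rng(0)
    truth = rng.normal(50.0, 10.0, size=20)                 # 20 subjects
    rep_a = truth[:, None] + rng.normal(0.0, 1.0, (20, 3))  # 3 replicates, sd = 1
    rep_b = truth[:, None] + rng.normal(0.0, 2.0, (20, 3))  # 3 replicates, sd = 2
    print(compare_within_subject_variances(rep_a, rep_b))

The F-test shown here assumes approximately normally distributed measurement errors; more robust alternatives such as Levene's test are equally widely implemented, consistent with the abstract's point that suitable variance tests are well established and ubiquitously available.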