The Daily Dose • Sunday, April 29

Research Statistics for the Non-Statistician

By Amanda Decimo, Nurse Anesthetist, from the IARS 2018 Annual Meeting*

Yesterday’s panel discussion helps non-statistician clinicians prepare their research for the daunting task of statistical review. Panel moderator Edward J. Mascha, PhD, highlights the Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines, which aid researchers, biostatisticians, and editors in reporting basic statistical methods and results. He describes how statistical errors are prevalent throughout the medical literature and usually involve basic statistics.

Today’s medical research uses electronic medical records (EMR) and big data sets involving richer and more complex data. These data sets call for more careful analyses and review by experienced statisticians. Research shows that a journal’s published statistical guidelines alone do not significantly improve statistical reporting; a dedicated statistical reviewer evaluating journal submissions, however, does improve reporting quality.

Dr. Mascha describes how to improve study design for randomized controlled trials and observational studies. Key reporting errors include failing to specify primary and secondary study outcomes from the start, reporting p-values without confidence intervals for the treatment effect (see the sketch below), and failing to control for confounding in observational studies. Dr. Mascha also reminds us that non-randomized studies can only report associations and should not claim causation.
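
As a minimal, hypothetical illustration of that reporting point (the group names, simulated values, and pooled-variance interval below are my own assumptions, not from the panel), the sketch reports a treatment effect with its 95% confidence interval alongside the p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treatment = rng.normal(loc=118, scale=12, size=50)  # simulated outcome, treatment arm
control = rng.normal(loc=124, scale=12, size=50)    # simulated outcome, control arm

# Point estimate of the treatment effect (difference in means).
diff = treatment.mean() - control.mean()

# Standard error and 95% confidence interval (pooled-variance t interval).
n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

# The p-value alone says nothing about the size of the effect.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"Mean difference = {diff:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f}), p = {p_value:.3f}")
```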

Clinician researchers are advised on some key points to improve the statistical review process: collaborate with biostatisticians, follow the Anesthesia & Analgesia statistical reporting guidelines, treat confounding as a beast to be tamed, and own their manuscripts’ limitations.

Thomas R. Vetter, MD, MPH, describes 15 common mistakes in clinical research. Failure to examine and assess similar, preexisting literature is a common mistake. Researchers need to understand what previous studies did correctly and whether their current study is an extension of previous work. Another mistake he highlights is the failure to examine the normality of data, particularly with continuous data. How are the data determined to be normally distributed? Too much statistical evidence is better than not enough.
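
As one hedged illustration of that normality check (the variable name and simulated values below are my own, not the panel’s), a Shapiro-Wilk test combined with visual inspection is a common way to assess whether continuous data are approximately normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
systolic_bp = rng.normal(loc=120, scale=15, size=60)  # illustrative continuous data

# Shapiro-Wilk test: a small p-value suggests the data depart from normality.
w_stat, p_value = stats.shapiro(systolic_bp)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")

# Formal tests should be paired with visual checks (histograms, Q-Q plots)
# before choosing between parametric and non-parametric methods.
```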

Errors in data analysis include choosing the wrong statistical test. Dr. Vetter gives the example of a study of sequential blood pressure measurements performed on the same patients. Researchers often run a series of t-tests between sequential measurements on such data when ANOVA is the correct statistical test (see the sketch below). Documentation and presentation of data often include reporting errors: p-values should be reported, but alongside point estimates and confidence intervals. Dr. Vetter also warns about studies using composite outcomes (i.e., casting a broad net of outcomes). While composite outcomes make it easier to demonstrate a positive effect, they are not always accurate.
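
A minimal sketch of that contrast, assuming simulated repeated blood pressure readings and using a repeated-measures ANOVA from statsmodels (the column names and data are hypothetical, not taken from the study Dr. Vetter cites):

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_patients, n_timepoints = 20, 4
bp = rng.normal(120, 10, size=(n_patients, n_timepoints))  # simulated readings

df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_timepoints),
    "time": np.tile(np.arange(n_timepoints), n_patients),
    "bp": bp.ravel(),
})

# Error-prone approach: repeated paired t-tests between adjacent time points
# inflate the overall type I error rate.
for t in range(n_timepoints - 1):
    _, p = stats.ttest_rel(bp[:, t], bp[:, t + 1])
    print(f"Paired t-test, time {t} vs {t + 1}: p = {p:.3f}")

# Preferred approach: a single repeated-measures ANOVA across all time points.
anova = AnovaRM(df, depvar="bp", subject="patient", within=["time"]).fit()
print(anova)
```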

Won Ho Kim, MD, PhD, further describes how the SAMPL guidelines help researchers define the primary analysis, primary and secondary outcomes, and type I and type II error rates; determine an adequate sample size; and choose the correct statistical methods to fit their study data.

Dr. Kim concludes the discussion with several key takeaway points. He urges reviewers to use the SAMPL guidelines, but also to seek out guidelines for specific statistical analyses. Describe the statistical analysis in sufficient detail. Draw conclusions specific to the study hypothesis. Rather than accepting the null hypothesis, it is better to state that “there is no significant difference between groups.”

Anesthesia & Analgesia regularly publishes basic statistical tutorials to help members of the IARS community avoid these statistical pitfalls.

*Coverage of the Panel session, Research Statistics for the Non-Statisticians