Sunday 16 August 2015

More negative studies and why it is a good thing

An important study has appeared in the journal PLoS One that shows that fewer trials in the field of cardiovascular medicine are reporting positive outcomes. Why is this good news? And what does it have to do with the treatment of visual stress?

First, some background. As many as 50% of published studies contain false positive results, and the situation is particularly bad in the fields of neuroscience and psychology. While some false positive results are inevitable, the numbers at present suggest some systematic biases in the literature.
For example, in the field of fMRI, more positive findings are reported than the study designs can support. Trials have a certain power, or ability to detect significant differences, which depends on the sample size and the variability of the population being studied. Naturally, you would expect smaller studies, with less statistical power, to find fewer positive effects. Yet John Ioannidis has shown evidence of bias (in the statistical sense of the word) operating: small studies, with a lower power to detect signal changes, are finding as many positive results as larger, more powerful studies. You can download the study here.
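The link between sample size and power is easy to see with a toy simulation. The sketch below is purely illustrative (it is not the method of any study mentioned here): it assumes a genuine effect of half a standard deviation and counts how often a small versus a large study "detects" it, using a crude criterion of the sample mean sitting about two standard errors above zero.

```python
import random
import statistics

random.seed(1)

def detection_rate(n, true_effect=0.5, trials=2000):
    """Fraction of simulated studies of size n that detect a true
    effect of size true_effect (in standard-deviation units)."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if mean / se > 1.96:  # crude ~p < 0.05 significance criterion
            hits += 1
    return hits / trials

print("n = 10 :", detection_rate(10))   # low power: misses the effect often
print("n = 100:", detection_rate(100))  # high power: nearly always detects it
```

If small, underpowered studies like the n = 10 case were nevertheless reporting positives as often as the n = 100 case, something other than the data would have to be driving the results.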
So how has this state of affairs come about? Not through fraud, as you might think, although that can happen. Human nature and the subjective biases that affect us all are probably the culprits.
The first problem is that studies reporting positive outcomes are more likely to be published: so-called publication bias. One of the things you have to do when reviewing the literature systematically is to look out for unpublished material that may never have found a home because it reported negative findings.
Another important factor is a flexible approach to data analysis. If you do an exploratory study in which you measure multiple variables and analyse your data in multiple different ways, you are quite likely to find some positive results which pass the arbitrary criterion for statistical significance of 0.05. For example, you could stratify your groups in multiple different ways, or you could have multiple endpoints and not declare in advance which was the primary endpoint.
This has been compared to the man driving past a barn who sees lots of targets with arrows in the centre and assumes the farmer must be a pretty good shot. Then, as he drives a little further, he sees another wall of the barn, where the farmer is painting targets around arrows he has already fired into the wall! It's a bit like that with trials if you allow a flexible post-hoc approach to data analysis.
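The scale of this multiple-testing problem is easy to sketch. Assuming, for illustration only, that each endpoint is an independent test carried out at the conventional 0.05 level, a few lines of Python show how quickly the chance of at least one spurious "positive" grows:

```python
def chance_of_false_positive(n_tests, alpha=0.05):
    """Probability that at least one of n independent tests of a
    true null hypothesis reaches significance at level alpha."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 10, 20):
    print(f"{n:2d} endpoints -> "
          f"{chance_of_false_positive(n):.0%} chance of a 'positive'")
```

With a single pre-declared endpoint the false positive risk is the advertised 5%; with twenty endpoints analysed flexibly it climbs to roughly 64%, even when nothing at all is going on.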
This may be acceptable for early exploratory studies within a field, but such studies are at best hypothesis-generating, and the results need to be confirmed by properly conducted RCTs.
To get round this problem, many funding bodies insist that researchers pre-register the trial, stating what the outcome measures will be, what subjects are going to be studied and what statistical tests will be used. This, in crude terms, is equivalent to painting the target on the wall before the arrow is fired. With this has come a reduction in positive findings, and that is a good thing: false positive studies waste resources and can endanger human life.
So what has this got to do with the treatment of visual stress? Well, many of the studies of treating 'visual stress' in dyslexia show the hallmarks of a flexible approach to data interpretation: dividing the subjects into multiple small groups, studying lots of different outcome measures, and then pouncing on those that appear positive while ignoring the rest.
Before anyone starts to take these ideas seriously, we need a randomised controlled trial with pre-registered protocols and outcome measures.

