Saturday 28 March 2015

Scientific snobbery?

Unfortunately we’re seeing more and more scientific snobbery of late. Some academics believe that if research is not randomised, masked, double-blind, and a plethora of other terms that no one outside of academia fully understands, then it can be dismissed out of hand. These are nearly always the same academics who insist research is peer-reviewed a hundred times before accepting any findings. We find this is especially true when they are inclined to disagree with the conclusions, but oddly enough, they are also invariably reluctant to review the research themselves.

This quote comes from a blogger who criticises a publication in the British Medical Journal calling on UK dyslexia charities to present a balanced view of the evidence for coloured lenses and overlays (1).
It's good that people are debating these issues. However, demanding high-quality research is not scientific snobbery, and a failure to adhere to these principles can cost lives and waste resources.
The first claim the author makes is that some academics 'dismiss out of hand any evidence that is not randomised, double blind and a plethora of other terms that no one outside of academia fully understands'. Not so; sometimes double-blind randomised controlled trials are not possible, and then you have to use the best available evidence, but it is always better to be aware of the shortcomings of the data.
When well-conducted double-blind RCTs are possible, they are the best form of evidence available. The figure below, which looks at different trials of a surgical treatment for hypertension, illustrates this. To the left are non-randomised and non-'blinded' studies and to the right more methodologically rigorous ones. It can be seen that the treatment effect gradually disappears as the trial methodology becomes more rigorous. A lack of 'scientific snobbery' would have resulted in more people being exposed to useless and potentially dangerous surgery.
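This shrinking-effect pattern can be sketched with a toy simulation. The numbers below are entirely hypothetical and not taken from any real trial: suppose a treatment truly does nothing, but unblinded assessors score treated patients slightly more favourably (an assumed bias of +0.5 units). The unmasked 'trials' then show an apparent benefit that vanishes once blinding removes the bias.

```python
import random

random.seed(1)

def simulate_trial(n, blinded):
    """Simulate one trial of a treatment with zero true effect.

    When assessors are unblinded, their outcome scores drift upward
    for treated patients (an illustrative, assumed bias of +0.5).
    """
    bias = 0.0 if blinded else 0.5
    treated = [random.gauss(bias, 1) for _ in range(n)]
    control = [random.gauss(0.0, 1) for _ in range(n)]
    # Apparent treatment effect: difference in group means.
    return sum(treated) / n - sum(control) / n

unblinded = [simulate_trial(50, blinded=False) for _ in range(200)]
blinded = [simulate_trial(50, blinded=True) for _ in range(200)]

print("mean apparent effect, unblinded:", sum(unblinded) / len(unblinded))
print("mean apparent effect, blinded:  ", sum(blinded) / len(blinded))
```

The unblinded trials report an average 'effect' near the assumed bias, while the blinded trials hover around zero, even though the treatment is identical in both cases.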



There are numerous other examples in the literature of effects found in small-scale observational studies disappearing when tested by properly conducted RCTs. This matters. People die, time is wasted and money is squandered because of ineffective treatments that have not been properly tested.

The author goes on to say: 'These are nearly always the same academics who insist research is peer-reviewed a hundred times before accepting any findings.'
Not exactly; two good peer reviews are usually enough for most journal editors. Peer reviewing a paper is hard work - I know because I sometimes have to do it. I find that I have to read a paper multiple times, make notes and 'sleep on it' before any flaws, if present, emerge. Despite this, I have missed defects in some studies. Failures of the peer-review process have made it easier for proponents of treating visual stress to ameliorate reading difficulties to claim that the approach is scientifically validated. For example, in my review of the 2002 paper by Bouldoukian and colleagues (2) I highlight how the authors were allowed to get away with an extravagant claim. They argued that although the study was not masked, so both the experimenters and the subjects knew which was the experimental overlay and which was the placebo, they were in effect able to calibrate the placebo effect for each intervention and ensure the two were matched, meaning that any difference seen could be attributed to colour. This is an outrageous claim that should perhaps have been picked up by the peer-review process. If you really could measure and manipulate the placebo effect in each arm of a trial, obviating the need for masking, that would be worthy of a Nobel prize.
So, peer reviewing is important but it is not the end of the story; errors are bound to creep through and appear in the published literature. After all, peer reviewing is just what it says: assessment of a paper by one's peers. The mere fact that a study appears in the peer-reviewed literature does not mean you should always take its findings at face value. The next stage is critical appraisal of the paper and responses to the findings, either through letters or online forums.
Finally, there may be many studies out there in different journals, with different findings, that can be combined and analysed - so-called meta-analysis.
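As a sketch of what that combining involves, the following shows fixed-effect inverse-variance pooling, one standard meta-analytic technique: each study's estimate is weighted by the inverse of its squared standard error, so precise studies count for more. The three studies and their numbers are invented purely for illustration.

```python
import math

# Hypothetical effect estimates (e.g. log odds ratios) and their
# standard errors from three imaginary studies -- illustrative only.
studies = [(0.40, 0.20), (0.10, 0.15), (-0.05, 0.10)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the pooled estimate sits closest to the most precise (smallest-SE) study, and how the pooled standard error is smaller than any single study's, which is the point of combining evidence.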
'a plethora of other terms that no one outside of academia fully understands'
I do not agree that no one outside academia is capable of understanding this process. Anyone capable of putting a bet on the 2.30 at Kempton Park has the ability to critically appraise much of the literature on coloured lenses and overlays, perhaps better than some of the peer reviewers themselves. An understanding of odds ratios is very helpful in evaluating research.
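To make the betting analogy concrete, here is a hypothetical 2x2 table from an imaginary overlay trial; the counts are invented for illustration. The odds in each arm are exactly the kind of odds a punter deals in, and the odds ratio is simply one divided by the other.

```python
# Hypothetical 2x2 table from an imaginary overlay trial -- for illustration.
#                 improved   not improved
# overlay            30           20
# placebo            20           30

odds_overlay = 30 / 20  # 1.5, i.e. odds of 3-to-2 on improving
odds_placebo = 20 / 30  # about 0.67, i.e. odds of 3-to-2 against

# The odds ratio compares the odds of improvement in the two arms.
odds_ratio = odds_overlay / odds_placebo

print(odds_ratio)  # 2.25
```

An odds ratio of 1 would mean the overlay made no difference; here the (made-up) data would suggest improvement is 2.25 times as likely, in odds terms, with the overlay.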
Finally, the blogger goes on to say that they 'are also invariably reluctant to review the research themselves'.
Again, not true. Please see my reviews of some of the key papers in my posts of March and February 2015. There are more to come. There are two forms of intellectual laziness: one is to uncritically accept the findings of these studies, the other is to reject them uncritically. The one is no better than the other.


1. Henderson LM, Taylor RH, Barrett B, Griffiths PG. Treating reading difficulties with colour. BMJ. 2014;349:g5160.

2. Bouldoukian J, Wilkins AJ, Evans BJW. Randomised controlled trial of the effect of coloured overlays on the rate of reading of people with specific learning difficulties. Ophthalmic Physiol Opt. 2002;22(1):55–60.
