Friday, November 1, 2013

Say it ain't probabilistically so


The above image is from a featured article in The Economist. The article pointed to the substantial rate at which false positives (thinking something is true when it is not) and creative or misleading correlations are published in the scientific literature.
Academic scientists readily acknowledge that they often get things wrong.  But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further.  Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question.  There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.  
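
To see how quickly false positives can come to dominate, consider a back-of-the-envelope calculation in the spirit of The Economist's article. The specific numbers below (the share of true hypotheses, the statistical power, the significance threshold) are illustrative assumptions, not figures from any particular study:

```python
# Illustrative sketch: how false positives can accumulate in a literature.
# All numbers below are assumptions chosen for illustration only.

hypotheses = 1000        # hypotheses tested across a field
prior_true = 0.10        # assume 10% of tested hypotheses are actually true
alpha = 0.05             # conventional false-positive (Type I error) rate
power = 0.80             # chance a real effect is detected (1 - Type II rate)

true_hyps = hypotheses * prior_true
false_hyps = hypotheses - true_hyps

true_positives = true_hyps * power      # real effects correctly detected
false_positives = false_hyps * alpha    # spurious "findings"

share_false = false_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Share of positive results that are false: {share_false:.0%}")
# Roughly a third of positive results are false under these assumptions,
# before publication bias even begins favoring the surprising ones.
```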
The Economist attributes the rapid publication of potentially wrong results to the professional interests of scholars who need to publish to advance their careers.
Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems.   
There is, of course, something regrettable to be said about the quality of the research that is at times published.  Still, that the information is published at all is not inherently a problem.

The pressure to publish, though stressful, often keeps scientists working within a given area of thought and controversy rather than wandering about the knowledge world.  Areas of controversy often signal that the resulting scientific consensus is meaningful for decision making out in the 'real world.'



xkcd comics (xkcd.com)
Thus, every article must be placed in the context of an ongoing discussion.  When articles are read in that context, surprising or questionable results are easier to spot.

The problem arises when individual research papers are taken out of their debate and social context and applied directly to decision making in society, with large-scale effects.  And, as The Economist briefly mentions, this occurs regularly.


Consider, for instance, that publication in a peer-reviewed journal is the minimum requirement for use of a scientific finding in the creation of catastrophe models for ratemaking in Florida.  Even if the study is fiercely debated, it remains fair game for use.  The decision to pick a paper out of context and use it for ratemaking has far-reaching consequences socially, politically, and economically.
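
To make the stakes concrete, here is a toy sketch of how swapping one paper's estimate into a rate calculation can move the result. Everything here is hypothetical: the function, the frequencies, and the loss figure are invented for illustration, and real catastrophe models are vastly more elaborate:

```python
# Toy illustration (all numbers hypothetical): how one paper's estimate
# of event frequency can shift a pure premium.

def pure_premium(annual_frequency, expected_loss_per_event):
    """Expected annual loss: the simplest starting point for a rate."""
    return annual_frequency * expected_loss_per_event

loss_per_event = 50_000      # assumed expected insured loss per event ($)

consensus_frequency = 0.010  # e.g., a long-run historical landfall rate
one_paper_frequency = 0.016  # a single study arguing activity has risen

base = pure_premium(consensus_frequency, loss_per_event)
alt = pure_premium(one_paper_frequency, loss_per_event)

print(f"Premium under the consensus estimate: ${base:,.0f}")
print(f"Premium under the single paper:       ${alt:,.0f}")
print(f"Change: {alt / base - 1:+.0%}")
# A 60% swing in expected loss, driven entirely by which paper was chosen.
```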


As a remedy, The Economist seeks to revamp the peer review process applied to published work and calls for greater replication of scientific studies.


The idea that the peer review process leaves something to be desired is not new.   In this way, The Economist joins a long line of interests that wish the published science said something other than what it does.  There is certainly always room for improvement.  But this topic is itself vast, with substantial context, so I won't go into it much further.


Replication may be an important remedy in studies that involve clinical trials and biomedical science (on which the article heavily focuses).  Outside of those fields, though, scientific studies very often rely on predictive modeling.  Given the same model and the same data, replication of results is trivial: rerunning the model is certain to reproduce them.  Among models with similar assumptions, replication is also likely.  Unfortunately, in this type of research replication does not equate to truth about the future.  Replication is therefore not always a suitable response to the problem of questionable research results; it is most likely a discipline-specific solution.
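
A contrived sketch of the point, with entirely hypothetical numbers: two runs of the same deterministic model replicate each other perfectly, yet both can be wrong about what actually happens next:

```python
# Hypothetical sketch: replication of a deterministic model is guaranteed,
# but agreement between runs says nothing about the future.

def trend_model(year, slope=2.0, intercept=100.0):
    """A deterministic forecast: same inputs, same outputs, every time."""
    return intercept + slope * year

run_a = [trend_model(y) for y in range(10)]
run_b = [trend_model(y) for y in range(10)]

assert run_a == run_b   # "replication" succeeds, trivially and certainly

# Suppose the world behaves nonlinearly after year 10 (again, hypothetical):
actual_year_15 = 100.0 + 2.0 * 15 + 25.0   # an unmodeled shift
forecast_year_15 = trend_model(15)

print(f"Replicated forecast for year 15: {forecast_year_15:.1f}")
print(f"What actually happened:          {actual_year_15:.1f}")
# Both runs agree with each other and still disagree with reality.
```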


Again, where decision making requires reflecting on the state of scientific knowledge, looking to where scientists agree and disagree is more promising than relying on any one paper, on the peer review process, or on the ability to replicate.
