Why scientists seem to change their minds (1)
Approximately once a year, I get into an argument with my father about the reliability of scientific evidence. My dad likes to tell me that scientists are always getting it wrong and, therefore, scientific knowledge should not be put on a pedestal above other forms of knowledge. It can certainly seem that scientists are constantly backtracking, but I would argue that this has more to do with imperfect humans (whose values and beliefs influence how they do research and how they interpret scientific findings) than with a flaw in the scientific method per se.
The reason that I believe we need to use scientific methods to evaluate things is that human beings are extremely susceptible to prejudice, groupthink, placebo effects, confirmation bias and a whole host of other factors. This means that some of the things which we strongly believe to be true are in fact not! Scientific investigation attempts to overcome some of these effects to get a more objective view of an issue. A key tenet of the scientific method is that results must be reproducible given the same conditions. If a finding cannot be reproduced, it is not scientifically proven.
So if this is the case, why is it that scientists always seem to be changing their minds?! Well, there are a number of reasons, which I will outline in a series of blog posts starting below. The important thing to bear in mind is that the scientific method attempts to give objective answers to specific questions. Scientists are not perfect and therefore at times human subjectivity creeps in. But rather than using this as a reason to reject science, I suggest we concentrate on ways to improve the scientific method and also consider how it can be used to complement other forms of evidence.
Reason 1: The conditions have changed
The scientific method allows us to test whether an intervention works, in given conditions, better than a control intervention. So, for example, some research may demonstrate that a new painkiller reduces the severity of headaches better than a similarly administered placebo in a group of women between the ages of 18 and 30. Provided that this finding is reproducible, it is scientifically proven that this pill achieves the outcome of interest in these conditions. However, this does not prove that the painkiller has the same effect in other conditions. For example, the research does not tell you whether the painkiller performs better than a placebo in reducing back pain in elderly men or toothache in children. You may hypothesise that it is likely to do so, but you would need to carry out more research to demonstrate whether this is true.

Similarly, when researchers 'model' a situation, they define the exact conditions of the model. The results of such a model are true only if the assumptions (conditions) that they have defined are also true. This can be clearly seen in the economic models which failed to predict the recent banking crisis. The results of these models may well have been correct for the conditions they used; the problem is that those conditions did not reflect the real world adequately. Certain key assumptions (for example, that bonds based on sub-prime mortgages were relatively safe investments) were fundamentally wrong. For this reason, the results of the models, while correct for the conditions assumed, were not useful for predicting the future.
Look out for Reason 2 – 'They didn't ask the right questions' – tomorrow! Follow Kirsty on Twitter @kirstyevidence.
Well thought out and well put, Kirsty. Something that I often think about, which seems related to the point you have discussed here, is why scientists sometimes remain completely silent about the conditionality of their results. As you have rightly noted, scientific results generally hold true only for a specific set of circumstances and no more than that. However, it is intriguing that in a lot of research publications this caveat is never even mentioned, so the reader is left with no option but to assume that the researcher is making a 'blanket' claim. Another side of the story is the common practice in the journalistic community of using catchy captions when reporting research results. For instance, one is likely to read a headline like 'one shot of vodka enhances creativity, new research has found' above a story that says nothing about the assumptions the researchers actually made. In this case, the suspicious reader lashes out at the scientists, even though it is not their fault that the journalist has reported it that way.
These are thoughts/provocations rather than comments; please feel free to ignore!
– Arguably, science is a framework (i.e., the 'ways' and 'means') through which we generate knowledge (as an 'end'), so the notion of scientific knowledge being higher or lower than other forms of knowledge is probably not the right question – unless, of course, we are talking about knowledge pertaining to those ways and means of science (both declarative and procedural knowledge about why, when and how we 'do' science).
– I understand the sentiment behind the 'objectivity' aim, though one could propose that what you actually mean is that the research is done in a rigorous manner to reduce any form of bias. If you frame it that way, it also allows one to undertake qualitative research and/or collect subjective data – these can still be done in a rigorous manner to reduce bias.
– Your point about reliability is again on the right track, but sadly few studies are replicated (especially in social research). However, my real point on this topic is that, arguably (taking a Popperian stance on the problem of induction), we cannot 'prove' any hypothesis, merely support it, as someone might falsify it to some degree at some point in time – as has happened with various scientific 'laws'. This is quite an important distinction and it might trip you up in your example. What if the drug study is replicated and the effects are the same – is it then proven? What if it is replicated another five times, and one study produces no effect whilst the other four do? The key here is that the data only support a hypothesis (to a certain degree of confidence), and one thing we really need to do well is evidence synthesis (a toy illustration follows this list) – this often gets forgotten by those with a close affinity to reductionism.
– The other blog on indigenous knowledge is certainly interesting. It would be worth considering how this area overlaps with the development of expertise, including the implicit/tacit knowledge developed via years of practice.
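To make the evidence-synthesis point a little more concrete, here is a minimal sketch – with entirely made-up numbers, so purely illustrative – of the simplest pooling approach, a fixed-effect (inverse-variance) meta-analysis. Each hypothetical replication contributes an effect estimate and a standard error, and more precise studies carry more weight in the pooled result:

```python
import math

# Hypothetical effect estimates (improvement vs placebo) and standard errors
# from six imagined replications of the same trial; one finds no effect.
studies = [
    (-0.05, 0.20),  # the null result
    (0.40, 0.15),
    (0.35, 0.18),
    (0.50, 0.25),
    (0.30, 0.12),
    (0.45, 0.22),
]

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies count for more in the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Even with one null result in the mix, the pooled estimate stays positive, but the confidence interval shows how much uncertainty remains – no single study 'proves' anything; the weight of evidence merely shifts. A real synthesis would usually prefer a random-effects model, which allows for genuine differences between the conditions of each study – which is exactly Kirsty's Reason 1.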
Happy to discuss!
G