Sometimes we are our own worst enemies. This is especially true of an innate feature of human reasoning: cognitive biases. These are biases we ourselves construct; we build our unique subjective realities on these systematic deviations from norms of rationality and judgment. Scientists have catalogued a great many of them, and more are being identified all the time.
Consider just one family of them. In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (the chance that the memory will be recalled at all, the time it takes to recall it, or both), or that alters the content of a reported memory. Memory biases alone come in many varieties.
With so many biases working against us, it pays to be mindful of our own interpretations of reality, our families’ interpretations, and our leaders’ versions of it. Some might think that the scientific method lets us calibrate our thinking and guard against these distortions. Unfortunately, this too is a concern, because cognitive biases are found everywhere, even within peer-reviewed science itself. Here we will define bias as experimental outcomes artificially pushed in a favored direction. Scientists face pressures such as the competition to publish, the need to maintain a lab and staff, career advancement, and financial gain, and under those pressures they can easily let their guard down and allow errors to creep into their papers. The consequences can be serious.
Psychologist Brian Nosek of the University of Virginia is a crusader bent on ridding science of confirmation bias, and he believes that this phenomenon, which he calls “motivated reasoning,” is the most common bias in science: we interpret observations to fit a particular idea, instead of fitting ideas to particular observations (Ball, 2015). Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods, says, “Seeing the reproducibility rates in psychology and other empirical sciences, we can safely say that something is not working out the way it should…cognitive biases might be one reason for that” (ibid.). Science reporters Adam Marcus and Ivan Oransky, founders of the Retraction Watch website, a spinoff of the Center for Scientific Integrity, put it bluntly: “Simply put, much, if not most, of what gets published today in a scientific journal is only somewhat likely to hold up if another lab tries the experiment again, and, chances are, maybe not even that” (Marcus & Oransky, 2015). Cognitive biases, in short, are alive and well in the world’s scientific community.
In theory, peer review should catch confirmation bias, but review can take a long time, and false or misleading information can spread through the scientific community before it is flagged and retracted. This is a progress trap, and it likely wastes mountains of time and labour. At the same time, peer review is itself a large part of the problem: as is well known in science, tenure and grants depend on frequent publication in reputable journals. This need to publish drives scientists to unconsciously select data and analyze results in ways that will satisfy journal referees (Ball, 2015).
All of these findings call into question whether truth is still a concern in the industry of science. If “truth” remains a useful concept in science, then it is our responsibility to guard our knowledge sources against distortion. The open nature of science fundamentally relies on sharing knowledge, and it carries the implicit condition that this knowledge is accurate, an assumption that is often not met. Biases, both known and unknown, are therefore detrimental because they compromise accuracy, and with it the universal usability of what we share.
A 2016 survey by Nature raises some legitimate concerns (Baker, 2016). When 1,576 researchers were asked about reproducibility, 52% said there was a significant reproducibility crisis, 38% said there was a slight crisis, 3% said there was no crisis, and 7% said they did not know. If an experiment cannot be reproduced, its results may be misleading. More than 70% of researchers surveyed had tried and failed to reproduce another scientist’s experiment, and more than 50% had been unable to reproduce one of their own. In spite of this, most said they still trusted the published literature, which seems to ignore the finding that a majority of experiments appear to be flawed by bias.
Over 60% of the surveyed scientists pointed to two leading causes of irreproducibility: pressure to publish and selective reporting. Researchers being stretched thin is another factor. If senior researchers are too busy with their own work to mentor graduate students, those students may go on to start their own labs without adequate training, a pathway that is more error-prone and raises the risk of irreproducible results (ibid.).
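To make the mechanics of selective reporting concrete, here is a minimal simulation sketch in Python. It is purely illustrative: the sample sizes, the significance threshold, and the assumption that every experiment tests a null effect are our own choices, not figures from the survey. The sketch shows how publishing only “significant” results lets false positives dominate a literature, and why those findings then fail to replicate.

import random
from statistics import NormalDist, mean, stdev

random.seed(42)
NORMAL = NormalDist()

def run_experiment(n=30):
    """Compare two groups drawn from the SAME distribution, so there is
    no real effect; any 'significant' result is a false positive.
    Returns a two-sided p-value from a simple z-test on the mean difference."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5  # standard error of the difference
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NORMAL.cdf(abs(z)))

ALPHA = 0.05  # the conventional publication filter
published = [p for p in (run_experiment() for _ in range(10_000)) if p < ALPHA]

# Attempt to replicate each "published" finding once.
replicated = sum(1 for _ in published if run_experiment() < ALPHA)

print(f"'significant' findings published: {len(published)} of 10,000 null experiments")
print(f"replications that also reached p < {ALPHA}: {replicated} of {len(published)}")

Roughly 5% of the null experiments clear the publication filter, and only about 5% of those false positives “replicate” by chance, so nearly every published finding fails on a second attempt. Real literatures mix true and null effects, but the filter operates the same way, which is consistent with the high replication failure rates the survey respondents report.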
As you have read, cognitive biases can be seriously problematic. They can produce false outcomes in data sets around the world, and they have crept into every corner of our discourse, from politics to parenting. We will propose a way to improve the quality of our information and strengthen our decision-making, but we are not done examining the questionable foundations of knowledge quite yet.