We commonly think of science as the exemplary endeavor of rationality. Rationality can mean several things, but here I have in mind transparency: science is praised in public because every aspect of its practice can, in principle, be inspected and judged according to robust standards. On closer inspection, however, scientific practice relies on a variety of factors that severely limit this transparency. This is not to say that science is irrational. Rather, the rationality of science depends heavily on its transparency, and yet several aspects of the practice of science are quite opaque. Far from being fully transparent – as though a single scientist could directly verify every scientific claim – science actually relies deeply on trust. Scientist X has to trust scientist Y because much of what X does depends on what Y has done. This is the idea of epistemic trust. Let me be more precise about this.
To understand the notion of epistemic trust, we must first understand the notion of epistemic dependence. One person is epistemically dependent upon another, as Susan Wagenknecht explains, when “the former cannot acquire and/or create knowledge independently of the latter.” This is the rule in collaborative research in science. Because all contemporary science is collaborative to some degree – even an individual scientist relies on the previous work of others – epistemic dependence is an essential condition for any scientist. A person cannot assess every piece of evidence directly; she has to rely on what others have said.
Epistemic trust is thus central to being a scientist. But how do we know whom to trust? How do we know that a person or a piece of evidence is trustworthy? It is striking that an endeavor that aims at complete transparency depends so much on leaps of faith.
These leaps of faith, however, are not completely blind: scientists do not rely blindly on the work of others. There are documented mechanisms by which scientists indirectly assess whether a peer is trustworthy. In general, the notion of trust is epistemic and at the same time has ethical ramifications, because trusting another scientist means trusting that the person in question is both knowledgeable and truthful – trust rests on both the epistemic and the moral character of the scientist.
Decisions about whom to trust have further ethical ramifications. Hanne Andersen discusses a series of misconduct cases (notably data fabrication) in which junior scientists committed the fraud and their seniors had simply trusted the wrong person. She points out that senior co-authors are responsible, to a certain extent, for the work done by more junior scientists. Responsibility thus adds a moral dimension to trust. In cases of misconduct, senior co-authors may not be formally charged with misconduct themselves, but they bear responsibility precisely because they trusted, without sufficient scrutiny, a person who turned out to be dishonest. Scientists should therefore make an effort to assess, to the best of their capacity, the moral and epistemic character of their peers.
For further reading:
- Hanne Andersen. "Co-author Responsibility: Distinguishing between the Moral and Epistemic Aspects of Trust." EMBO Reports 15 (2014): 914–18.
- Susan Wagenknecht. "Facing the Incompleteness of Epistemic Trust: Managing Dependence in Scientific Practice." Social Epistemology: A Journal of Knowledge, Culture and Policy 29, no. 2 (2015): 160–84.