Keep it complex? Journal Club with Stefano Golinelli

Informing policy debates is at the heart of the Grantham Centre’s mission. But is reaching policy positions informed by hard scientific facts really as straightforward as it might seem? At this week’s Journal Club, Grantham Scholar Stefano Golinelli led a discussion on how researchers can handle hidden complexities.

This week’s paper

‘Keep it complex’ by A. Stirling

Now more than ever, public debates raise the need for policies informed by the authoritative judgment of experts. For instance, forming a new generation of scientists capable of contributing to food security and environmental sustainability lies at the heart of the Grantham mission. And yet, is it really possible, and always desirable, to be ruled by science-based decisions?

I am well aware that science is vital for a healthy decision-making system but, as a political scientist, I am also inclined to look at the other side of the coin – that is, the multiple ways in which policy-makers and various lobbies exploit science to serve their own interests. As such, I believe the concept of science-based policymaking shouldn’t be approached uncritically, and I felt that this paper – written by a scholar with extensive experience on scientific advisory committees – would provide important insights for my colleagues and me.

Andy Stirling’s main argument is a call for humility about science-based decisions. Knowledge is inherently plural (in that truths can be arrived at from multiple perspectives) and often incomplete – points he illustrates by showing the huge variability in scholars’ appraisals of the health and environmental risks associated with different energy technologies. Accordingly, uncertainty should not be hidden behind practices – such as risk assessments – that seemingly provide definitive interpretations. Rather, uncertainties should be explicitly acknowledged and addressed through forms of scientific advice that take them into account. As shown in the Uncertainty Matrix, there are manifold approaches to enabling a more plural and conditional treatment of issues characterised by incomplete knowledge. They entail a combination of quantitative and qualitative methods, and recognize minority opinions as well as alternative framings of possible options, contexts, outcomes, benefits or harms. These exercises may not provide policy-makers or the public with clear answers but, in the author’s view, they are critical to making scientific advice more rigorous, robust and democratically accountable.

As I expected, the other Grantham Scholars’ reactions to this reading were quite mixed.

On the one hand, most of us were fully aware of the plurality of science, as well as the potential interference of interest groups in science and policy processes. In this latter regard, a colleague mentioned how powerful lobbies have framed responses to obesity in terms of the need for more exercise, although scientific research suggests that a drastic reduction in sugar consumption would be a more effective solution.

Yet, on the other hand, we all wondered how the author’s suggestions could be followed without damaging the added value that scientific advice can provide to policy decisions. Indeed, how do we decide which opinions should be included in plural scientific advice? And, even more importantly, how can we help interested parties assess the credibility or plausibility of different claims?

Precisely because we are aware of the inevitable plurality of scientific views, we noted the risks of treating all opinions as equally valid. For instance, we discussed the debate on vaccines: here, some authoritative scientists have raised concerns about their risks, but the vast majority of the scientific community remains convinced of their safety. From a scientific point of view, discussing the former position is absolutely necessary and worthwhile. However, wouldn’t its legitimation by official scientific reports encourage behaviours that, according to most scientists, entail significant risks? Furthermore, just as it enables exchange and deliberation, acknowledging dissenting voices may also open the door to pseudo-scientific arguments intentionally crafted to serve specific interests. As such, attempts to democratize decision-making processes could, paradoxically, create further opportunities for manipulation.

In sum, while not directly relevant to most of our projects, this reading provoked a vibrant debate in the room. Some of us were concerned that a move towards plural and conditional expert advice would exacerbate “the perils of group dynamics or the perturbing effects of power”, while others concurred with the author that, while not a panacea, this move would at least make “these influences more rigorously explicit and democratically accountable”. In a sense, I would argue that our disagreement is itself an illustration of the plurality of knowledge, and of the inevitable controversies that arise in the sphere of politics. Like it or not, decision-making is, and will remain, extremely complex and controversial.