Sustainability and Transitions Research: Taking the High Road

In a recent opinion paper, [1] Julian Kirchherr complains that “up to 50% of the articles that are now being published in many interdisciplinary sustainability and transitions journals may be categorized as ‘scholarly bullshit’”. These articles, Julian states, “typically engage with the latest sustainability and transitions buzzword (e.g., circular economy), while contributing little to none to the scholarly body of knowledge on the topic.” Despite their poor quality, these articles “tend to accumulate significant citations and are thus welcomed by many journal editors”. System-wide incentives to focus on publication metrics drive “more and more authors into publishing on the very latest buzzword, e.g., ‘circular economy’, which creates a perpetuum mobile respectively vicious circle (depending on your perspective) regarding publications on such topics. […] All contributors (journal editors, authors) know they may be producing scholarly bullshit; however, publishing such works is advantageous for everyone involved in this contemporary academic system.”

Julian’s piece makes for refreshing reading indeed, as it candidly expresses an impression shared by many sustainability and transitions researchers: that the content of some publications does not justify the impact they generate. The general fear is that a highly self-referential publication bubble is emerging, with little actual knowledge and, ultimately, little real-world decision support and impact, and that this state would be, despite the success on the surface, deeply unsatisfying for the colleagues involved and fundamentally wrong by the core ethical standards of science.

We have no metrics for evaluating the actual knowledge generation, quality, relevance, and impact of research, probably because these aspects are immeasurable. Instead, we use indicators, such as citation counts or altmetric scores (based on internet activity around a publication), as proxies for impact and quality. History books, managers, and behavioral economists know that metric-based evaluation can create perverse incentives. Some funny and many tragic examples are listed on Wikipedia. [2]
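To make concrete how crude such proxy indicators are, and hence how easy they are to optimize for, here is a minimal sketch (my own illustration, not from Julian’s paper) of one widely used citation-based indicator, the h-index:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h of the given papers have at least h citations each."""
    # Rank papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

The sketch shows the core limitation: the indicator sees only citation counts, never whether the cited work actually advanced knowledge, which is exactly the gap that metric-driven incentives exploit.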

In other words, Julian’s impression is that about half of what is published in sustainability and transitions research serves only the proxy success indicators and has become decoupled from the true mission of sustainability science. The feedback he received on Twitter shows that many colleagues in the field tend to agree with him.

This problem deserves further investigation and discussion. In particular, it would be interesting to find out how the sustainability and transitions research community ‘performs’ in comparison to other fields, like psychology, for example, where there is also a wide debate on the value of much of the research published in recent decades, due to misapplied statistical methods and a strong publication bias towards interesting and ‘significant’ findings. (See the so-called replication crisis. [3,4] Interestingly, the share of research there that is not reproducible is also 50% and more… Maybe a 50% bullshit rate is the new norm…)

While I tend to agree with Julian, the sad part is that he hardly touches upon the solutions for reducing ‘bullshit’ that are already in place. So, let’s get to the core message here: no matter how big this problem is, we know how to fix it! While scientists, like all humans, respond to behavioral-economic incentives, we are constrained in our actions by the ethical principles of our fields. A core ethical principle is adherence to the state of the art of one’s respective research field.

Science is a self-governed enterprise. This means that the standards for good scientific work are not set and enforced by external auditors but by the respective scientific communities and their members, the scientists, themselves. This arrangement ensures scientific freedom and independence, and it exists because the community members, and no one else, are the experts. From this freedom arises a great responsibility: each field must organize itself to define what constitutes sound research in that particular field and what does not.

The standards of sound research are continuously enforced by the members of the community: in teaching, in advising doctoral students, in journal editing and peer review, in admitting submitted abstracts to conferences, in selecting which papers to cite, and in funding and promotion decisions.

The standards for conducting state-of-the-art research are never static. They evolve over time to take into account new ethical standards, technical possibilities, and interdisciplinary research approaches. For example, conducting double-blind psychological and medical experiments only became the gold standard in the middle of the 20th century. [5]

The International Society for Industrial Ecology (ISIE), [6] for example, has worked for decades through its different organizational bodies to establish and raise the standards of sound systems research on how society uses materials and energy and on solutions to complex environmental problems, including the challenge of establishing a more circular economy (CE). Examples of this work include:*

  • Contributions by many industrial ecology (IE) community members to the ISO 14040 and 14044 standards for life cycle assessment and to the System of Environmental-Economic Accounting (SEEA) via the guidelines for economy-wide material flow accounting.
  • Development of data transparency and accessibility guidelines for publications in the Journal of Industrial Ecology, [7] led by the journal’s founding editor Reid Lifset and the then ISIE president Edgar Hertwich.
  • The work by the ISIE sections, e.g., the section on socio-economic metabolism, to establish guidelines for transparent and reproducible research. [7a]
  • Development of teaching material with best-practice examples of how to tackle recurrent research problems. [8]

In addition, I have tried to cast my own modelling and general research experience into recommendations, focusing on cumulative research for IE/CE modelling software [9] and sustainability research in general. [10]

Given that Julian’s piece targets circular economy research in particular, it is clear that the members of the circular economy research community need to raise their standards of what constitutes sound research and enforce them in their different activities. In particular, the newly established circular economy research organization [11] needs to expand its activities to include the formulation of best research practices.

We all have a moral obligation to further develop the state of the art of research and to live up to the standards set by the community. Accepting and fulfilling this obligation is the High Road to sustainability research of high quality and relevance.

While we cannot measure the actual knowledge generation, quality, relevance, and impact of research, we do have a broad scientific community of sustainability and transitions research that values sound science! And even if we cannot identify sound contributions by measurement, the scientific community will recognize and reward their authors, while it will tend to ignore and ultimately forget those contributions that do not meet its standards.

 

*) Please write a message to in4mation@indecol.uni-freiburg.de if you have other examples that should be listed here!

 

References (access date: June 2nd, 2022)

[1] https://doi.org/10.1007/s43615-022-00175-9

[2] https://en.wikipedia.org/wiki/Perverse_incentive

[3] https://www.nature.com/articles/nature.2015.18248

[4] https://en.wikipedia.org/wiki/Replication_crisis

[5] https://en.wikipedia.org/wiki/Blinded_experiment#History

[6] https://www.is4ie.org/

[7] https://jie.yale.edu/data-openness-badges

[7a] https://doi.org/10.6094/UNIFR/217970

[8] http://www.teaching.industrialecology.uni-freiburg.de/

[9] https://onlinelibrary.wiley.com/doi/10.1111/jiec.12316

[10] https://doi.org/10.1038/s41893-019-0443-7 (also available on sci-hub.ren via DOI search)

[11] https://www.is4ce.org/

 

 

4 thoughts on “Sustainability and Transitions Research: Taking the High Road”

  1. Dear Stefan,
    is it really your impression that there is so much bullshit out there? I wonder. I never go through the journals mentioned by Julian to look at everything; I always search for specific topics. When I do that, I find so much interesting stuff, good work, and novel approaches that I am impressed by the progress in our field. Now, I have the sense that there are many CE ‘experts’ out there, attracted by all the hype; maybe they all write in these journals with dubious reputations.
    I agree that, in principle, peer review and good review standards are the solution. You point out that continuous investment by the practitioners in a field is required to ensure good quality. That is really important. At the same time, we have our governments pushing for ‘open access’ publications. It is notable that many of the journals that publish bullshit, like those from MDPI (publisher of Sustainability, Energies, Materials …) and Frontiers, are recognized as not upholding peer review standards. In Norway, MDPI journals just lost their recognition as an academic publishing channel. It is clear that the pay-to-be-published model of OA journals creates incentives to accept anything that looks like an academic paper. I think providing access to the results of publicly funded research is a conundrum to which we have not yet found the solution.
    There is a BTW (by the way): one of my professors in grad school was a pioneer of Ecological Economics. He told me that a lot of scholars entering the field needed to ‘see the light’. As a result, they all published similar conceptual papers that were not new to the field but were evidence and product of the intellectual development of the individuals. So maybe we need outlets, such as high-quality debate journals or blog channels, as a complement to scientific journals, where such work can be published. It might have a social value, even if it lacks scientific novelty.
    Edgar

  2. Thanks for sharing your thoughts, Edgar!

    For me, it is quite the same: I focus on the good stuff! Think of the publications on the energy service cascade, the stock-flow-service nexus, the biomass sustainability assessments, the many good case studies on material efficiency that we use in our own work, or the increasingly transparent LCAs of new energy technologies. Still, we have too little transparency in many studies, too much confusion about system boundaries and an inconsistent mix of attributional and consequential perspectives, too many poorly defined indicators due to a lack of underlying system definitions, and too many frameworks that are not properly connected to established system perspectives. This means that the contribution of such studies to a cumulative body of knowledge is limited.

    I also agree that it is important to have publication channels where scholars can document their own intellectual development, and let the community (e.g., via citations) decide later on the impact or importance of this work, i.e., whether it is a relevant contribution to the field.

    Moreover, I think that this ‘impact dimension’ is different from the ‘quality dimension’. Good research is both of high quality and makes a relevant contribution. In practice, the two dimensions can be traded off against each other to some extent. For example, a ‘trailblazing’ paper could be based on a rough analysis yet raise a crucial issue, whereas a publication documenting the learning process of a doctoral student may be of the highest clarity and consistency but make a rather incremental contribution to knowledge. High pressure to deliver both high quality and high impact at all times may quickly become unhealthy, especially for researchers who are still in training. Such pressure may lead to a situation where the quality dimension is neglected to the degree that ‘scholarly bullshit’ is produced, especially in communities and journals where quality control does not work properly. During research training, especially when working with doctoral students, I prioritize the quality dimension over the impact dimension, as I firmly believe that being able to make relevant (= high quality and impact) contributions to a scientific field first requires sound research training focused on quality, including adherence to the standards of the field.

    If the authors fully stand behind their work in terms of its quality, I also consider MDPI journals a legitimate outlet, especially in cases where the learning experience of an individual researcher is documented. But I am well aware of the problems with these journals, and I’ll take a closer look at the rationale provided by the Norwegian National Publication Board for removing them from the list of credible publishers (https://npi.hkdir.no/organisering/npu/referat?id=1109, in Norwegian only) and also at Oviedo-García 2021, doi: 10.1093/reseval/rvab020.

  3. Unfortunately, I see a bias towards subjects that are ‘en vogue’ in which papers get cited well and prestigious grants get awarded. In highly competitive settings, this makes the difference of the point or two that decides whether an award is made or not. It may also be more of a problem in the social sciences, where empirical validation is much more difficult and many different schools tend to compete to explain the same kinds of subjects. Julian is provocative (as usual), but there seems to be a grain of truth in his point.

  4. Dear Stefan,

    thank you for taking the time to point to solutions in your blog. Some things you may find interesting:

    1) Replicability is not always the best way to ensure the validity of results, and it is increasingly being debated in medicine and psychology – see: https://www.nature.com/articles/d41586-018-01023-3

    2) There are current efforts by the German Research Foundation to tackle the obsession with metrics: https://www.dfg.de/service/presse/pressemitteilungen/2022/pressemitteilung_nr_15/index.html

    I agree with Edgar’s comment but would like to add some other reasons:

    History demonstrates that much research was labelled “bullshit” or “useless” by contemporary peers and, at times, later turned out to be groundbreaking. So the stance that any one of us can easily categorize what “bullshit” looks like – especially in an extremely large and diverse field such as sustainability (transitions) – seems overly provocative. Unfortunately, being overly provocative risks producing unreflective and attention-seeking arguments rather than starting a solution-oriented dialogue.

    Certainly, the current research “market” is overheated and people are pressured to publish a lot, fast. At the same time: does that automatically make their research bullshit (in 50% of cases)? Maybe the sheer number of publications just hides important results more, and this is one reason why people tend to become so obsessed with literature reviews (which the piece also generally criticizes as bullshit).

    To resolve the many issues that current science faces, I would find it very helpful to collect and orchestrate ongoing efforts and debates to change the system (like the two listed above). To do so, we need a task force – which the sustainability science community is predestined to build!
