Over the last eight years, I have peer-reviewed about 100 manuscripts for the industrial ecology (IE) community. I now accept around two invitations per month, and (in most cases) enjoy learning about fresh research from the various branches of the field and duly checking and giving feedback on it. When accepting review invitations from the editors of the various journals that serve as outlets for IE research, I tend to focus on applications of material flow analysis and socioeconomic metabolism, dynamic modelling, input-output analysis, life cycle thinking, as well as method development and conceptual pieces. In most cases, I accept manuscripts that, as far as I can tell from the abstract, I would want to read anyway once published; therefore, I consider peer review not a burdensome duty but an integral part of my workflow.
Usually, I spend between two and four hours reviewing a manuscript and its supplementary material. That may sound like little time for scrutinizing months, sometimes years, of colleagues' work, but I have developed a routine, and I see my job as giving the authors the feedback they need (and deserve) to move on with their work, rather than understanding each bit of the research in great detail. Of course, forming a holistic impression of the work (novelty; overall conveyance of the message; consistency of introduction, research questions, methods, results, and discussion; language and style; quality and clarity of the text; framing) while spotting the relevant problematic details (gaps in literature coverage, logical gaps or redundancy, unclear method and data descriptions, unclear or erroneous figures and tables, gaps in the discussion) is a must for each review.
For the vast majority of manuscripts, I recommend a major revision in the first round of review. For me, the correctness and quality of the work and its presentation determine the outcome of the review, not so much whether I think it is relevant or interesting (that should be decided by the journal editor and the readers, though some colleagues hold a different opinion). There is no consistent pattern of problems with incoming manuscripts, but rather a set of typical reasons for recommending a major revision, such as:
- gaps in the documentation of data and/or methods, either in the manuscript or the appendix, which make it impossible to understand crucial details of the research procedures
- unclear or erroneous figures or tables; irrelevant or missing discussion items
- inconsistencies in the manuscript like disconnection between introduction and method choice
- unclear or confusing phrases or even sections
There is no perfect manuscript, but issues that hamper the readers' understanding, as well as outright mistakes or omissions, must be fixed. I rarely recommend rejection of a manuscript; one example is conceptual studies whose content is not well thought through, so that I fear the paper would add to the confusion rather than clarify things. Trivial mistakes, sloppy work, or mere negligence on the authors' part annoy me, but if good work shines through, they do not stop the show. When asked for a second review, I am usually satisfied with the authors' response and then recommend minor revisions or even immediate acceptance.
I have learned from mistakes and try to eliminate any emotion from my feedback. I try to always be objective and polite, especially when expressing substantial criticism, but I do not hide my opinion or sugar-coat things. After all, it is science we are talking about! When an editorial decision is made or a revision is assessed, I get to read the other reviewers' comments. I am often positively surprised by how well my feedback aligns with my colleagues' comments, while we all still have many specific things to say. My recommendations regarding the level of revisions required often match the judgement of the other reviewers, though I have the impression that my often detailed and pointed feedback leads editors to label my comments as those of the much-feared 'reviewer 2'.
When I receive an invitation to review from an editor, the only content I get is the title and abstract of the manuscript. Hence, the abstract is a crucial element: for many potential reviewers, it is the basis for deciding whether or not to accept the invitation. The language of the abstract must therefore be especially concise and provide an excellent description of the problem setting, the methods, and the main results. A badly written abstract usually means that I decline the review request. I am also sceptical of abstracts that state the problem at great length but barely describe the research conducted. Likewise, abstracts of quantitative studies that contain no numbers, only qualitative descriptions or result summaries, often lead to a decline. If the authors don't bother to report a main quantitative finding in the summary, why should anyone bother to read their work?
The increasing number of high-quality manuscripts in recent years marks a positive trend in our field. While some established research groups consistently deliver high-quality work, many excellent research reports now come not only from the IE 'founding regions' of North America, Western Europe, and Japan, but from a wider set of countries. The growing volume of highly relevant and well-executed work from our Chinese colleagues is particularly impressive and a testament to the growing relevance of IE research in the country.
To me, peer review remains a crucial step in the quality control of scientific work. From my experience as a reviewer and an author, I can say that for manuscripts based on sound research but with average documentation and presentation quality, the peer review process leads to drastic improvements. Constructive criticism by peers is motivating and reassuring. Equally important: both editorial screening and peer review filter out substandard or even fake science, which is crucial to ensuring the credibility of the scientific enterprise, especially now that many predatory and low-quality journals are around. In my opinion, MDPI journals mark the bottom line, both for publishing and for reviewing.
Many things need to improve in the future to secure the quality of scientific work in industrial ecology. For me, the most important area for improvement is the reproducibility of all modelling and data-treatment activities that lead to the results presented. Here, we need more best-practice examples and subsequent guidelines, so that authors know which steps to document and how to do so, and reviewers have an easier job assessing the transparency of the work. Another emerging discussion item is the openness of the review process and, with it, the archiving of manuscripts on preprint servers. I say yes to preprint archiving as a way to speed up the dissemination of scientific insight; it will also lead to quality improvements, as the submitted (= archived) manuscript will be openly accessible. I do think that anonymous peer review will still be needed in addition to open comments, especially when the work submitted has major flaws that need to be addressed directly; here, anonymity may help.
What are your thoughts on the future of peer review and scientific publication in IE in general? Share your views on the IS4IE forum at https://is4ie.org/forum/general/7
- On the future of scientific publishing for sustainability research
- Sustainability and Transitions Research: Taking the High Road
- New Guidelines for Data Modeling and Data Integration for Material Flow Analysis
- Good Scientific Practice in Industrial Ecology – A Factsheet
- Launching the prototype of an Industrial Ecology Data Inventory