Produced by the UGA Arts Collaborative, an interdisciplinary initiative for advanced research in the arts at the University of Georgia. All rights reserved. For more information visit: https://arts-collab.uga.edu
Mark Callahan: Welcome to Feedback! You're about to hear an interview with Mark Farmer, a Professor and Chair of the Division of Biological Sciences here at the University of Georgia. We're going to talk about peer review, and the role of mentorship in the sciences. This is our conversation series about critical evaluation across disciplines, and I hope you enjoy it. Here's Mark:
Mark Farmer: Peer review really is the gold standard, I mean, you cannot be considered a reputable scientist unless your work has gone through the process of peer review, and it's very, I would say, well defined and well structured. The primary thing, of course, is to have what we call peer-reviewed publications. And the process is such that anybody, you don't have to have credentials, you don't necessarily have to have the degree to submit a paper to any journal. Although, of course, in the sciences, most people in the 21st century who are practicing scientists do have the PhD. But in theory, you or your family or anybody else, if they have data that is well collected, well analyzed, and repeatable, and have ideas, can submit them to any journal, from Nature and Science on down.
When that paper is submitted, the process is such that an editor will take that publication and anonymously distribute it to at least two, hopefully three experts in that field who will then under confidentiality read the paper, look at what the authors have had to say, look at the data and the way it was analyzed, and the conclusions that are drawn from it. Now, a reviewer and/or an editor may strongly disagree with the conclusions that are reached, but if the data is well collected, it's well analyzed, and if the ideas are well defended, even very controversial papers can get published. So a misconception, I think, in the general public is that a paper only makes it through the peer review process and gets published if the larger scientific community agrees with it. But in fact what they agree with is that a series of experts have looked at the data and concluded that it is valid data, that it was collected properly, analyzed properly, and in cases where conclusions are unexpected or in some cases controversial, sometimes overthrowing paradigms, that's not necessarily a reason that a paper's going to be rejected by any publication.
Having gone through that process, the editor will then make final adjustments. Usually it's a bit of a back and forth; it's rare that a paper sails through on the first go-round. Usually the feedback that one receives from the anonymous reviewers is extremely valuable. It can often point out potential flaws in the data or in the way that the material was analyzed, and it then goes back to the authors to revise, usually. Sometimes this is just a matter of rewriting and clarifying some passages; in other cases, it's actually going back into the laboratory and doing additional experiments that will bolster or support the conclusions that are made. So it is very much a dynamic process among your colleagues, among your peers, which is why it's called peer review. And so the only way that scientists can have confidence in ideas that are put forward, and the general public can have confidence in the ideas that are put forward, is to rely on this checks and balances system that we have in place to really keep - again, not to keep bad or controversial ideas down or out of the public sphere, but to make sure that those ideas that are put forward are well substantiated, are supported by data that other people could go and independently check if they choose to, and are not just the ramblings of an individual who happens to take bits and pieces of scientific-sounding information and weave them together into a story that is really more of an opinion piece than it is a scientific publication.
MC: Let's talk a little more about those reviewers. Who are they, how many are there, how are they selected?
MF: So, for publications, things that make it out into the press, usually those are a minimum of two, can be as many as four or even more. They're generally chosen by the editor or the associate editor of that journal. So this is someone who is knowledgeable about the field but may themselves not be an expert in the field - the editors - but the editor's job, then, is to identify true experts and to pass on the manuscripts to them, and to receive that feedback anonymously and then to deliver it back to the original authors anonymously. This protects, of course, the integrity, because you don't necessarily have to worry - I remember reviewing papers as a junior scientist, as an assistant professor working my way up, and in some cases reviewing papers that were written by senior colleagues that I was likely to encounter at a big national or international meeting - if I was critical of their work, or in other ways demanded that they do things that would improve the work, you certainly want to protect people from that kind of retribution.
In the case of federal grant funding, that's another area in which people don't fully understand how a scientist would, let's say, get their grant approved and have monies from one of the federal funding agencies delivered to them and the University of Georgia to conduct the research. And that's a bit of a more elaborate process, again, you have the equivalent of an editor, which would be a program officer at either the NIH [National Institutes of Health], the DOE [Department of Energy], the NSF [National Science Foundation], one of these large funding agencies. And they will not only rely on external reviewers, again anonymously, but also will call a panel together and at that panel there will be a minimum of three scientists who have carefully read the grant proposal and they will openly discuss it in a room with anywhere from twelve to twenty other experts in the field. And then an open discussion ensues about the merits of that particular proposal and recommendations.
MC: This is an actual meeting of people in a room together.
MF: Face-to-face, twenty people, sometimes for two or three days at a time, eight hours a day and discussing proposal after proposal, and the relative merits of them. And ultimately arriving at a ranking so that the program officers, whose job it is to effectively administer - just like an editor has a limited number of pages available, program officers have a limited number of dollars available. And so they take that information and make funding recommendations and basically spend the money as best they can until it runs out. But again, when you look at an example in which a research grant has been approved, has made it through that process, I think the average citizen can have a lot of confidence in knowing that it wasn't somebody's friend at the NIH that gave them this money, it wasn't somebody's uncle that they knew, you know, or some sort of connection, it really goes through a very, very rigorous process. And again, much like the publication of papers, it's quite unusual that a grant gets funded on its very first application. Usually that feedback system, again, is very, very valuable to the researcher to refine their arguments, to strengthen their arguments, perhaps to do additional experiments and collect additional data to support the hypothesis. So it's a very constructive process. It sometimes doesn't feel that way to the researcher who gets rejected, but it can be and is intended to be a very constructive process in which the final product then is a much better entity.
MC: And what's in it for the reviewers?
MF: Ah... not much. As I said, you can, and many people do, of course, list participation in the review process as part of their professional contributions to the field. It has to be done carefully, it has to be done somewhat anonymously; in other words, "I was a panellist for the NSF in this program, but I'm not going to say when or where," again to protect the integrity of the system. I think the thing that's really in it for the reviewers is that they benefit from the integrity of the process, and if they are unwilling to participate in that, then they're effectively rejecting what is the standard in our industry. So it's a civic duty, it's a professional duty.
MC: Is there a formal mechanism for being selected as a reviewer?
MF: It's based on reputation. It's one of the things that I did as a program officer at the NSF, was to intentionally seek out junior scientists in their field. One, because they're often more knowledgeable, they're on the cutting edge of many things. But the other thing is it's incredibly instructive, and it's a form of mentoring. I once was thanked profusely by a young scientist who I had invited to serve on a panel, and at the end of the three days she was just exhausted, and worn out, and your brain just doesn't work any more, and she thanked me, because she said she learned so much about the process by being on the inside that it would make her a much, much better grant writer and a much better scientist in the future.
MC: So what makes that process successful?
MF: Certainly having the right expertise in the room is critical. One of the most frustrating things that any scientist goes through, and we all go through it, is when we get back feedback from these anonymous reviewers and it's clear that A) they didn't read our paper carefully, or B) they don't know what they're talking about. The feedback mechanism is such, at least with paper publications, that you have the opportunity to challenge the reviewers' conclusions. You can write back to the editor in a formal and informed way saying, I respectfully disagree with this particular reviewer's comments, as you can see on page four, I address that point, as you can see from this paper that I cited, this work has already been done, whatever it is that you perceive as an error in judgement as part of the review process. The same is true for grant reviews. The problem is that you often don't have an opportunity for rebuttal until the next year, because many of these programs have moved to a once-a-year submission date, which means that by the time you submit your grant, the external reviews come in, the panel meets, and the feedback is delivered back to the author, we're talking a lag period of perhaps six months. Then, in order to address those points you can't simply write back to the program officer and say, but wait a second, I said that on page four, give me the money! The process, unfortunately, doesn't work that way, and instead, what you have the opportunity to do then is to resubmit that proposal, often directly addressing or better anticipating those critiques you received from the first part of the process.
MC: Can you talk about some of your first experiences with peer review?
MF: It's funny, my very first experience as a graduate student, I had a paper published, or, I should say, accepted for publication. And it had even gotten - this was in the pre-electronic days - to the point where I had received back from the editor the page proofs, in other words, the mock-up of what this paper was going to look like when it finally came out in hard copy. And I can remember the excitement as a third-year graduate student, holding in my hands something that was soon going to be out and read by my colleagues around the world. And ironically, the day before I had been working in the library, the Princeton library archives, and had been flipping through a journal that was published in the 1860s, and I turned the page, and there in a very, very small figure, no more than half an inch, was a drawing of the organism which I said was new to science, and which Felix Dujardin had published on in 1865. So I was faced with a very, very uncomfortable realization about priority and scientific integrity: my paper had gone through the review process, other experts in the field had concluded, as had I, that this organism was new to science and was worthy of the new name that we were creating for it, and now I was staring, literally, at evidence that said otherwise.
I can remember walking into my major advisor's office, who was a co-author on the paper with me, and saying, so if somebody published on this before us, what would that mean? He says, well, it would mean that our paper is invalid. And I said... yeah. I presented him with photocopies of what I had found in the Princeton archives. He said a few choice words, but in the end we immediately contacted the editor and said, we have to retract this paper, we can't go forward with its publication because we know for a fact now that it's incorrect. So what I did instead is I rewrote the paper, certainly I had images, I had information that Felix Dujardin in 1865 could never have had. So I rewrote the paper acknowledging his priority, acknowledging his work but adding considerably to our knowledge of this organism, and I sent it out for review again.
This time it made its way to a fellow who recently passed away named Paul Silva, and Paul is one of these giants in my field, who just had a photographic memory, and he said, well this is a very, very good paper, but you've clearly missed these two important references, one from 1882 and the other from, like, 1890. And he gave me the names and citations of two very, very obscure German journals, of which only two institutions in the United States even had a copy of these things. And sure enough, he was absolutely right, I had missed those two papers. But it really struck me, the value of having an expert like Paul Silva to double-check my work. It's not that it invalidated my work, but it made my work more complete. And so, again as part of the back-and-forth process of getting a paper published, I went and got copies of those papers, translated them from German, incorporated that material into my paper, and then re-submitted it. And it sailed through. So the ultimate story is my paper eventually got published, after once having to retract it before it got published, and also having been humbled by the peer review process, by a true expert in the field. But in both cases it worked out better for me, and for the paper.
MC: There seem to be a few lessons from that.
MF: Yeah, absolutely. And again it goes back to an earlier question: what prompts people to do this? And I would say it's our commitment to the integrity of the system, which again - I don't know if that's widely known or recognized by the general public. I often tell people that the most critical people of scientists are other scientists. We really do pay attention to detail, we really do critique each other's work. Not because we're not supportive of each other, but because we believe so highly in the integrity of the system that we recognize that those rare occasions where fraudulent papers do get published, where results get hyped that are not accurate, that are not correct, really damage all of us. And so we're very vigilant about trying to make sure that that doesn't happen.
And it also gave me a great sense for paying attention to the science that has come before us. There's an old saying that one sees far by standing on the shoulders of giants, but it really struck home to me in reading these papers from the 19th century just how good my predecessors had been, how carefully they had done their work. And although they didn't have microscopes or equipment or techniques that I was the beneficiary of in the 1980s, they did amazing things. They saw amazing things, and for the most part they were very, very accurate. They described things that they couldn't quite understand, and I came along a century later with new techniques and was better able to explain what they were seeing. So, I would say, science is this ongoing collaboration, not just with the peers you're working with today, but with all those scientists that have contributed to the field in generations past. I would have made Dujardin a co-author if I could have, but...
MC: Let's talk about some of the face-to-face feedback experiences for a scientist.
MF: These mostly occur in two phases. One is, of course, during the professional training that one goes through as one moves both through the undergraduate process, graduate school process, post-doctoral process, and even as a junior faculty member, and it never really stops. The other place that you get that kind of feedback is at major meetings. And it's one of the things I feel very strongly about, is trying to make opportunities for graduate students early on in their developmental process to get that kind of feedback.
MC: These are the conferences that take place around the world?
MF: The conferences, right. And people might say, well, what's the value of that? Why can't you just send them the paper, and they can read your paper? And the reason for that is that many of the ideas that are put forward at these conferences are ones that are in development, are ones that are not finished products, and ones that can - the value of which can be greatly enhanced by the feedback that you get, again, from face-to-face experts. And again, going back to one of my earliest memories as a graduate student, my very first poster presentation at a big national meeting took place in San Antonio at the cell biology meetings, and I had a poster that I was very proud of, and it was based on work that was largely done by a husband and wife team, [Ada] Olins and [Donald E.] Olins. And I was at my poster, and this woman came up, and showed interest in my research and was asking me questions about it, and I was carefully explaining to her the data, and I looked down at her name tag, and it was Dr. Olins! I was like, okay, so the one person in the world who knows a hundred times more about this subject is standing right next to me offering me feedback. That doesn't happen from just submitting papers anonymously, and for the most part I would say that it's one of the most constructive things that happens at these meetings.
The other thing is that a lot of good science is done over coffee, and just sitting around and just exploring ideas with your colleagues is something that can really only best be done, even today, by gathering together in a single place and spending a whole week thinking about your discipline. And so there's a real value that comes from that type of feedback. Later on, when you get invited for seminars and go around and meet other colleagues at other institutions, certainly that's another opportunity. But for the most part the role that regional, national, and international meetings play in the development of science is still a critical one.
MC: What advice would you give to someone who has just submitted a paper or a grant proposal?
MF: The advice most mentors would give is, prepare for rejection. Rejection is the norm, not the exception. And, for the most part, if delivered properly, that rejection will ultimately result in a better product. That's the first lesson. The second lesson is try not to get discouraged. Things have really changed in terms of the overall success rate, not of getting papers published, but of getting grants funded. This is perhaps the most troubling aspect that's occurred in my career in terms of how you fund research, how you keep young scientists engaged in the process when, quite frankly, the prospects for success are becoming smaller and smaller. And how universities face this in the 21st century is going to be very telling, because I don't, quite honestly, see a large advocacy group for going back to the way things were, even though in my opinion they worked quite well, and the United States and our economy benefited tremendously from this investment in basic research.
MC: And now, in your career, where you've seen the process from a number of different vantage points, what kind of advice would you give to future mentors?
MF: For future mentors I would say the best advice would be to set reasonable expectations, to recognize that the world is changing, to recognize that the challenges faced by your graduate students and your post-docs and your junior colleagues are different from the ones that you faced coming up. The other thing I would say is to focus on broad training, and that the pathways to success, the careers that are going to be available for individuals, are not as unidimensional, are not as clear-cut as they have been in the past. And, I think, in order for our trainees to really be successful, we have to be mindful of that. How does that actually then get implemented? Well, if I have a post-doc or a graduate student whose research is really what I want them to be doing forty hours of the week, fifty hours of the week, and I don't want them spending any time teaching, I don't want them spending any time working with undergraduates training them, I think I'm doing them, actually, a professional disservice. I think that kind of broader experience is an important aspect that in fact may turn out to be very, very helpful.
MC: Feedback is a production of Ideas for Creative Exploration, an interdisciplinary initiative for advanced research in the arts at the University of Georgia. I'm Mark Callahan. This podcast was produced with the assistance of Fernando Deddos and featured music by The Noisettes.
Recorded on November 20, 2014.
Transcription by Scott Eggert