Chasing unicorns

How do we define productivity?
Peter Kennelly
Nov. 1, 2018

In May 2017, the National Institutes of Health announced a plan to cap the number of awards that an individual investigator could receive in order to free up funds to invest in young investigators. The science-funding agency justified this decision in part by citing published analyses indicating that productivity per dollar awarded began to decline as investigators accumulated multiple concurrent R01 grants.

The James H. Shannon Building, also known as Building One, on the National Institutes of Health campus in Bethesda, Md. Courtesy of Lydia Polimeni/NIH

The plans were quickly retracted (although the NIH's neuroscience institute announced similar plans this spring), but the proposal highlighted the continuing desire of many institutional administrators to devise metrics for quantifying the productivity of academic scientists.

When looking at proposed measures of productivity, a universal theme becomes evident: they focus exclusively on either the number of publications generated or some derivative thereof, such as impact factor or citation index. No effort is made to assess, nor is any explicit value attached to, an investigator's contributions to the training of the next generation of scientists.

Counting only co-authored papers is like rewarding someone who produces a given quantity of lumber by clear-cutting a patch of forest to the same degree as someone who produces the identical quantity by carefully selecting the trees to be removed and replanting afterward. If we as a community are to inform our decision-making processes with data pertaining not just to the immediate quality but also the long-term sustainability of the biomedical and molecular life science research enterprise, then education and training must be incorporated into any discussion concerning metrics of productivity.

One may counter by saying that a causal, linear relationship exists between the amount of publishable research generated by a student or other trainee and the quality of the training received while resident in the laboratory where the research was performed. However, while the output of publishable data from individual students and postdoctoral trainees may vary greatly, in general, the capacity to generate publishable data increases with the amount of experience and training accumulated over time, reflecting all of a trainee's educational and training experiences, not just those gained in the current laboratory. The value we implicitly place on prior training becomes evident whenever a principal investigator makes decisions about how to staff a laboratory or what expectations to place on new and current group members. In both hiring and admissions, experience as an undergraduate research student, summer intern, graduate student and so forth carries weight across academia, government and industry. Yet no accepted mechanism exists for recognizing and crediting undergraduate and graduate research mentors for their contributions to their students' long-term success.

How can we remedy this? One way would be to give prior research mentors a share of the credit for their trainees' subsequent publications. While an admittedly imperfect measure of a mentor's education and training contributions, allowing former mentors to list themselves as "shadow" co-authors in progress reports and biographical sketches would allow a single metric to express both their immediate and long-term contributions to the research enterprise. It would also acknowledge that our focus on publications as the ultimate currency for determining value likely will persist for many years to come. Under this model, progress reports and biographical sketches would include lists of papers on which a scientist-educator participated directly as well as those that benefited from their training activities.

Should papers by former trainees be counted the same as papers where the investigator is an explicit co-author? Should a paper published two years after moving on from a mentor’s tutelage count the same as one published a decade later? Current models focused on research productivity already struggle with weighing how much a given publication reflects the contributions of each of its authors: Should a three-author paper on which a researcher is listed as second author be weighted equally with a second authorship on a six-author paper? This has not, however, kept the scientific community from using paper counts in some form or another as the default metric for assessing productivity, progress or impact.

Given that current systems are both imperfect and persistent, how could (or should) we weigh papers by trainees? First, I would propose a time limit. Only papers authored by a former trainee during the next stage in their training would be eligible. So if one mentored an undergraduate research student for at least one academic year, as documented by transcripts, only papers containing work performed as a graduate student would be eligible. Similarly, a graduate student’s former major professor would be able to cite work done during the student’s (first) postdoctoral training position.

How do we translate this into a number that can be added to traditional research publications to give a total paper count? An undergraduate research mentor could be credited for, say, a tenth of a publication for every first-author paper their trainee produces in graduate school and perhaps 5 percent of second-author publications. Given the more intensive nature of graduate training, perhaps these figures could be raised to 20 percent and 10 percent, respectively, for a major professor. In this scenario, when a principal investigator fills out their progress report for a three-year grant award, they would be able to cite not just the three papers on which they were a co-author but also the two first- and three second-author papers published by their former graduate students as postdoctoral trainees during that same period and the three second-author papers published by their former undergraduate research students. So instead of a paper count of 3, their count would be 3.0 + (2 x 0.2) + (3 x 0.1) + (3 x 0.05) = 3.85, nearly 30 percent higher than someone who had no former students publish.
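To make the arithmetic concrete, here is a minimal sketch in Python of how such a weighted count might be computed. The weights and the helper function `weighted_paper_count` are illustrative assumptions that simply encode the hypothetical percentages suggested above; they are not an established metric.

```python
# Illustrative sketch only: the weights below are the hypothetical percentages
# suggested in the text, not an established or endorsed metric.

# Credit a mentor receives per paper published by a former trainee, keyed by
# (stage at which the mentoring occurred, trainee's authorship position).
TRAINEE_CREDIT = {
    ("undergraduate", "first_author"): 0.10,   # 10% of a former undergrad's first-author papers
    ("undergraduate", "second_author"): 0.05,  # 5% of their second-author papers
    ("graduate", "first_author"): 0.20,        # 20% of a former grad student's first-author papers
    ("graduate", "second_author"): 0.10,       # 10% of their second-author papers
}

def weighted_paper_count(own_papers, trainee_papers):
    """Combine a mentor's own co-authored papers with fractional credit
    for eligible papers published by former trainees.

    own_papers: number of papers on which the mentor is an explicit co-author.
    trainee_papers: iterable of (stage, authorship, count) tuples.
    """
    total = float(own_papers)
    for stage, authorship, count in trainee_papers:
        total += TRAINEE_CREDIT[(stage, authorship)] * count
    return total

# The worked example from the text: 3 of the mentor's own papers, plus
# 2 first- and 3 second-author papers by former graduate students and
# 3 second-author papers by former undergraduate research students.
print(weighted_paper_count(3, [
    ("graduate", "first_author", 2),
    ("graduate", "second_author", 3),
    ("undergraduate", "second_author", 3),
]))  # -> 3.85
```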

Does this formula give too much or too little credit for training contributions? Readers can and undoubtedly will raise numerous objections to my approach. However, what should not be in dispute is that in these times of tight funding, regulatory micromanagement and administrative obsession with accountability, it is more important than ever to focus on the sustainability of the research enterprise when making strategic decisions such as where to allocate resources. To do so, we must more explicitly and generously reward the educational and training activities that develop the intellectual infrastructure upon which "productivity" relies.

Whatever the merits of the “credit for future publications” model described above, I hope it will provoke reflection and discussion of how we assess the success of scientist–educators.

Peter Kennelly

Peter Kennelly is a professor of biochemistry at the Virginia Polytechnic Institute and State University.
