Over the past few years, the computing-research community has been conducting a public conversation about its publication culture. Much of that conversation has taken place in the pages of Communications (see http://cra.org/scholarlypub/). The underlying issue is that while computing research has been widely successful in developing fundamental results and insights, having a deep impact on life and society, and influencing almost all scholarly fields, its publication culture has developed certain anomalies that are not conducive to the future success of the field. A major anomaly is the field's reliance on conferences as the chief vehicle for scholarly publication.
While the discussion of the computing-research publication culture has led to general recognition that the "system is suboptimal," developing consensus on how the system should be changed has proven exceedingly hard. A key reason for this difficulty is that the publication culture not only establishes norms for how research results should be published, it also creates expectations about how researchers should be evaluated. These publication norms and research-evaluation expectations are complementary and mutually reinforcing. It is difficult to tell junior researchers to change their publication habits if those habits have been optimized to improve their prospects of being hired and promoted.
The Computing Research Association (CRA) has now addressed this issue head-on in its new Best Practice Memo, "Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion," by Batya Friedman and Fred B. Schneider (see http://cra.org/resources/bp-memos/). This memo may be a game changer. By advising research organizations to focus on quality and impact, the memo aims at changing the incentive system and, consequently, at changing behavior.
The key observation underlying the memo is that we have slid down the slippery path of using quantity as a proxy for quality. When I completed my doctorate (a long time ago), I was able to list four publications on my CV. Today, it is not uncommon to see fresh Ph.D.'s with 20 or even 30 publications. In the 1980s, serving on a single program committee per year was a respectable sign of professional activity. Today, researchers feel that unless they serve on at least five, or even 10, program committees per year, they will be considered professionally inactive. The reality is that evaluating quality and impact is difficult, while "counting beans" is easy. But bean counting leads to inflation: if 10 papers are better than five, then surely 15 papers are better than 10!
This scholarly inflation has been quite detrimental to computing research. While paradigm-changing research is highly celebrated, normal scientific progress proceeds mainly via the careful accumulation of facts, theories, techniques, and methods. The memo argues that the field benefits when researchers carefully build on each other's work, via discussions of methods, comparison with related work, inclusion of supporting material, and the like. But the inflationary pressure to publish more and more encourages speed and brevity rather than careful scholarship. Indeed, academic folklore has invented the term LPU, for "least publishable unit," suggesting that optimizing one's bibliography for quantity rather than quality has become common practice.
To cut the Gordian knot of mutually reinforcing norms and expectations, the memo advises hiring units to focus on quality and impact and to pay little attention to numbers. For junior researchers, hiring decisions should be based not on the number of their publications, but on the quality of their top one or two publications. For tenure candidates, decisions should be based on the quality and impact of their top three to five publications.
Focusing on quality rather than quantity should apply to other areas as well. We should not be impressed by large research grants, but ask what the actual yield of the funded projects has been. We should ignore the h-index, whose many flaws have been widely discussed, and use human judgment to evaluate quality and impact. And, of course, we should pay no heed to institutional rankings, which effectively let newspapers establish our value system.
Changing culture, including norms and expectations, is exceedingly difficult, but the CRA memo is a very promising first step. As a second step, I suggest a statement signed by leading computing-research organizations promising to adopt the memo as the basis for their own hiring and promotion practices. Such a statement would send a strong signal to the computing-research community that change is under way!
Follow me on Facebook, Google+, and Twitter.
Moshe Y. Vardi, EDITOR-IN-CHIEF
The question is: after all this inflationary pressure, are we still in a good position to judge quality? Or, as long as quantity and metric-based assessment are favored by high-level management (not to mention the buzz around big data and the like), will their proponents eventually win the debate? Isn't this something like "bad currency drives out good currency"?
While we are all in favor of high-quality research publications, the dilemma is that hiring, tenure, and promotion committees consist of senior faculty whose definitions of quality may not be sympathetic to the topics and methods used by younger researchers, thereby slowing innovation in our fast-moving field. By balancing subjective impressions with objective data on download counts, social-media discussion, and eventually citation counts, young researchers can demonstrate the value of their work to skeptical senior faculty.
"We should not be impressed by large research grants, but ask what the actual yield of the funded projects has been." Two thumbs up for this sentence (among others). In an era in which -- at least in Europe -- careers are evaluated mostly on the amount of money one brings in, it was about time a very authoritative scientist raised this issue.
My two cents on the issue: "A major anomaly is the field's reliance on conferences as the chief vehicle for scholarly publication."
IMHO the main problem is that most CS journals -- with some notable exceptions -- have reviewing times that amount to geological eras relative to the pace at which CS evolves (I have personally waited 18 months for a first review). As a result, journals are often no longer seen by computer scientists as a vehicle for making their results visible, but only as a cumbersome and incredibly slow way of improving their own CVs.