Measuring Scientific Publishing

This year marks the three hundred and fiftieth anniversary of the first scientific publication, namely, the Philosophical Transactions of the Royal Society in 1665. Scientific publication is the preferred form of communication among scientists. The dating of a publication enables a scientist to claim credit for priority in discovery or postulation, and thus be recognized as ‘primus inter pares’ or first among equals; scientists desire recognition from their peers rather than the public. Einstein, one of the few scientists recognized by the public, once said “My pencil is wiser than I”, by which he meant that a finding is known only when it is written down. And when we publish our findings, we allow the scientific community to endorse, dispute or even disprove our claims. That is the way science grows.

In recent years we have witnessed exponential growth in the number of scientific journals and conferences. The latest phenomenon is Open Access publishing, where the author rather than the reader pays the costs of publishing in a journal. Faced with this bewildering array of journals, scientists may well be confused as to where to publish. One way of ascertaining the quality of a journal is its impact factor, which is defined for a particular year.

Publication metrics have now moved on to measuring individual scientists' performance as well. The best known metric is the h-index, so named after its proponent J.E. Hirsch. The h-index is defined, in his own words, as follows (Hirsch, 2005): "I propose the index h, defined as the number of papers with citation number >h, as a useful index to characterize the scientific output of a researcher." Google Scholar is a well-known search engine that allows authors to create profiles that display this index. It also displays the i10-index, which is simply the number of papers that have received 10 or more citations. Since Hirsch's paper, many others have tried to fine-tune the h-index to better reflect a scientist's performance. For example, Egghe (2006) defined the g-index as follows: "A set of papers has a g-index g if g is the highest rank such that the top g papers have, together, at least g² citations"; this gives credit to highly-cited publications, which the h-index does not. There are more recent attempts to account for career length (Abt, 2012) and the effect of co-authors (Harzing, 2008).
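The three indices just described can all be computed directly from a researcher's per-paper citation counts. The following is a minimal sketch in Python; the function names and the sample citation record are illustrative, not taken from any of the cited papers.

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Highest rank g such that the top g papers together have at least g*g citations (Egghe, 2006)."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

def i10_index(citations):
    """Number of papers with 10 or more citations (as displayed by Google Scholar)."""
    return sum(1 for c in citations if c >= 10)

# A hypothetical citation record, one entry per paper:
record = [10, 8, 5, 4, 3]
print(h_index(record), g_index(record), i10_index(record))  # 4 5 1
```

Note that the g-index of this record (5) exceeds its h-index (4): the 10 citations of the top paper are credited toward the cumulative total, which is exactly the extra weight for highly-cited work that Egghe intended.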
Can we 'equalize' the h-index across different disciplines? Czarnecki et al. (2013) offer the following factors by which a scientist's 'raw' h-index should be multiplied in order to get a more equal comparison, depending on the scientist's discipline: Agricultural sciences, 1.27; Chemistry, 0.92; Clinical medicine, 0.76; Computer science, 1.75; Economics & Business, 1.32; Engineering, 1.70; Mathematics, 1.83; Physics, 1.00; Social Sciences (general), 1.60. Only a selection has been presented above; scientists in fields with factors less than unity will need to obtain higher 'raw' h-indices to be treated on par with those in fields with factors greater than unity. Hirsch (2005) suggests an h-index of 12 as a prerequisite for tenure (associate professorship) and one of 18 for a full professorship in major research universities (with the discipline of Physics in mind).
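The adjustment itself is a single multiplication. A small sketch, using the Czarnecki et al. (2013) factors quoted above (the function name is illustrative):

```python
# Discipline factors from Czarnecki et al. (2013), as quoted in the text.
FACTORS = {
    "Agricultural sciences": 1.27,
    "Chemistry": 0.92,
    "Clinical medicine": 0.76,
    "Computer science": 1.75,
    "Economics & Business": 1.32,
    "Engineering": 1.70,
    "Mathematics": 1.83,
    "Physics": 1.00,
    "Social Sciences (general)": 1.60,
}

def equalized_h(raw_h, discipline):
    """Scale a raw h-index by its discipline factor for cross-field comparison."""
    return raw_h * FACTORS[discipline]

# A raw h-index of 12 in Computer science compares like 21 in Physics terms:
print(equalized_h(12, "Computer science"))  # 21.0
```

Since Physics has a factor of exactly 1.00, Hirsch's suggested thresholds of 12 and 18 can be read as targets on this equalized scale.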
The h-index values will of course depend on the number of papers 'admitted' into the collection, both from the researcher's list of publications and from the papers that cite them (Harzing, 2010). Table 1 gives a comparison of six Sri Lankan academics (generally 'good' performers in mid or late career) and their publication statistics as recorded by Scopus and Google Scholar. Scopus appears to be more selective in its choice of sources, while Google Scholar casts a wider net, especially in some disciplines.

Should conferences be admitted into the corpus for the calculation of publication indices? Google Scholar certainly admits them, and even the more restrictive Scopus is gradually increasing its conference content (currently around 15% of the total), especially in areas such as Engineering and Computing & Information Science, where conference publications are known to contribute approximately 45% and 62% of content respectively; this figure is 43% for Architecture (Butler, 2007).
While the conversion of quality to metrics is a somewhat reductionist approach, it is likely that even Sri Lankan scientists will be judged by where and how they publish (Atukorale, 2010). In any case, most of the above metrics do measure scientific impact in the sense of how useful one's work is to other scientists in the common quest for knowledge; and such participation in the global scientific enterprise is certainly every scientist's responsibility. However, these metrics do not necessarily measure the economic or other practical impact of research, which may be even more important in developing countries (Rajapakse, 2009). Such impacts are difficult to measure, but attempts have been made (Imperial College Business School, 2015).
Apart from this, scientists who choose to tackle challenging intellectual problems may not be prolific publishers. Their potentially seminal contributions may not be cited much either, especially at the start. It is important, however, for us to cultivate this breed of scientists, because they embody the very spirit of science: that of making bold conjectures even when these are out of fashion. How else can we find those who will stand in the line of Aristarchus, Copernicus and Huygens?