Journal metrics are an important and controversial mechanism for assessing the quality of scholarly output. They can be applied at the level of the journal, the institution, or the individual, but in each case the intent is the same: to quantify the influence the subject has within its scholarly discipline. As this is a fairly subjective goal, it comes as little surprise that there are many different methods for calculating such a figure, each taking a different set of factors into account. Most of these metrics are generated by counting the number of citations a work receives. Newer metrics take into account not only the number of citations to a work, but also their quality: metrics such as Eigenfactor and SCImago Journal Rank (SJR) use an algorithm similar to Google's PageRank to give greater weight to citations from higher-ranked journals. At the author level, the most talked-about figure at present is the h-index, which relates the number of articles an author has published to the number of times those articles have been cited: an author has index h if h of their papers have each been cited at least h times.
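The h-index can be computed directly from a list of an author's citation counts. The sketch below is a minimal illustration of the definition above, not code from any bibliometrics tool or database:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# An author with five papers cited 10, 8, 5, 4, and 3 times has
# four papers with at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the h-index rewards a sustained body of well-cited work: a single highly cited paper raises it by at most one.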

However, many of the above metrics are built around a publishing model most often used by full-time researchers in the sciences: one in which the article is the standard unit of research, and citation links between articles can be easily traced. Here at York, the reality of our faculty's publishing is often quite different. In fields such as the humanities or new media, faculty may publish in media other than the standard scholarly journal, and books, blogs, and other platforms have no accurate method of recording citations in place. The scholarly monograph has long been the yardstick for measuring academic success in the humanities, and its citation patterns have been shown to differ significantly from those of journal articles [1]. A recent study found significantly lower rates of citation for monographs in the fields of religion, history, and economics than in the sciences, and found the citation half-life, the period during which a work is actively cited in the literature, to be shorter as well [2]. These differences in publishing and citation patterns must be taken into account whenever evaluative metrics are used for purposes such as tenure review or grant approval.