Rigor, Influence, and Prestige in Academic Publishing

Peter J. Rich and Richard E. West

Editor’s Note: The following is an abridgment of the prepublication version of an article published in Innovative Higher Education. The only additional change made to the text for this book has been to replace the word impact with influence when talking about the three-tiered framework for evaluating research quality. This reflects more recent thinking from Dr. Rich and myself that influence is a more appropriate word to further downplay the role of formal impact factor statistics.

The citation for the full published paper is as follows:
West, R. E., & Rich, P. J. (2012). Rigor, impact and prestige: A proposed framework for evaluating scholarly publications. Innovative Higher Education, 37(5), 359–371.

A follow-up article discussed how emerging technologies could facilitate collecting better data according to this Rigor, Impact (Influence), Prestige framework.
Rich, P. J., & West, R. E. (2012). New technologies, new approaches to evaluating academic productivity. Educational Technology, 52(6), 10–14.

We argue that high-quality publication outlets demonstrate three characteristics. First, they are rigorous, i.e., discerning, critical, and selective in their evaluations of scholarship. Second, they have influence on others in that they are read, cited, and used. Third, by being prestigious, they are well known to other scholars and practitioners, increasing the prestige of the authors they publish and bringing more light and attention to their work and their institutions. These three criteria—rigor, influence, and prestige—have the potential to create a more holistic assessment of the value of a body of scholarly work.

Rigor

High-quality journals are rigorous, meaning they are more critical in their reviews, are more discerning about what they will accept and publish, and apply higher standards for judging quality research than other journals. They question all aspects of an academic study, including theoretical foundations, participant sampling, instrumentation, data collection, data analysis, conclusion viability, and social impact. They make decisions about the quality of research on its own merits, i.e., through blind review by distinguished and experienced peers and editors. Being published in a rigorous journal lends credibility and acceptance to the research because it indicates that the author(s) have successfully persuaded expert scholars of the merits of the article.

When evaluating the rigor of a journal, authors often consider the acceptance rate a key indicator. However, judgments based solely on acceptance rates must be made with care because journals calculate their rates differently. Additionally, a lower-tier journal may receive lower-quality manuscripts and accept very few of them, resulting in a low acceptance rate but still poor-quality publications. Despite these issues, the journal’s acceptance rate may be documented as one measure of rigor. Other indicators of rigor might include a policy of double-blind peer review, the number of reviewers, and the expertise and skill of those reviewers and the editorial board, who determine how discerning, rigorous, and selective the journal will be.

Editors are of primary importance, as they resolve contradictory reviews and make the final determinations of scholarship quality.

Many indicators of rigor are already documented and ought to be considered when evaluating the quality of a publication outlet. For example, acceptance rates, review policies, and the number of reviewers may be found on the journal’s website or through bibliographic sources such as Cabell’s Directories (http://www.cabells.com/). It is much harder to document the rigor of the reviewers and editors, and this is ultimately a subjective interpretation. Like all subjective decisions, the best method of verification is to seek the opinions of other qualified scholars in the field to confirm or challenge your own.

In collecting evidence of the rigor of a publication outlet, we believe the following questions might be useful:

  • How does the acceptance rate compare with other journals in this specific discipline?
  • How is the acceptance rate calculated, if known?
  • What type of peer review is used? Is it editorial, blind, or double-blind? How many reviewers are used to make decisions?
  • What is known about the quality of the reviewers and editorial board? Are they recognizable to other experts in the field and known for their insights into the research? How rigorous would outside experts believe these reviewers and editors to be?

Influence

Influence refers to how extensively individual manuscripts and publications are referenced by other publications and how much they contribute to the scholarly progress of a discipline. In this article, we refer only to influence on research and theory development, not on actual practice. Undoubtedly, influence on practitioners is an important quality of good scholarship, as it could be argued that true impact is felt only at the practitioner level. However, we do not address practitioner impact because this framework is focused on criteria for evaluating academic research and theory publications. Another framework could be developed to guide the evaluation of how much influence an academic has on actual practice, with different evidence being presented and analyzed, but that is beyond the scope of this article.

In evaluating the academic influence of a publication, in addition to the Institute for Scientific Information (ISI) Impact Factor, authors might also review the citation statistics provided by Scopus, SORTI Esteem or Q scores (i.e., rankings of journals within specific disciplines), as well as Eigenfactor, Immediacy, h-index, and Cited Half-life scores, which are other indicators of influence based on statistics that attempt to avoid some of the bias in the traditional Impact Factor. Because these metrics, available through ISI, Scopus, Publish or Perish (Harzing, 2011), or Google Scholar Citations, are affected by how extensively a journal is indexed in particular databases, it is important to triangulate influence statistics from multiple venues. For example, while ISI has been reported to index only 26% of the educational articles indexed by ERIC (Corby, 2001), we have found that Google Scholar typically indexes most major educational journals, including those not indexed in ISI. In addition, Google Scholar indexes non-academic publications and handbooks, which are often valuable but not indexed in ISI or Scopus. Thus we believe that Publish or Perish, which calculates citations from Google Scholar, is often more meaningful and accurate in its influence ratings for our discipline. This may not be the case for every discipline: because the major citation databases were originally developed to provide a picture of citation metrics in the hard sciences, fields such as chemistry and physics seem to be better indexed by Thomson-ISI.
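
To make two of these statistics concrete, the following sketch computes the h-index and cites-per-paper figures of the kind Publish or Perish reports, given a list of per-paper citation counts. This is a minimal illustration of the standard definitions, not the tool’s actual implementation, and the sample citation counts are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def cites_per_paper(citations):
    """Mean number of citations across all indexed papers."""
    return sum(citations) / len(citations) if citations else 0.0


# Hypothetical per-paper citation counts for one journal
journal_citations = [112, 64, 40, 33, 18, 9, 4, 2, 0]
print(h_index(journal_citations))                    # -> 6
print(round(cites_per_paper(journal_citations), 2))  # -> 31.33
```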

Additionally, a journal’s circulation, its publisher’s effectiveness and reach, or the availability of the journal on the Internet indicates its potential for influence (although potential may not be realized). Emerging social networks such as Mendeley (http://mendeley.com) and Academia.edu provide statistics that indicate how often individual manuscripts are searched for or saved to other scholars’ citation databases. Analytic data from social networks, search engines, and publisher downloading statistics could provide an interesting estimate of how much a publication or author is read or sought out by others.

Some non-peer-reviewed outlets have greater influence than those that are peer reviewed. For example, publication in a widely read and cited practitioner outlet can have high influence. In addition, a Publish or Perish search reveals that some books are cited more heavily in Google Scholar than articles in many top journals. Thus, while peer review is a prime indicator of the rigor of a journal, non-peer-reviewed outlets may still show high influence, indicating they have value. This also shows the need to triangulate findings from all three criteria.

In collecting evidence of the influence of a publication outlet, we believe the following questions might be useful:

  • Is the publication indexed in ISI or Scopus? If so, what is the impact rating (ISI) or citation count, h-index, and SCImago Journal Ranking (Scopus)?
  • What are the impact ratings according to Publish or Perish? Here we believe it is useful to use the same time window as that used by ISI or Scopus. For example, if you typically use the 5-year ISI Impact Factor, it would be wise to limit your Publish or Perish search criteria to the last five years to retrieve comparable statistics (see the sketch following this list).
  • What is the open-access policy of the publication outlet? Outlets that embrace open-access delivery have the potential to have more influence, as the articles are more easily found through Internet search engines. However, the open-access nature of a publication outlet is only an indicator that it has potential for greater influence, not that it has necessarily achieved this influence.
  • What is the circulation of the publication outlet? This is also only an indicator of the potential for influence, as many journals are packaged and sold as bundles to libraries, increasing circulation but not necessarily influence. However, greater circulation does indicate the potential for higher viewership and greater influence.
  • Is there any indication that the publication has influence on other scholars? For example, is the book widely adopted as a text for university courses? Is there evidence that the journal is frequently used to influence policy or other research?
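
As an illustration of the time-window point in the second question above, the sketch below restricts a set of per-paper citation records to the most recent five years before computing cites per paper. The record format and numbers are hypothetical; Publish or Perish applies such filters through its own search options.

```python
from datetime import date

# Hypothetical records for one journal: (year_published, total_citations)
records = [
    (2006, 40), (2008, 31), (2009, 18),
    (2010, 12), (2011, 7), (2012, 2),
]

def windowed_cites_per_paper(records, years=5, today=None):
    """Cites per paper, counting only papers published within the window."""
    current_year = (today or date.today()).year
    recent = [cites for (year, cites) in records if current_year - year < years]
    return sum(recent) / len(recent) if recent else 0.0

# Restricting the window makes the figure comparable to a 5-year impact factor.
print(windowed_cites_per_paper(records, years=5, today=date(2012, 6, 1)))  # -> 14.0
```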

Prestige

Prestige is a qualitative judgment about the respect a scholar receives for publishing in a particular outlet. Because it is more qualitative, it is more difficult to evaluate in a promotion dossier or grant application, and assessing it is perhaps largely a theoretical exercise in which scholars honestly question the perceived prestige of a journal in which they are considering publishing. A possible indication of the prestige of a journal is whether other researchers recognize the journal when asked and whether their intuitive perception is that the journal is of high quality. For example, in the overall field of education, publishing in the Review of Educational Research or the Review of Higher Education is highly regarded because these are prestigious journals, sponsored by major professional organizations, and well known among educational scholars from all disciplines.

More quantifiable and objective measures of prestige might come from rigorous surveys of scholars in a discipline to gauge their perception of a publication outlet. As an example, several studies have surveyed researchers in educational technology about the publications they recognize, read, and respect (e.g., Holcomb, Bray, & Dorr, 2003; Orey, Jones, & Branch, 2010; Ritzhaupt, Sessums, & Johnson, 2011). These studies provide valuable information on the relative prestige of a publication outlet. Other indicators of prestige may be whether the publication outlet is officially sponsored by a large national or international professional organization, whether the publisher is reputable, and whether the editor and editorial board are well known and respected.

Often prestige alone is used to evaluate the quality of a journal, but this can be faulty because journals rise and fall in relative quality and because prestige is so subjective. Many journals that were highly prestigious 10–20 years ago might still be well known even though their rigor and influence have fallen, and new journals that are not yet well known may be publishing high-quality research. Prestige, then, can be only one indicator of the quality of a journal, to be considered in relation to the other indicators.

In collecting evidence of the prestige of a publication outlet, we believe the following questions might be useful:

  • Are there any published studies investigating the popularity or respectability of publications in this field? If so, is this specific publication outlet listed?
  • How recognizable is the publication outlet to other respected scholars? What is their opinion of its importance?
  • Is the publication published by a well-known publisher? Sponsored by a major professional organization?
  • How well known and respected is the editorial board to other scholars in the field?

Applying the Criteria

In making and then defending our own decisions about where to publish our work, we have attempted to apply these criteria qualitatively, using the metrics and data to inform an inductive decision based on evidence from all three categories. Those outside our field can more easily understand our choices because we can justify them with data about the relative rigor, influence, and prestige of a particular publication outlet in comparison with other outlets in the discipline. This framework has also been helpful within our School of Education, which houses multiple departments, where we often need to explain to each other the relative importance of different publication outlets within our specific disciplines. As we sought a framework that would encompass all of the scholarship being conducted within the School, the principles of rigor, influence, and prestige have proven flexible enough to provide a common language that all departments could use, even though the specific pieces of evidence important in each discipline were unique and nuanced.

The following are a few examples of how these criteria could be applied in describing a variety of publication outlets. Using publications in our own field, we demonstrate how this framework might be used (see Table 1). We have masked the names of the journals to focus our discussion on the framework and evaluation criteria, not the ranking of specific journals.

Table 1. Application of the proposed framework to publications in the field of educational technology

| Publication | Rigor | Influence | Prestige |
| --- | --- | --- | --- |
| #1 | 8% acceptance rate; peer-reviewed | Cites/paper 35.83; h-index = 87 (PoP); impact factor 1.183 (ISI) | Flagship research journal of main professional organization; #1 most prestigious journal in the field (Ritzhaupt et al., 2011) |
| #2 | 15–20% acceptance rate; editorially reviewed | Cites/paper 19.89; h-index = 63 (PoP) | Published in and respected by well-known researchers; one of the top 3 most read and implemented publications (Holcomb et al., 2003) |
| #3 | 25% acceptance rate; peer-reviewed | Cites/paper 3.3; h-index = 22 (PoP) | Widely read (Holcomb et al., 2003) |
| #4 | 66% acceptance rate; peer-reviewed | Cites/paper 9.71; h-index = 9 (PoP) | Less well-known journal |
| #5 | Open call; peer-reviewed by established leaders in the field | Cites/paper 34.55; h-index = 33 (PoP) | Used in graduate courses and as a reference for researchers; official handbook for main professional organization |

Decisions on publications such as those represented by #1 and #4 are fairly straightforward. We can see from the table that Publication #1 scores high in all three categories; as such, we would consider it a top-tier venue for publication. Indeed, we would be hard-pressed to find a scholar in our field who would argue with this evaluation. On the other end of the spectrum, Publication #4 scores relatively poorly in each category, leading to our interpretation of it as a lower-tier outlet for publication.

The difficulty may come in scoring publications #2, #3, and #5. Publication #2 appears fairly rigorous, but manuscripts are reviewed only by the editor. In relation to its peers, however, this journal has strong citation numbers. It is often left out of measures of prestige because of its lack of blind peer review (Ritzhaupt et al., 2011), yet leaders in the field regularly use it as a venue for publishing new ideas and theories, and consequently it is one of the most read publications in our field (Holcomb et al., 2003). Taken individually, each of the measures we used to rate this publication could be problematic for an external review panel unfamiliar with our field. Taken together, we might rate rigor as mediocre, influence as high, and prestige as high, resulting in an upper mid-tier publication.

Publication #3 paints a different picture. It has a respectably stringent acceptance rate, but the number of times each article is cited in Google Scholar is low. This may be because the publication is viewed as a practitioner journal within our field, and practitioners are more likely to apply the theories than to cite them. Also, in addition to regular research articles, this journal publishes many non-research articles and columns geared toward informing the members of our professional association. These shorter pieces are indexed in Google Scholar and likely bring down the overall ratio of citations per paper. Finally, this particular journal enjoys high prestige, as demonstrated in a survey of important journals in the field, where it ranked in the top 10 overall. Combining these criteria, our qualitative judgment would be to rate this as a lower mid-tier publication.

Finally, Publication #5 presents an interesting case. It is actually a handbook in the field. As such, it lacks key indicators often used by those outside the field to interpret its worth (e.g., an ISI impact factor and an acceptance rate). Yet it is edited by a renowned group of scholars, and the number of times each chapter is cited in Google Scholar is nearly as high as the average number of citations per article in our highly regarded Publication #1, demonstrating the high influence of this handbook. It also enjoys great prestige in the field and is used by both novice and experienced scholars. As such, we would rate this as a top-tier publication.
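
For readers who keep this kind of evidence in machine-readable form, the following sketch shows one hypothetical way a row of Table 1 might be recorded so that all three criteria remain visible side by side. The field names and structure are our own invention for illustration; nothing here computes a composite score, in keeping with the framework’s warning against collapsing indicators into a single metric.

```python
from dataclasses import dataclass

@dataclass
class OutletEvaluation:
    """Evidence for one outlet under the rigor/influence/prestige framework."""
    name: str
    rigor: dict       # e.g., acceptance rate, type of review
    influence: dict   # e.g., cites/paper, h-index, impact factor
    prestige: list    # qualitative notes and survey findings

publication_1 = OutletEvaluation(
    name="Publication #1",
    rigor={"acceptance_rate": 0.08, "review": "blind peer review"},
    influence={"cites_per_paper": 35.83, "h_index": 87, "isi_impact_factor": 1.183},
    prestige=[
        "Flagship research journal of main professional organization",
        "Ranked most prestigious in the field (Ritzhaupt et al., 2011)",
    ],
)

# Each criterion is reported separately; no composite score is computed.
for criterion in ("rigor", "influence", "prestige"):
    print(criterion, "->", getattr(publication_1, criterion))
```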

Conclusions

We emphasize that these ideas constitute a proposed theoretical framework for how scholars could make, and justify to those from other disciplines, decisions about where they choose to publish their research. In practice, scholars would still need to engage various sources of data and make sound, well-reasoned arguments for the quality of their publication choices. Even though final judgments about journal quality remain subjective, the framework responds to several of the needs we identified in current efforts to evaluate the academic quality of publication venues. It is flexible enough to allow for multiple and varied sources of data within the categories of rigor, influence, and prestige; as such, it allows for the timely inclusion of new metrics as novel ways of measuring academic quality emerge or evolve. The inclusion of multiple indicators allows the framework to be applied to different disciplines. Finally, the framework cannot be used while depending on a single metric as the sole indicator of quality, which may help scholars avoid that dangerous trap. We do not advocate combining the many indicators into a single metric, as that would mask the diverse ways in which a publication contributes to quality scholarship. We also emphasize that this framework provides a common language that can benefit scholars in justifying their publication decisions and assist promotion committees in knowing what questions to ask about a candidate’s publication record. Instead of simply asking what a journal’s impact factor is, we hope that committees would seek or request information on the rigor, influence, and prestige of a candidate’s publication record, leading to a more holistic and accurate assessment.

We welcome discussion about whether these three criteria are the most useful and accurate in evaluating educational technology publication outlets or whether additional criteria might be added to the framework. Engaging in this discussion is critical. If we cannot clearly articulate the criteria for determining the quality of our publication outlets, then others (e.g., promotion committees and funding agencies) will draw their own conclusions using metrics and criteria that may be less useful or even inapplicable to our disciplines. Also, we emphasize that these criteria should be applied flexibly, qualitatively, and intelligently in making decisions about scholarship quality. We do not recommend using these criteria uncritically to generate a ranking of journals that “count” and “do not count,” since all of these data points can be skewed, manipulated, or changed from year to year. Still, by intelligently triangulating multiple data points, we can make more holistic judgments on the quality of publication outlets and share a terminology for discussing our publication decisions.

Application Exercises

  • What role did WWII play in the development of instructional design?
  • Find an academic journal and use the framework from this chapter to assess its rigor, influence, and prestige. Based on its merits, would you consider the journal you have found to be a top-tier journal? Explain.

References

Corby, K. (2001). Method or madness? Educational research and citation prestige. Portal: Libraries and the Academy, 1(3), 279–288. doi:10.1353/pla.2001.0040

Harzing, A. (2011). Publish or Perish, version 3.1.4004. Available at http://www.harzing.com/pop.htm

Holcomb, T. L., Bray, K. E., & Dorr, D. L. (2003). Publications in educational/instructional technology: Perceived values of educational technology professionals. Educational Technology, 43(5), 53–57.

Orey, M., Jones, S. A., & Branch, R. M. (2010). Educational media and technology yearbook. Vol. 35 (illustrated ed.). New York, NY: Springer.

Ritzhaupt, A. D., Sessums, C., & Johnson, M. (2011, November). Where should educational technologists publish? An examination of journals within the field. Paper presented at the meeting of the Association for Educational Communications and Technology, Jacksonville, FL.


Dr. Richard E. West is an assistant professor in the Department of Instructional Psychology & Technology at Brigham Young University, where he has taught since receiving his PhD in research and evaluation methodologies from the University of Georgia. His research primarily centers on learning communities, assessment, communities of innovation, and improving online learning. He is also heavily involved in the development and administration of micro-credential “badges.”

 

Dr. Peter J. Rich is an associate professor in the Department of Instructional Psychology and Technology at Brigham Young University. His interests include helping children to learn how to program, understanding how people learn, and developing instructional materials. Dr. Rich received his PhD in instructional technology from the University of Georgia.

 


License

Rigor, Influence, and Prestige in Academic Publishing Copyright © 2018 by Peter J. Rich and Richard E. West. All Rights Reserved.
