History and Foundations of Information Science Ser.: Bibliometrics and Research Evaluation : Uses and Abuses by Yves Gingras (2016, Hardcover)


About this product

Product Identifiers

Publisher: MIT Press
ISBN-10: 026203512X
ISBN-13: 9780262035125
eBay Product ID (ePID): 20038288367

Product Key Features

Number of Pages: 136
Publication Name: Bibliometrics and Research Evaluation: Uses and Abuses
Language: English
Publication Year: 2016
Subject: Library & Information Science / General, Evaluation & Assessment, Research, Higher
Type: Not Available
Subject Area: Reference, Language Arts & Disciplines, Education
Author: Yves Gingras
Series: History and Foundations of Information Science Ser.
Format: Hardcover

Dimensions

Item Height: 0.6 in
Item Weight: 12.4 oz
Item Length: 9.2 in
Item Width: 6.2 in

Additional Product Features

LCCN: 2016-014090
Reviews: "...this is a great first read for anyone new to bibliometrics, and a great resource to anyone established in the field." --The Bibliomagician
Dewey Edition: 23
Illustrated: Yes
Dewey Decimal: 020.727
Intended Audience: Trade
Synopsis: Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Governments and research administrators want to evaluate everything--teachers, professors, training programs, universities--using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics--aggregate data on publications and citations--has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review, which has been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data are manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
LC Classification Number: Q180.55.E9G5613 2016
