Work Package 9: Peer Review


Description of work (Technical Annex of WP9)

[Conceptual map: WP9 at a glance, with panels for Description, Methods, Results, and Implications]


Deliverables

Deliverable D9.1 Specifications of model, indicators and representations (M8)

Abstract: This report outlines plans for the SISOB study on peer review. It describes the goals of the study, the hypotheses the study intends to test, the underlying conceptual model, the independent and dependent variables, the data sources on which the study will be based, and the tools that will be used during the analysis. An appendix provides a first draft of plans to protect the privacy of personal data. These plans will be subject to expert review prior to data collection and analysis.

Deliverable D9.2 Characterization and evaluation of specific author-reviewer networks (M18)

Abstract: We describe a methodology for evaluating possible biases in peer review systems and apply it to a database from FrontiersIn, a large open-access publisher. The study failed to find certain “traditional” forms of bias (e.g. gender bias) but provides indications of previously unreported forms of paradoxical bias (e.g. bias of female reviewers against female authors). The methods still require improvement, in particular in the data cleaning procedures and the statistical methodology. Our next study will address some of these issues, helping to create a general-purpose tool for the evaluation of peer review systems.
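
As an illustration of what such a bias test can look like, here is a minimal sketch in Python, assuming a hypothetical table of review decisions labelled with author and reviewer gender; the column names and data are invented for the example and are not drawn from the FrontiersIn database.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical review records; genders and decisions are invented for the example.
reviews = pd.DataFrame({
    "author_gender":   ["f", "m", "f", "m", "m", "f", "m", "f"],
    "reviewer_gender": ["m", "m", "f", "f", "m", "m", "f", "f"],
    "decision":        ["accept", "accept", "reject", "accept",
                        "reject", "reject", "accept", "accept"],
})

# Contingency table: review decisions broken down by author gender.
table = pd.crosstab(reviews["author_gender"], reviews["decision"])

# Chi-squared test of independence: a small p-value would indicate that
# decisions and author gender are not independent, i.e. a candidate bias signal.
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

A real analysis would additionally control for confounders (field, seniority, institution) and test interactions such as reviewer gender crossed with author gender, which is where a paradoxical bias of the kind mentioned above would show up.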

Deliverable D9.3 Study of enhanced evaluation (M30)

Abstract: We describe a methodology to detect biases in peer review systems. Our first study considers biases linked to the personal identities of authors, reviewers, and their institutions; the second considers reviewers’ and authors’ positions in their scientific communities. We pilot-test our methodologies on two databases of review results: one from an open-access publishing house with an open review process, and one from a set of computer science conferences that used a classical review process. We are currently implementing our methods in a community-accessible workbench. Two appendices describe the requirements and the implementation.
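
The second study’s notion of “position in the scientific community” can be operationalized in many ways; the sketch below uses simple degree centrality in a hypothetical author-reviewer network (the names, scores, and centrality measure are illustrative assumptions, not the project’s actual indicators).

```python
import networkx as nx
from collections import defaultdict

# Hypothetical review events: (reviewer, author, score). Purely illustrative.
reviews = [
    ("r1", "a1", 4), ("r1", "a2", 2), ("r1", "a3", 3),
    ("r2", "a1", 5), ("r2", "a3", 3),
    ("r3", "a2", 4), ("r3", "a3", 1),
]

# Bipartite author-reviewer graph; prefixes keep the two roles distinct.
G = nx.Graph()
for reviewer, author, score in reviews:
    G.add_edge(f"rev:{reviewer}", f"auth:{author}", score=score)

# Degree centrality as a crude proxy for an actor's position in the community.
centrality = nx.degree_centrality(G)

# Relate each author's network position to the mean score they receive.
scores = defaultdict(list)
for _, author, score in reviews:
    scores[author].append(score)
for author, s in sorted(scores.items()):
    print(f"author {author}: centrality {centrality['auth:' + author]:.2f}, "
          f"mean score {sum(s) / len(s):.1f}")
```

Correlating such position measures with review outcomes is one way to ask whether well-connected authors are treated differently from peripheral ones.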