Structured Reviewing - Administration and reporting

The moderator administers every step of the review process: what is sent, when, and to whom? Who responded on what date? Who needed a reminder, and is a reviewer's participation mandatory or optional? These parameters can be gathered without the moderator having to bother others for them.
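As a rough illustration, the sketch below (in Python) shows one way such an administration record could be kept. Every name and field is an assumption for the sake of the example, not a prescribed format:

    # Minimal sketch of a per-reviewer administration record.
    # All field names are illustrative assumptions, not a standard.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class ReviewAssignment:
        reviewer: str                      # who received the document
        mandatory: bool                    # mandatory or optional participation
        sent_on: date                      # when the document was sent
        responded_on: Optional[date] = None            # response date, if any
        reminders: list = field(default_factory=list)  # dates reminders went out

        @property
        def outstanding(self) -> bool:
            """True while the reviewer still owes a response."""
            return self.responded_on is None

From a set of such records the moderator can see at a glance who still owes a response, without asking anyone.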

Some meta-data should be gathered as well; this must be done very carefully, since it affects the reviewers and authors by asking them for extra information. A good example of such ‘extra’ meta-data is the amount of time spent on the review and rework. This value provides a lot of useful information (a small computation sketch follows the list), such as:

  • the average review time needed per page;
  • time spent versus time saved;
  • the review effectiveness of each individual.
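As a sketch of how these metrics could be derived from the logged time data: the formulas below, and in particular the default of 4 hours saved per defect found early, are illustrative assumptions, not numbers from any actual review.

    # Illustrative metric calculations; the formulas and the 4-hours-saved
    # default are assumptions for this sketch, not standards.
    def review_metrics(pages: int, review_hours: float, defects_found: int,
                       hours_saved_per_defect: float = 4.0) -> dict:
        return {
            "hours_per_page": review_hours / pages,       # avg time per page
            "hours_spent": review_hours,                  # time spent...
            "hours_saved": defects_found * hours_saved_per_defect,  # ...vs saved
            "defects_per_hour": defects_found / review_hours,   # effectiveness
        }

    print(review_metrics(pages=20, review_hours=5.0, defects_found=12))
    # {'hours_per_page': 0.25, 'hours_spent': 5.0,
    #  'hours_saved': 48.0, 'defects_per_hour': 2.4}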

Although a lot can be logged in the administration (and most of it will be useful to the moderator in performing their tasks), not all of it should be used for reporting.

The rule of thumb is to report on product and process quality only.

Product quality is what a structured review process is initiated for, so that is what should be reported. The reported quality can be combined with some metrics about the process to give it some weight. One can imagine that “100% of the comments are reworked and approved by 100% of the reviewers” says nothing about product quality on its own: if there was just one reviewer with one comment, the idea of quality is rather different than if 8 reviewers made 25 comments.
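A small sketch can make this concrete: both reviews below report “100% reworked, approved by 100%”, yet the underlying counts give those percentages very different weight. The summary format is an assumption for this example.

    # Combining the reported quality with process metrics to give it weight.
    def report_line(comments: int, reviewers: int,
                    reworked: int, approvals: int) -> str:
        rework_pct = 100 * reworked / comments if comments else 100
        approval_pct = 100 * approvals / reviewers if reviewers else 100
        return (f"{rework_pct:.0f}% of {comments} comment(s) reworked, "
                f"approved by {approval_pct:.0f}% of {reviewers} reviewer(s)")

    print(report_line(comments=1, reviewers=1, reworked=1, approvals=1))
    print(report_line(comments=25, reviewers=8, reworked=25, approvals=8))
    # 100% of 1 comment(s) reworked, approved by 100% of 1 reviewer(s)
    # 100% of 25 comment(s) reworked, approved by 100% of 8 reviewer(s)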

In all cases, the reporting should not be traceable to individuals, to prevent it from becoming (or appearing to become) a performance assessment.