[Corpora-List] corpora-list: publishing lists of accepted and rejected papers

Laurence Anthony anthony0122 at gmail.com
Tue Oct 18 12:30:25 UTC 2011


On Tue, Oct 18, 2011 at 6:48 PM, Krishnamurthy, Ramesh <
r.krishnamurthy at aston.ac.uk> wrote:

>  > “Isn't the danger of such a practice completely obvious?”
>
> I’m sorry, Laurence, but I still fail to see the obvious danger. Could you
> please explain?
>
> Yes, I would argue for total transparency. We are demanding it more and
> more of our politicians and business people. So why not academics as well?
> Their views are often sought by the media, so they should also be open to
> scrutiny?
>

Ramesh,

Some of the obvious dangers of releasing lists of rejected papers have
already been stated by others, but here is a quick list off the top of my
head. Note the inclusion of 'may' in every statement below.
1) More people may decide not to submit quality papers for fear of being rejected.
2) People who submit papers that are then rejected may lose funding,
position, and/or status at their institutions.
3) Institutions may start using rejection lists as some kind of criterion
for promotion/funding/status.
4) People who develop a history of rejected papers may find that this counts
against them when they attempt to publish better papers.
5) More people intent on the "dissemination of irrelevant findings,
unwarranted claims, unacceptable interpretations, and personal views" (see:
http://en.wikipedia.org/wiki/Peer_review ) may submit papers simply to get
their ideas into some public list (whatever its rank).
6) Papers that have been rejected by various journals may be more easily
rejected by other journals simply based on the paper's rejection history.
...

We could probably continue adding to this list of 'possible' dangers ad
infinitum. In a discussion of this kind, it is fine to consider possible
alternatives to the double-blind review system. However, as far as I know,
your proposed alternative has not been tested, so we really don't know if it
is a better system. What we would need to do is test it.

What exactly is your hypothesis? Is it something like this:
Hypothesis: If all rejected papers submitted to a journal are made public,
then reviewers will be less biased in their reviews.
Hypothesis: If all rejected papers submitted to a journal are made public,
then the quality of published manuscripts submitted to the journal will
increase.

If so, and assuming you could get some publisher/conference to agree to the
test, how exactly would you measure 'less bias' or 'higher quality'? Also,
what event or situation would falsify your hypothesis?
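Just to make the point concrete, here is one purely hypothetical way such a
test might be operationalised. Suppose we took reviewer disagreement as a
rough proxy for bias, collected reviewer scores for papers handled under the
current double-blind system and under the proposed open-list system, and ran
a simple permutation test on the difference. The sketch below is in Python;
the sample sizes and scores are all invented for illustration, not real data.

    import random

    # Hypothetical data: per-paper disagreement between two reviewers' scores
    # (absolute difference on a 1-5 scale). All numbers here are invented
    # purely for illustration.
    closed_review = [2, 1, 3, 0, 2, 1, 2, 3, 1, 2]  # current double-blind system
    open_list = [1, 1, 2, 0, 1, 2, 1, 1, 0, 2]      # proposed open-list system

    def mean(xs):
        return sum(xs) / len(xs)

    # Observed effect: how much lower is disagreement under the open-list system?
    observed = mean(closed_review) - mean(open_list)

    # Permutation test: shuffle the pooled scores many times and see how often
    # a difference at least this large arises by chance alone.
    pooled = closed_review + open_list
    n = len(closed_review)
    random.seed(0)
    hits = 0
    trials = 10000
    for _ in range(trials):
        random.shuffle(pooled)
        if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
            hits += 1

    print("observed difference in mean disagreement:", round(observed, 2))
    print("one-sided permutation p-value:", hits / trials)
    # A small p-value would count as evidence for the hypothesis; a difference
    # in the wrong direction, or a large p-value, would count against it.

Of course, the hard part is not the statistics but agreeing in advance on what
counts as 'bias' or 'quality', and on what result would falsify the hypothesis.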

Without testing your hypothesis, there is no way of knowing whether it really
is better or worse than the current system. Without a test, all we can do as a
field is use the collective experience we have developed over several decades
to make a judgement. Based on our current collective experience, we have not
opted for such a system. Instead, we have adopted and maintained the
double-blind reviewing system, and it appears to be working in some sense,
considering the huge advances that have emerged within the field of corpus
linguistics. But double-blind reviewing can almost certainly be improved on.
The question is how.

Laurence.