S. Harnad to J. Smith, 5/2/99

On Sun, 2 May 1999, J.W.T.Smith <J.W.T.Smith@ukc.ac.uk> made some comments and pointed to a proposal, but did not post the proposal itself, so I have retrieved it and appended a critique below. But first some replies to Smith's comments:

A separation of the quality control function from the distribution function is what almost EVERY one of the proposals under discussion here recommends. We will get to the specifics of the "Deconstructed Journal" Proposal in a moment. I note here only the apparent inconsistency in the passage above: If we need a system that separates quality control from distribution, why is it a PROBLEM for LANL that it does precisely that: It provides the distribution function independently of the quality control function?

> The LANL model leaves it dependent on journals to validate its content. The very
> journals you condemn for limiting access to knowledge.

Indeed LANL does so, and that is what SEPARATION entails. And I do not condemn journals, I condemn Subscription/Site-License/Pay-Per-View (S/L/P) as the mechanism of cost-recovery -- and of course paper as the means of distribution -- because both restrict access. If LANL provides the access online for free, journals need only provide the quality control, and that can be paid for from page-charges, with no access barriers. Journals continue to exist, but their only function reduces to quality control (and its certification).

The LANL model is based on public SELF-ARCHIVING. No physicists are complaining that their work is being blocked from LANL. Moreover, this is a VIRTUAL world. Central archives can be integrated with local, institution-based ones, via gateways like NCSTRL, and both local and global self-archiving are strongly encouraged, for safety and redundancy. So what is this problem about "centralisation"? What is needed is an Archive that LOOKS like a single, integrated, central resource on the desk of the reader. How it is actually patched together to be safe, redundant, robust, interoperable and upgradeable for posterity is a technical matter that will be handled by the relevant technical experts:

Davis, J. R. and Lagoze, C. (1999) "NCSTRL: Design and Deployment of a Globally Distributed Digital Library," to appear in Journal of the American Society for Information Science (JASIS)
<http://www2.cs.cornell.edu/lagoze/papers/NCSTRL-IEEE3.doc>

So central vs. distributed archives is a pseudo-issue: Nothing of substance hangs on it.

As to deciding who may self-archive: Anyone can. But getting the journal quality-control tag is another story: That has to be earned by successfully passing through peer review.

> If everything is safe in a "safely distributed, redundant and mirrored storage
> architecture" why do we need an "LANL-style Archive"?

This is a false opposition. It doesn't matter whether authors self-archive locally, and this is then drawn together by a Gateway like NCSTRL, or they self-archive in a global archive. Preferably they should do both (or, better, the intelligent software that draws together the Virtual Archive should do it for them, like making automatic backups and taking them home for safe-keeping).

The word "overlay" came from Ginsparg, as far as I know. The overlay requires its own proprietary sector of the Virtual Archive, for here it is the Journal that officially certifies papers as refereed. An author can self-archive a paper and self-tag it as "refereed" ("by Journal X"), and that's good enough for most purposes, but in an official overlay, the reader can be sure. (In your own paper, at the URL above, you warn the reader about possible minor discrepancies between that version, your own, self-archived one, and the one that appeared in the Journal: The overlay would contain a version certified as the one that appeared in the journal.)

We'll get to your "Subject Focal Points" Proposal in a moment.

We'll get to your Deconstructed Journal in a moment. For now note that the separation of functions is intentional, and LANL is only meant to provide the distribution half, deliberately.

Your inferences about the logical path to page charges from the separation of functions are correct, but they were already made explicitly earlier in this discussion. See, for example, in the AmSci Forum, the thread on "The Logic of Page Charges to Free the Journal Literature"

<http://amsci-forum.amsci.org/archives/september-forum.html>

As to this logic leading inexorably to your DJ Model: It would have been useful if you had said in your posting just exactly what your DJ Model was. But as you provided the URL, I read it, and my comments follow below. Unfortunately, the Model seems to be incoherent, making the same incorrect assumptions that have been discussed in the AmSci Forum in the thread on "Independent scientific publication - Why have journals at all?".

<http://amsci-forum.amsci.org/archives/september-forum.html>

In a nutshell, you assume that peer review can be implemented willy-nilly; you do not appear to realize that competent referees are a scarce resource; and (forgive me if this inference is incorrect) you sound as if you have no prior experience at all with implementing peer review. But the greatest liability of your proposal is that, like the components of the Scholars Forum Proposal that drew the criticism that initiated this discussion, your recommendations are hypothetical and untested variants on peer review, to which there is no reason whatsoever to yoke the fate of a free refereed literature: Let us free the current refereed literature and then worry about reforming refereeing.

In <http://libservb.ukc.ac.uk/library/papers/jwts/d-journal.htm> you wrote:

> my new DJ model is a true 'paradigm shift' as described by Thomas Kuhn...

So the first part of this new "paradigm shift" is that "groups/editorial-boards" will be compiling links. Let me, as reader, reserve the right to pass on this one, and search the literature for myself. But what literature? Before the revolution, I'd like the Refereed literature I already know and trust: Who is doing that refereeing in the DJ, and how?

> * Quality control (Content)…

First, refereeing is not, and never has been, just a matter of giving a "stamp of approval." Referees evaluate drafts and make recommendations; the editor "filters" those recommendations and indicates to the author what needs to be done -- if the material is potentially acceptable -- to make it acceptable, including possible further rounds of refereeing of the revised drafts. This complex, adjudicated interaction is in no way equivalent to giving or withholding a "stamp of approval."

Forget about paying referees to do this for you: There isn't enough money in the world to make it worth their while. They will do it, as they do it now, for free, if the material is pertinent to their interests and expertise, and they are asked to do so by the reputable editor of a reputable journal, knowing that the author will be answerable to the editor and the journal.

Now if your SFPs are reputable journals, then we have here simply a re-description of classical peer review, with some new labels (SFP, DJ). If SFPs are something else, then we have a pig in a poke -- and no reason to entrust the all-important task of freeing the literature to anything that depends on them at this time.

> Harnad & Hemus (1997) argue strongly for a model where the author or institute pay
> for publication and their arguments are relevant here but they do not clearly separate
> evaluation from 'making available'.

Our proposal was that peer review be paid for out of page charges (provided for the author by his institution out of only a small portion of the annual windfall S/L/P savings that such up-front payments make possible, thereby freeing the online literature for all). The underlying logic of this is 100% dependent on precisely the separation of quality control from online archiving.

You are very generous with referees' services. Multiple submission (whether parallel or serial) is already the bane of the current overloaded referee system. You propose to overload it still further.

Articles rejected by one journal are certainly submitted to another (and just about everything is eventually published somewhere), but surely once a paper is accepted ONCE by a journal, no further refereeing is called for. (Open Peer Commentary is another story, and one of the Online Medium's enormous strengths, but that is neither here nor there insofar as the goal of freeing the literature is concerned; and Peer Commentary is not to be confused with Peer Review.)

Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292.
<http://citd.scar.utoronto.ca/EPub/talks/Harnad_Snider.html>

A lot of writing improvements (including, soon, with the help of software, markup) can and should be offloaded onto the author, but copy-editing is still part of the quality control implemented by the journal, to ITS standards, not by the author, to his.

Free Online Archiving of the whole corpus is the solution, along with powerful new search tools; no need for SFPs or DJs...

Again, see the AmSci thread on "Independent scientific publication - Why have journals at all?" Peer review is no more a 5-point grading system than it is a pass/fail system. It is an arbitrated, expert-based feedback system for upgrading drafts. Naive notions like this invariably come from the armchair, based on, at most, some individual experience as author and referee; no one who has any experience with what it really takes to implement peer review for a nontrivial population of manuscripts could take proposals like this seriously.

<http://amsci-forum.amsci.org/archives/september-forum.html>

> There are three main problem areas preventing easy adoption of this model. These
> are: community acceptance, funding, and technical.

I am afraid there are even more reasons than that, chief among them being that the "model" is not based on any empirical data.

Stevan Harnad                   harnad@cogsci.soton.ac.uk
Professor of Cognitive Science  harnad@princeton.edu
Department of Electronics and   phone: +44 1703 592-582