Blog on Mathematical Journals


Comments can be submitted by scrolling to the bottom of the blog page and typing into the "Comments" window. Blog articles can be submitted via email to journal.blog@mathunion.org.
The International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM) jointly constituted a Working Group to study whether and, if so, how both organizations should go forward with a ranking of mathematical journals. The Report of the Working Group is available here.

18.11.2011
16:05

BLOG on Mathematical Journals

Mathematical journals are an important and precious resource for our community. They embody, to some extent, our joint "intellectual property". They provide a way for mathematical researchers to distribute the results of their work. The peer reviewing process, when it works well, gives some guarantee to readers that the published papers are worthwhile and (most likely) correct.

Information technology is changing the journal publication landscape in many ways. Some changes are all for the better; for instance, the availability of electronic versions of a paper makes the content much more widely accessible. Other changes are more controversial, or even almost universally condemned. Calls have been made for professional societies to formulate official positions on some of these.

In order to assess the views of the international mathematical community on journal-related issues, IMU and ICIAM have created the BLOG on Mathematical Journals, that will be hosted by the IMU website.

 

An important issue that IMU and ICIAM want to address is JOURNAL RATING.

Publications in journals are a natural basis for the professional regard in which each of us is held. Different journals have different reputations and specializations; these play a role when we pick where to submit our papers.

The past few decades have seen the emergence of "indices" or "factors" that try to quantify this information, by tracking various quantitative statistics. This has resulted in "ratings" of journals, which are then used (and misused) in a variety of ways. Concerns about the methods by which these ratings were computed have led some mathematicians to call for a rating established by mathematicians themselves. In response to this, the General Assembly of the IMU passed in August 2010 the following resolution:

The General Assembly of the IMU asks the EC to create, in cooperation with ICIAM, a Working Group that is charged with considering whether or not a joint ICIAM/IMU method of ranking mathematical journals should be instituted, and what other possible options there may be for protecting against the inappropriate use of impact factors and similar manipulable indices for evaluating research.
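
For background: the "impact factor" referred to in the resolution is, in its standard two-year form, roughly the ratio

    IF(J, y) = [citations received in year y by items that journal J published in years y-1 and y-2] / [number of citable items J published in years y-1 and y-2],

a measure of short-term citation density only, which is one reason it transfers poorly across fields with very different citation habits.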

The joint Working Group created by the IMU and ICIAM to study this issue has completed its work. The report of the Working Group can be found here. A first public forum on the report, at a minisymposium at the ICIAM conference in July 2011, showed that the issue and the report evoked strong reactions in many different directions. In order to get a wider community response, it was decided to open a blog in which all mathematicians could contribute their views on the recommendations of the report.

We invite the mathematical community to provide their views on the journal rating issue, and on whether IMU and ICIAM should formulate their own rating. Views on how to establish and update this rating would also be welcome.

The blog will be monitored by a group of moderators appointed to this end by the IMU Executive Committee and the ICIAM Board; their names are given below. The moderators will typically limit their intervention to weeding out invective or libelous contributions, as well as contributions that are inappropriate or off topic. Contributors must identify themselves to the moderators, but can request their posting to be anonymous if they wish.

At the launch of this BLOG on Mathematical Journals, only one category has been created (see right column), namely that on Rating. In informal contacts, members of the IMU Executive Committee and the ICIAM Board have found that many mathematicians have concerns about the present practice of journal publishing in mathematics that are not limited to rating. If there is an area that comes up sufficiently often in the community's comments, the moderating group will create a new category that regroups such comments. Different categories can develop over time.

The Moderating Group (appointed by ICIAM and IMU jointly) consists of Doug Arnold, Carol Hutchins, Nalini Joshi, Peter Olver, Fabrice Planchon and Tao Tang, with Peter Olver as chair.

We invite you to submit your comments below!

Ingrid Daubechies, President, IMU
Barbara Lee Keyfitz, President, ICIAM

 

Terry Mills
18.11.2011
00:00
Member, Australian Mathematical Society

Don't judge a book by its cover. The main use of ranking journals seems to be to evaluate an article without reading it. Surely there are better things to do than spend time on ranking journals.

Daryl Daley
18.11.2011
00:00
The University of Melbourne and The Australian National University

I am disappointed in the IMU report concerning Ratings of Mathematics Journals.
In general it fails to address the key issues of the `content' and purpose of
journal ratings.  Essentially, agencies and other users of journal ratings use
these as surrogates for the `quality' of publications of individuals, and this
surrogate is in turn an indicator of the `quality' of the (work or other
characteristic of the) individual concerned.  In other words, an individual's
worth is judged by the company she or he keeps.

Next, what sort of measure is possible of a journal's `worth' or value?  I take
it as axiomatic that such a measure is continuous as opposed to discrete.
Further, while I also take it as axiomatic that it is multivariate rather
than univariate, I am prepared to allow that `worth' may be described by a
range of contributory factors amongst which may be a dominant principal
component when `worth' is subjected to suitable analysis.  This last
observation implies that different users may differ in (implicitly or
explicitly) choosing their weights of these various components to measure an
individual's `worth'.  It can become a matter of convenience to choose a
dominant principal component to measure such a quality (this occurs in
Australia in the construction of a Year 12 aggregate).

I return to the nature of an individual's `worth' as being continuous.
Any `discrete' version of such a measure is intrinsically an approximation,
and as such it incorporates imprecision that is conveniently described as
error. For example, if it is prescribed that measurements of `worth' must be integer-valued, and an assessor judges an individual to be
borderline between 2 and 3, there is no way that this judgement can be
reflected by that assessor; if a number of assessors reach the same
conclusion, and each randomly assigns a value of 2 or 3, then the judgement
may be reflected in the range of such measures. If however the reporting
scale is made ten times finer, then the use of 25 (or 24 or 23 or 26 or
27) will more readily reflect the judgement being sought and made.
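
A toy simulation (invented numbers, purely illustrative) makes the loss of information concrete:

    # Toy illustration (invented data): assessors who all judge `worth' to be 2.5
    # must report a whole number on a coarse scale, so each rounds up or down at random.
    # On a ten-times-finer scale the same judgement is reported almost exactly.
    import random

    random.seed(1)
    true_worth = 2.5                  # the assessors' common (continuous) judgement
    n_assessors = 100

    coarse = [random.choice([2, 3]) for _ in range(n_assessors)]            # forced integers
    fine = [round(true_worth * 10) + random.choice([-1, 0, 1])              # finer scale, small wobble
            for _ in range(n_assessors)]

    print(sum(coarse) / n_assessors)       # recovers 2.5 only on average; each report is off by 0.5
    print(sum(fine) / n_assessors / 10)    # each report is already within 0.1 of the judgement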



Tony Roberts
19.11.2011
01:23
Prof

Just because others undertake naive one-dimensional rankings, there is no reason for us to follow. The report says there is a need for a ranking, but the case (section 3) appears to rest purely on the fact that other bodies are already doing such naive, simplistic 1D rankings.

 

Instead, I strongly support the tenor of the comment in section 10: "The richness and diversity represented by the many mathematical journals is not captured" by any simplistic 1D ranking scheme. There are many kinds of journals: regional, national, focused, broad, review, rapid, new, technological, and so on. To try to place this spectrum onto a 1D scale is not only invidious but also harms innovation and development.

 

Instead of legitimizing the stupidity of a 1D scale, we should be encouraging diversity to cater for all our needs. Instead of falling into line with the naive, we should consistently argue against it by repeating unceasingly that our research is too diverse for any 1D scale.

 

Tony


Jean-Paul Allouche
19.11.2011
11:55
Directeur de Recherche CNRS, Paris

Though I fully understand the reasons for creating a homemade ranking, and though I fully agree with the opinions about the dangers and/or stupidities of all other ways of ranking, I am *strongly* against creating a ranking. My reasons are the evident ones: no really serious ranking is possible, and moreover any ranking is *certainly* going to be misused at some point. An intense campaign of continuous lobbying by mathematicians showing the absurdity of *any* ranking seems to me far better (after all, mathematicians are probably *the* people who can "prove" that absurdity).

Stefan Samko
20.11.2011
00:00
Professor Jubilado, Universidade do Algarve
Dear Colleagues,
thank you for the very important information and opinions in the report of the working group
on www.mathunion.org/publications/reports-recommendations.

In support of the attitude of the working group: all my experience, as well as that of many of my coauthors and colleagues, shows that the Impact Factor approach to the evaluation of our research in mathematics is indeed dangerous, because it allows administrations, panels and so on to judge formally while knowing nothing about the level of the research and its relevance.

Everybody knows this and many words can be written about it. Let me just call attention to the reason why many mathematicians continue to publish papers in journals with a low or not so high Impact Factor.

One reason is the following. When you do research in some new area, or perhaps a not very new one but an area that is developing very quickly and intensively at the moment, you always face strong competition. Nowadays there are always strong working groups, in Europe, the Americas, Asia, everywhere, doing parallel research in your area. Naturally, researchers are interested in quick publication of their results.

The general rule is: the higher a journal's Impact Factor, the longer your queue there (not in every journal, but in the main it is like this). On the other hand, when your paper, submitted, say, in 2007, appeared in the journal in 2010, while a paper of another researcher submitted, say, in 2008 appeared in 2009 or even in 2008, then in fact nobody looks at when your paper was submitted; the citation of priority usually goes by the year of publication (I know of exceptions only where the cited and citing authors were personally acquainted).

At least a decade ago, when the role of the Impact Factor grew, many mathematicians working in competitive areas nevertheless preferred to ignore this role, exactly for the above-mentioned reason.

With best regards and wishes of success for the working group,

Stefan Samko

Wolfgang Soergel
20.11.2011
21:22
Professor

I am extremely sceptical about a rating of mathematical journals organized by the IMU/ICIAM.

 

I can see no reason why this scheme of assigning a number from 1 to 4 to each journal should be misused less than other already existing simplistic schemes like the impact factor. I rather think that efforts to argue against these kinds of simplistic schemes would be greatly hampered by the fact that mathematicians themselves also organize such a rating.

At the same time, member societies of both IMU and ICIAM edit lots of journals, and it will be difficult to keep their interests out of the rating process, and even if this works, to convince the outside world, and even myself, that the proposed rating is not biased.

I am deeply convinced the IMU should stick as much as possible to supporting mathematics rather than evaluating it. If we start with evaluation, where do we stop? For example, there are already highly questionable quality measures of individuals out there, like the Hirsch index, and along the lines of the arguments in favour of journal evaluation I fear I could quickly cook up an argument for a rating of individual mathematicians organized by IMU/ICIAM as well...


John Harper
20.11.2011
21:50
Emeritus Prof

One of my problems with journal ranking is that applied mathematicians like me publish in many journals that are not mathematical ones, and those journals' rankings depend on their "core" constituency.

 

Another problem arose when an eminent overseas colleague said one of my best papers was in the proceedings of an NZ conference, I suspect because he didn't believe its theory until a student of his confirmed it by experiment. But those proceedings would have had a very low rating among mathematicians.

 

Who was it that said "Deans can count but can't read?"

Andrew Mathas
21.11.2011
03:10
Professor

I have trouble seeing any value in this exercise. As the report from the working party states clearly, bibliometric data provides a "poor proxy" for measuring the quality of research or of a journal.

 

Rather than attempting to create yet another imperfect index I think that the mathematical community would be better served if we put our efforts into overturning the current reliance by administrators and governments on these superficial metrics.

 

The best outcome that the IMU/ICIAM can hope for in creating their own journal ranking system is that this system will be widely adopted to JUDGE the quality of mathematics papers. Is this really what we want?

 

I think that the message that all of us, including the IMU and ICIAM, should be pushing is how rank all of these ranking systems are.

Richard Taylor
21.11.2011
03:58
Professor

I would like to associate myself with the compelling comments of Jean-Paul Allouche and Wolfgang Soergel. I can see several downsides to the IMU rating journals, and very little upside.

 

I am rather surprised that the IMU has taken this so far. Maybe instead the IMU could consider what if anything it can do about the unreasonable prices of some mathematics journals that are taking badly needed money from our universities?

 

 

Rimas Norvaisa
21.11.2011
07:58
principal researcher

An open and worldwide discussion on journal ratings is a huge step forward and I welcome it. It seems to me that before making the suggested rating (or in parallel) we should carry out an experiment by asking mathematicians to rank math journals, say to list the top 10 math journals. It may turn out that each of us disagrees even on what the best math journal is. That is, an experiment may show that quality is a (highly?) subjective matter, also depending on various objective circumstances such as the field of mathematics, nationality and many other things. If so, then the suggested 1D rating of all math journals may turn out to be a useless project.

Arieh Iserles
21.11.2011
10:40
Professor, University of Cambridge

All those criticising the principle of journal ranking are, with respect, missing the point. Journals _are_ being ranked and, whether we like it or not, these rankings are being used by university authorities and funding agencies. We can all raise our voices against it with IMU as the cheerleader, we can remain purer than the driven snow – the simple fact is that our colleagues will not be promoted, our grants will be downgraded in comparison with other subject areas, libraries will waste scarce funding on predatory journals good only at gaming the system, and mathematics as a discipline will be funded even worse than it is at the moment...

 

We all agree that Impact Factors, H-factors, eigenfactors etc. are meaningless and open to abuse. The alternative of doing nothing is akin to living in denial. This is precisely why I welcome the IMU–ICIAM initiative. The proposed system is not perfect but it is infinitely better than the alternative of doing nothing and allowing, by default, IF and other "bibliometric data" to damage the discipline.

 

Incidentally, I foresee one argument of the "IF lobby" and university bureaucrats against the proposed scheme: it is done by "authority", rather than by "objective" criteria like counting or weighing papers. This is of course complete rubbish: the entire mathematical publishing process which, at its best, works wonderfully well, is based on "authority": the opinions of anonymous referees and of editors. I see the proposed scheme as a logical extension of peer review from individual papers to journals.

David Pritchard
21.11.2011
20:15

I think reducing every journal to a single numerical score is a terrible idea. But if you provide lots of metrics then the idea is much more appealing. For example, a common database of publications per year, refereeing delay and publication delay, various citation rankings, etc. is much more useful and much less contentious.
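
As a rough sketch (field names purely illustrative), each entry in such a database could simply report several measurements side by side rather than collapsing them into one score:

    # Illustrative only: one possible shape for a multi-metric journal record,
    # reported side by side instead of being reduced to a single score.
    from dataclasses import dataclass

    @dataclass
    class JournalMetrics:
        name: str
        papers_per_year: float
        median_refereeing_delay_months: float   # submission -> decision
        median_publication_delay_months: float  # acceptance -> publication
        acceptance_rate: float                  # fraction of submissions accepted
        citations_per_paper: float              # one of several possible citation measures

    example = JournalMetrics("Hypothetical Journal of Mathematics", 120, 7.0, 10.0, 0.25, 1.8)
    print(example)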

C. R. E. Raja
22.11.2011
06:56

 

I agree with most of you that any kind of rating of individual journals is meaningless. As for colleagues who want some kind of ranking to satisfy grant/funding agencies, ICM should insist on our existing method of anonymous refereeing and peer review for evaluations by funding agencies. Just because many other branches are using absurd methods to evaluate research, I believe and strongly support that ICM should not get involved in such absurdities.

Benoît Kloeckner
22.11.2011
11:18

First, I would like to stress the importance of another problem with journals, already mentioned by Richard Taylor: their price and, perhaps more importantly in the long run, the economic model of commercial publishers (in particular bundling and pressure toward e-only). This, in my opinion, fully deserves a category in this blog.

Benoît Kloeckner
22.11.2011
11:36

Second, concerning the use of a ranking, there seems to be an important discrepancy between the proper uses as asserted in the working group's report, and the argument given by Arieh Iserles. My point is that I do not feel that a ranking is really needed for libraries or for deciding where to submit, for example.

 

I am in charge of advising our library's director about which journals to purchase, and our library is not so small (about one hundred paid subscriptions). I feel this is a job that can be done without a ranking. Moreover, the choices depend as much on the domains and sub-domains the researchers of our lab are interested in as on any intrinsic quality of the journals, whatever that means.

 

Concerning the choice of a journal to submit to: while this is not an easy task, I do not feel a ranking would be of much help. Too many criteria are involved, and in any case one is usually not able to rank one's own work so as to match a journal's ranking.

 

This leaves not so many uses for a ranking; whatever disclaimers are put in front of it, I truly think that it would be primarily used for evaluating individuals and departments. Such rankings are already set to be misused in this way by another university in Grenoble (France), where I work, and such (even informal) rankings are already misused in this way by us, mathematicians, as well as by the people who evaluate us (on this point I agree with Arieh Iserles). Maybe the core point is to decide whether we want to provide a proxy (probably better than the impact factor, but still not satisfactory) for the evaluation of research, or whether we do not and prefer to fight the urge for such a proxy. Note that we feel this urge just as our deans do, if I can believe my limited experience on hiring committees. But making a proxy available while pointing out that it should not be used in the way we know it probably will be does not seem a reasonable option to me.

Jinglai Li
22.11.2011
17:54

Just a minor comment: I think it is almost impossible, and quite unnecessary, to have a complete list of the tier 4 journals, not to mention that it could cause controversy. It would be better to list only tiers 1-3 and leave out all the journals that do not qualify. Just my two cents.

 

 

B. Rajeev
23.11.2011
00:00

Firstly, I thank the committee and ICM for initiating this discussion and for soliciting comments from individual mathematicians. It may be useful to view the recommendations of the committee from the point of view of some specific model of scientific activity, for greater clarity and focus.

Accordingly I venture to suggest a dynamical model of scientific activity (as distinct from a `spatial model' of science that incorporates various disciplines as geographic regions or territories). In this dynamical model, I view science (and mathematics) as comprising two dialectically interlinked sets of activities, viz. the creation of knowledge on the one hand, and the diffusion of knowledge on the other. It is, I believe, important that we keep these two activities distinct, and it is here that I would disagree with the recommendations of the committee. Publishing in journals is only one step in a cycle of activities that constitute the process of knowledge creation (or scientific research). How else to understand the virtual demise of a large number of otherwise decent articles after their publication in top journals? Perhaps, then, it would be wrong to give this stage of knowledge creation any objective validity (in the form of journal rankings and other ratings) beyond what the referees and the author have agreed on and beyond what the reader is willing to acknowledge.

From the point of view of the diffusion of knowledge, however, different forms and formats of publishing are quite welcome, although it is clearly desirable to declare in advance the procedure adopted before the publication of articles. It is also perhaps worth noting that journals are not free-floating entities, but are part of local scientific and technological research enterprises. More importantly, there are other linkages, particularly local linkages, with other disciplines and institutions, the presence (or absence) of which enhances (or diminishes) the value of a journal quite beyond what an expert may deem appropriate. As such, it may be unwise (if not unscientific) on the part of science administrators to ignore these local linkages and rely only on global rankings to make decisions.

Finally, it may be appropriate to point out here that the problems posed by the citation index and other such measures are not specific to mathematics (although we may be affected to a larger extent than other disciplines) but a more general problem of `scientific culture'. One only has to recall the various proposals for single numerical measures of subjects as complex as IQ, poverty and (more recently) the intellectual prowess of universities. Rather than jump into the fray ourselves, maybe we should just grin and bear it?

Gabriele Ricci
24.11.2011
00:00

I agree with most of the criticisms of the value of impact factors and of other methods of "objective" ranking. Yet an evaluation of the "objective" (formal) sins of mathematical journals might provide authors and readers with some useful information.

For instance, take the failure to provide authors with referee reports within a reasonable time, or even unexplained interruptions of correspondence. Clearly, knowing this helps an author. It would also help readers: such a journal will hardly publish surprising discoveries that defy common wisdom. Likely, it will only publish mere exercises about old notions.

In my experience, the refereeing of papers of the latter kind hardly ever suffered such failures (yet those papers did not make me proud). On the contrary, I angrily spent several years trying to publish good (surprising) papers: most journals were unable to referee them.

Therefore, though I am now retired, I would welcome any attempt by the IMU to collect and publish the formal sins of mathematical journals.

Yours
Gabriele Ricci
via G.B. Martini 12
I-20131 Milano, Italy

James Montaldi
25.11.2011
01:28
Reader

I basically agree with almost all points - whether pro or con! In principle it is a bad idea, but pragmatically we have a choice between doing nothing (and being subjected to others' metrics) or doing more or less what this proposal suggests. And if we come up with something less bad, perhaps the same approach will be adopted more widely.

 

I don't think Mathematics (or any single discipline) has enough standing with our Universities or funding agencies to say "it's not valid for us" - if we do we'll end up marginalizing ourselves. If we want to change the system we need to cooperate with other disciplines.

 

In the meantime, my concern with the proposal is its attempt to squeeze all journals into 4 classes. The problem is, as so often, with artificial borderlines. I'd be happier with a more continuous system, say a scale of 1 to 20, and even better with error bars. Most journals publish research of a range of qualities: we all know examples of weak research published in "good" journals.

 

J.R. Strooker
30.11.2011
00:00
Dear Committee,
To me it seems a mistake to draft a proposal on how bibliometric statistics should be used, rather than trying to convince authorities that such considerations should at most marginally influence choices. Such choices should in principle depend on the judgment of a mathematical peer group. Your proposal may lead to a kind of authorised procedure by which various bureaucracies will be only too pleased to proceed. The objections in section 10 of your report are milder than necessary, I should think.
Yours sincerely,
Jan R. Strooker
Utrecht University
 
Pascal Auscher
01.12.2011
23:36
Professor of Mathematics, Paris-Sud

I first want to give my experience as a former scientific director for mathematics at the Ministry of Research in France and as scientific adviser for mathematics at the AERES (Agence d'évaluation de la recherche et de l'enseignement supérieur, that is, the assessment agency for research and higher education) in France. Then I will give my opinion on this report.

 

I can tell you that we never used citation indices in mathematics and never felt the need for them. This agency organizes the assessment of research departments (not individuals). For each lab, it invites a committee of peers covering the fields of the department. This committee is in charge of assessing the department on the basis of documents delivered by the department (containing, among other things, scientific achievements, including publications, and scientific projects), of coming for a one- or two-day visit, and of writing a report. The report is made public at the end of the process on the website of the AERES. The mission letter to the committee does not contain any reference to the usage of citation indices.

 

This agency does not provide money; it is up to the deciders to fund the department. The peer-review process, however, constitutes an authoritative (and, within a given period, unique) assessment of the department that is used by all funders. Let me mention that a comparative ranking of the department among all other departments in the field is based on this report. These rankings are also controversial, but that is another story. At least such rankings are not based on numbers but are more qualitative (again, see the AERES website for the whole process and explanations).

 

Let me come to the reason why the AERES did not promote the usage of citation statistics in mathematics (and other fields like physics, chemistry, computer science, engineering). In the natural sciences the IF is not completely ruled out, because of its strong presence in these communities, but it is not imposed either. On the other hand, AERES tried a ranking of journals in the humanities: this has been a very controversial issue because of the tremendous diversity of the humanities. It should be mentioned that the humanities had no idea of the relative weights of their journals, although we do in mathematics.

 

It was not obvious at first that the AERES could go without using citation statistics. Being a publicly funded agency, it faced political pressure and needed arguments. The director at the time conducted a one-year seminar inviting many different actors for or against the usage of citation statistics.

The conclusions of these sessions were as clear as water (and I can tell you the IMU-ICIAM-IMS report was influential): citation statistics cannot be used for assessment at the scale of individuals (which is not in the mission of the AERES) or at the scale of a department (20 to 200 people in maths), because they do not include margins of error in their measurements. To be clear, they are not statistics in the etymological sense. They are just numbers without a meaning, or rather numbers with whatever meaning everyone wants to give them: thus they are not a reliable tool. At the very least, they could be used on large populations as evolving measurements between different dates (in other words, the rate of change has more meaning). A large enough scale would be the entire population of mathematicians in a country (for France 3500, for Australia roughly 500). This is not the scope of that agency.

 

Another argument against the usage of citation statistics is the human effort needed to make them operate as a trustworthy tool (if they can be called that).

 

Let me come to the proposal of the IMU-ICIAM on the ranking of journals. It is necessary that IMU and ICIAM keep a strong voice against rankings on these issues, because my belief is that this is the only way to make stupid usage of rankings a thing of the past in 10 or 20 years. So I am surprised by the action IMU-ICIAM is taking and by the conclusions of the report.

The proposed solution is clearly a "usine à gaz" (a "mess", in short): 16 months to reach a tentative list, not to mention the work needed to keep the list up to date, as Thierry Bouché said in a posted comment. Second, what is the point of having so many sub-fields? This is useless for the outside world (for a funding agency this is too many items) and a source of fights within the community. Finally, this is not a good way to fight against predatory journals; a good (better?) way is publicity that circulates to pinpoint such journals. The risk of "gaming" evoked by the report is not negligible.

 

I recall that the IMU once had a directory of active mathematicians. But what is an active mathematician, I was asked several times at the AERES. My answer was the definition of the IMU (applicable to more applied mathematicians as well if the publication mode is adapted): a person who publishes at least 2 articles in peer-reviewed international journals in a period of 4 years. Simple. Effective with my colleagues (from other fields). I know that this definition may need some revision, but any definition should remain simple. I have no proposal in the direction that IMU-ICIAM is taking on journal rankings and, in fact, think it is useless as proposed; I would rather consider some opposite conclusions. In any case, any output of this think tank should be simple, usable advice for mathematicians on duty as I was: already I can tell that this is not taking the right path either.

 

 

John Ball
03.12.2011
13:36
Professor, University of Oxford

I oppose IMU/ICIAM constructing a ranking list of journals. Some reasons are

1. It would send a signal at the highest international level that it is not what you publish, but where you publish it, that matters. In particular it would tell young people that they should behave in such a way as to maximize their career prospects by attempting to get their papers published in journals as high up the list as possible. But not everyone feels comfortable with such behaviour. Some may feel modest about their own accomplishments even though these may be better than those of other more pushy personalities.

2. The increasing number of mathematicians who publish in interdisciplinary journals could be adversely affected by the lack of corresponding rankings, and deterred from publishing in such journals, which increase the influence of mathematics in the outside world.

3. The implementation of a system to reliably rank journals would be difficult to sustain in the long term, and would involve a major effort by many mathematicians whose continuing enthusiasm to participate could not be taken for granted given the already heavy peer review load on researchers.

4. It would enable those evaluating the research of individuals to justify substituting journal location for peer review, since they could argue that peer review had already been used to construct the journal ranking (a point made to me by a member of the IMU/ICIAM Working Group).

5. The finances of IMU and ICIAM are not sufficiently robust to be able to defend a legal challenge by a large publisher to the rankings.

The best way to combat the unthinking use of impact factors and their abuse (as highlighted in the excellent work of Doug Arnold and Kris Fowler) is to continue to argue the case. There is evidence that this works. For example, the Australian Research Council has abandoned the use of its own journal rankings, while the UK Research Excellence Framework which will be used to evaluate university research in 2013, drew back after consultation from a metric-based assessment, to the extent that David Willetts, the UK Minister for Universities and Science, said in a recent speech (http://www.bis.gov.uk/news/speeches/david-willetts-gareth-roberts-science-policy-lecture-2011) that what counted as regards publications was “quality, quality, quality – not location, location, location”.


Bruce Rout
04.12.2011
02:08
Professional Researcher

It appears from the number and quality of comments that this issue is very important to the mathematical community. The comments indicate that the present system of peer review, as handled by professional journals, in certain cases prevents the publication of good work and promotes the publication of bad work, however those terms may be defined. An overview of the comments also shows that there is some confusion in distinguishing between academic authority and Truth itself. Mathematicians are rather interested in Truth: what is it? How can we detect it? Does it exist and does it matter?

In a mathematical model of policing done about a decade ago, it was found that integrity could not be maintained unless there was a direct link between authority and accountability. This relates to the problem of having mathematical journals whose integrity is in doubt: unless there is accountability for the activities of these journals, they can never have a sustainable position of integrity.

May I suggest a look at an alternative online archive known as viXra, which came about as a result of similar situations in the physics community. It abandons the attempt to have some arbitrary authority validate its articles; it simply relegates everything to tier four and the reader can decide for him- or herself. However, this detracts from assigning a level of credence to the articles and their content. A similar online journal/archive could be used in which the review process is left to the public or the mathematical community itself. Unreviewed articles are just that, unreviewed articles. Reviewed articles, with reviewers' comments and identities, become available to the scrutiny of all. Accountability is then obtained through transparency to the community at large.

Personally, I believe that anyone or any body, regardless of their reputation, walks on very dangerous ground should they claim to be an authority on Truth, regardless of the significance of the Truth involved.

Teresa Krick
06.12.2011
00:00

I would like to strongly support Richard Taylor's and Benoît Kloeckner's suggestion of starting a new category in this blog, namely on how to fight the outrageous rates publishing houses are charging the scientific community for a job the scientific community does for them from beginning to end.



I recently heard a talk by one of the heads of our (quite new) Ministry of Science and Technology in Argentina, who mentioned that the Ministry paid around 15 million US dollars this year (about 10% of its budget) to guarantee free access to digital scientific libraries for our public universities and research organizations.



 

Wolfgang Soergel
06.12.2011
17:10
Professor

What worries me most about this proposed journal rating scheme is the underlying goal of establishing a unique official opinion concerning each journal. I think that if the world organization of mathematicians does this, it is much more dangerous than if outside bodies venture their opinions.

In fact, this goal of establishing a unique official opinion remotely reminds me of the "Index Librorum Prohibitorum" abandoned by the Catholic Church in 1965.

Penny Davies
07.12.2011
10:04

I also oppose IMU/ICIAM constructing a ranking list of journals, for the reasons that others have given.

 

An additional reason is that it would inevitably be very damaging to good journals that fail to make it into the top category (e.g. in attracting excellent papers, in the willingness of people to serve as editors or referees for a "second class" journal, or in retaining library subscriptions).

 

If dishonest manipulation of impact factors or other journal malpractice is felt to be a serious problem in the mathematical sciences, then a simpler solution would be to produce one list of IMU-approved "reputable" journals that offenders would not be admitted to. Presumably even this could be open to legal challenge, but the numbers of potential litigants would be far lower.

Ivar Ekeland
08.12.2011
19:07
Professor Emeritus, Paris-Dauphine and UBC

I am very strongly against introducing an official rating of mathematical journals, because of my experience as a professor and researcher in economics.

 

Most economics departments and business schools I know use some ranking of journals. There are slight differences across institutions, but the top tier always consists of four or five journals (chosen among Econometrica, the Quarterly Journal of Economics, the Review of Economic Studies, the American Economic Review, and the Journal of Political Economy), while the second and third tiers comprise a dozen journals each, and the rest are not considered worth publishing in. Most institutions have developed an incentive scheme whereby publishing in the top tier brings important financial rewards, while publishing in lower tiers gives reduced returns. Tenure and promotion cases state how many papers have been published in which category, which is seen as a measure of quality. Publishing in Econometrica, for instance, will make an enormous difference to an academic career, especially at the beginning. The perverse effects of such a system are now apparent.

- The question of where to publish is now almost as important as what to publish. If you think you have any chance of publishing in Econometrica, say, then you should write your paper accordingly, with an eye to the issues that have been discussed in the journal, what earlier papers have said, and who your referees might be.

- The editorial boards of the five majors now play a role which was originally that of recruitment, promotion and tenure committees: they rate researchers. By deciding who does or does not get published in their journals, they decide who will or will not have an outstanding academic record, instead of simply a good one. But since they receive all the good papers, they are overburdened, the acceptance rate is very low (less than 10%), and choices can be made on very small differences.

This system is of course self-perpetuating. No new journal will break into the field, because no one will risk publishing an interesting article in a journal which is not rated. We end up concentrating the quality control of the whole field in the hands of a very small group of self-appointed people, mostly from the US and Europe. This, in my view, cripples the healthy development of economic theory, and I am very strongly against introducing such a system in mathematics.

 

Oesterle Joseph
08.12.2011
21:13
Professor, University Paris 6

Letting IMU and ICIAM rank mathematical journals looks to me as unreasonable as letting financial agencies rate country debts.

 

Do we really want all small libraries to simultaneously unsubscribe from journals losing their AAA rating for an AA+?

Johannes Huebschmann, Lille 1
10.12.2011
10:21
professor

I am against mathematics journal ranking, for the reasons given by others. In particular I very much share the views of J. Ball and I. Ekeland.

In case the international mathematics community is really forced to pursue this issue, the only reasonable journal rating I can see would be:

- journals meeting the standard;

- journals below standard, e.g. a journal operating with an open-access author-pays system (if ever the community agrees on this kind of criterion).

As has been noted already, even such a distinction could be open to legal challenge.

 

The proposed rating criteria necessarily raise serious issues. I cannot consistently interpret phrases like

"Peer-review is applied consistently and rigorously, and editorial work is carried out by leading mathematicians." (Rating tiers, tier 1)

"Papers are generally of high quality." (Rating tiers, tier 2)

"Solid journal that generally publishes reputable work and follows accepted practice of peer review ..." (Rating tiers, tier 3)

How could a panel possibly get hold of the requisite data and, even if its members manage to, how could others confirm the correctness of the procedure? How could we decide whether or not a journal has a "carefully run and reliable refereeing system"?

For a while I have been on the editorial boards of various journals. I do not know how my fellow editors handle the refereeing process, so how could someone outside the system evaluate that refereeing system?

Worse yet, even if we came up with a system for transmitting the requisite data to the panel: as with any kind of evaluation, a certain amount of "psychology" (or cultural prejudice, etc.), as opposed to professional accuracy and scholarship, will entail evaluation bias, and we might not even be able to detect this bias - perhaps it will be seen only with hindsight. Thought is mediated through language, and we are perhaps corrupted by the present-day "excellence" verbiage.

 

Last but not least, it has become common to deplore the poor quality of referees' reports. So how could we possibly take a refereeing system that does not function correctly as an evaluation criterion?

 


I cannot resist adding some more thoughts about evaluation:

 

The DOW was created as a means to help investors make decisions. But if it indicates anything, it hints at the psychology of investors rather than at the values of the shares themselves. Analysts add to confusion rather than clarification.

 

Our evaluation systems established so far rely, among other things, on referees' reports and letters of recommendation. But among the many letters I have seen, few are truly appropriate and helpful. The best letter I have ever seen was written by a true master of our field; it was written in such a way that the referee was hardly visible. However, many letters I have seen say more about the referee than about the person being recommended.

 

Evaluation is often based on secondary criteria:

 

For example, someone who has had Ph.D. students is considered to deserve promotion more than someone who hasn't. Whether or not any of these theses contributed to our field, and whether or not the person managed to stay in academia, is a secondary issue.

 

A prolific colleague is considered to deserve promotion more than a conscientious one. On the promotion committees (at least those I have been involved in), the content and quality of the papers of the person under discussion play a secondary role, if any.

 

Evaluation procedures now function essentially online and/or by means of modern technology anyway. This creates a high risk that the content issue slips through the system. People no longer write with pencil or ink on paper; this change manifestly alters our way of thinking.

 

In a sense, the mathematics literature is perhaps closer to philosophy than to natural science. Mathematicians worldwide should try to convince rating agencies and politicians that the metrics they apply (or attempt to apply) to natural science and medicine risk destroying our field altogether. The mathematics literature is unique in that it transmits proofs. We still accept many of the proofs conceived by the Greeks. But a non-mathematician, in particular a politician or rating agent, will not have the slightest understanding of the significance of the notion of proof.

 

In the long run, a good solution would perhaps be a single worldwide mathematics journal with a homogeneous evaluation process for individual papers. In any case, a more radical discussion about our journals and related issues is called for.

 

 

Dorin Cheptea
13.12.2011
17:37

In my opinion, the central issue is not how highly ranked the journal in which an article is published is, but how certain one can be that the results are correct. How many people have read the article before it was published? Did they spend at least 30 minutes per page? What if the reviewer is not an expert in all aspects; can he say "I am not qualified to check section 3.4; would someone else do this?" In my opinion, the top journals should publish articles only after the author has been invited to deliver a talk and answer any questions for at least an entire day. And the next tier should publish only after the editors are reasonably certain about the correctness of the results, on the reputation of the chief editor or the editor of the issue. Journals which do not bother to check every line of every article should automatically fall into the dust bin.

 

Therefore it seems logical to support something of the kind proposed by Thierry Bouche above.

 

About the "priority" of research when competing authors or teams exist: one has arXiv.org for that. The same answer goes to those lobbying for the content to be freely available.

Felipe G. Nievinksi
14.12.2011
00:09
PhD Student

IMU should sponsor a "grand challenge" competition, in which different research groups would submit their best solution for the journal ranking problem. It'd be based on a training dataset and an undisclosed test dataset. See, e.g., the Netflix Prize. A cash prize would bring more attention to the problem, but it's not a must. Also, by sticking to predefined deadlines, it'd ensure that real progress is made -- not just talking. It's time to get our hands dirty!

Nilima Nigam
15.12.2011
00:02
Associate Professor

I agree with the philosophy that rank-ordering journals in our discipline is not a sound idea.

 

However, Arieh Iserles raises an important issue in his comment: such rank orderings already exist, whether or not we acquiesce to them, and policy is being informed and driven by such lists.

 

Specifically, Canadian funding for science disciplines is determined, in part, by the recommendations of the Canadian Council of Academies. The CCA issues a 'State of Science and Technology' report every few years. One of the more important metrics it reports is the 'Average Relative Impact Factor' of the researchers in a given discipline in a given country.

 

This metric, critically, is computed on the basis of what are determined to be 'high impact journals' in a field. Given that funding is allocated between disciplines based on the ARIF factor, the determination of what IS a high-impact journal becomes important.

 

At present, in Canada at least, Mathematics and Statistics are lumped into one category for this exercise. There is more granularity in other disciplines.

 

How are the 'high impact journals' determined? Since neither the IMU nor ICIAM has such a listing, I believe bibliometric information is compiled as it is in other disciplines: by referring to databases (specifically Thomson-Reuters). It is an interesting exercise to compile a list of the 20 'highest impact' journals in Mathematics and Statistics using such a database. I tried: one has the option of picking a ranking based on several citation indicators, and the ordering of the journals obviously changes. Additionally, historically important journals do not appear on this list because their articles are not as heavily cited as in other sub-disciplines. Core areas of mathematics are not represented on such lists. How could they be, given the crude nature of the metric?

 

Clearly, I don't endorse the use of such metrics. But the ARIF is being used, so perhaps we need to have input into how it is computed.

 

I have the impression that the recent reductions in funding to Mathematics in Canada are driven to a great extent by the low ARIF of our discipline. Another CCA exercise is underway, and I'm not sanguine about the outcome with regard to Mathematics. To this extent, an IMU/ICIAM listing of premier journals (they don't even have to be ordered) would help.

Martin Kulldorff
09.01.2012
22:18
Professor

Ball, Ekeland and others have eloquently stated why an official quality ranking of mathematics journals is not a good idea. An additional concern is that it would enable highly ranked commercial journals to continue to increase their subscription prices, just by owning the title of a highly ranked journal. Irrespective of subscription price, young mathematicians would have to publish their papers in the official list of top-rated journals to get promoted, and hence libraries would have to continue to subscribe to those journals.

 

Concurring with Pritchard, though, I do think that IMU/ICIAM could do a great service by compiling other types of journal rankings, serving the stakeholders mentioned in section 3 of the report. Here are some examples.

 

In an environment of skyrocketing journal prices, libraries have to make tough decisions about cancelling journal subscriptions. It would be helpful for them to have a list ranking journals by subscription cost in relation to their size and utilization, using for example the metric of dollars per paper. (Such rankings have e.g. been published in the Institute of Mathematical Statistics Bulletin, 21:4:399-407, 1992.) Since papers in some journals are used more than others, a complementary metric is to rank by dollars per paper view, which is now easily accessible for electronic versions of journals. It is most important to be able to access papers that one needs to cite, so a third possible metric is dollars per citation. For small universities with a modest number of mathematicians, there will be a lot of random variability over the years in the utilization of different journals, and hence it is useful for them to have a centrally compiled list, rather than having to rely only on their own user statistics. Of course, these ranks do not reflect the quality of journals, but for a library the key criterion is to maximize the utility of its journal collection within its budgetary constraints.
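
To illustrate (with invented numbers only), such cost-effectiveness rankings are straightforward to compute once the underlying data are collected; the hard part is gathering reliable price and usage figures, not the arithmetic:

    # Invented numbers, purely to illustrate the kind of ranking a library could use:
    # dollars per paper, dollars per view (download), dollars per citation.
    journals = [
        # (name, annual subscription $, papers/year, downloads/year, citations/year)
        ("Journal A", 4500, 150, 12000, 300),
        ("Journal B",  900,  60,  2500,  90),
        ("Journal C", 7200, 400,  9000, 150),
    ]
    for name, price, papers, views, cites in sorted(journals, key=lambda j: j[1] / j[2]):
        # sorted by dollars per paper, cheapest first
        print(f"{name}: ${price / papers:.0f}/paper, ${price / views:.2f}/view, ${price / cites:.0f}/citation")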

 

For authors deciding where to submit a paper, there are several important considerations. While most of them cannot be ranked, IMU/ICIAM would do authors a great favor if they compiled rankings of journals by average time from submission to decision, average time from acceptance to publication, and the proportion of papers accepted. As an author, it is desirable that the published paper can be read by as many other mathematicians as possible, so another useful ranking is by the total number of subscribers or the number of libraries subscribing to the journal.

 

For journal publishers, editors, editorial boards and reviewers, the above metrics will also be useful. For example, a low ranking on time from acceptance to publication may lead the publisher to decide on more pages per issue, more journal issues per year or a higher rejection rate; or reviewers may wish to channel their time away from journals with high subscription costs per paper published.

 

Journal rankings are sometimes used instead of paper quality in promotion and research funding decisions. This is silly and very inaccurate: in statistical terminology, the within-journal variation is much greater than the between-journal variation. To use journal rankings for such decisions would be like judging the age of people by where they live rather than how they look. If the only thing I knew about two persons was that one lived in Scottsdale, Arizona, and the other in College Station, Texas, I would guess that the former is older; but to avoid the risk of classifying a kindergarten student as older than a university professor, I need to gather person-specific information, such as a recent photograph.
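
A toy calculation (entirely invented numbers) makes the variance point concrete: even if one journal's average paper is genuinely better, the spread of quality within each journal can dwarf the difference between the journal averages.

    # Invented numbers: compare between-journal and within-journal variance of paper "quality".
    import random, statistics

    random.seed(0)
    journal_means = {"top journal": 6.0, "middling journal": 5.0}   # invented average quality
    within_sd = 2.0                                                  # invented spread within a journal

    papers = {j: [random.gauss(m, within_sd) for _ in range(200)] for j, m in journal_means.items()}

    between_var = statistics.pvariance(journal_means.values())
    within_var = statistics.mean(statistics.pvariance(q) for q in papers.values())
    print(f"between-journal variance ~ {between_var:.2f}, within-journal variance ~ {within_var:.2f}")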

 

In summary, for the stakeholders mentioned in section 3, operational and financial metrics are more important ranking criteria than quality. For promotion and funding decisions, journal quality is a poor and dangerous substitute for paper quality, no matter how accurately journal quality is measured.

 

Prof Milan Merkle
09.02.2012
16:30
There are Elsevier and Thomson-Reuters out there: where is Mathematics?

I don't quite remember when it started, because it did not start on a particular date or in a particular year. It advanced gradually, and all of us became aware of it only when it started to be a measure of our personal quality and ability and, finally, our salary. Bureaucrats all over the world liked it: one single number that can classify scientists, their products and the journals where they publish, in a unique and easy way. As a consequence, the lives and scientific production of many mathematicians, especially those in small, emerging and otherwise undeveloped scientific communities, changed irreversibly.


Before the time when our academic positions started to depend on the number of articles we had published in certain "highly rated" journals, we all knew which journals were good, although they had not been "officially" rated. How did we know? You see a famous person's article there, you find interesting stuff, you find hard stuff, and you see that those you consider to be good mathematicians publish there. That's it!


The problem starts when one wishes to define the terms "good journal", "good mathematician", "leading mathematician" in a rigorous way, especially if one wishes to have a total order on the set of journals and on the set of mathematicians. I am pretty convinced that any strict definition of these terms would fail in finite time, much as happened with the definition via the now infamous Thomson-Reuters Impact Factor (IF). This is because if we define any quantitative measure of quality, then market forces will work in the direction of raising this measure at the cost of losing quality. We have been witnessing this process in the recent past and present. It was the Thomson company (now called Thomson-Reuters) that promoted the IF as a measure of quality. Scientific bureaucracies in many countries liked the idea and adopted the IF as a convenient unique measure of scientific achievement. Many scientists in those countries began to strive for IF, as their professional life depended on it: the slogan "publish or perish" turned into "IF or perish". To answer the emerging need for IFs, commercial publishers like Elsevier and many others opened new possibilities to scientists from the "third world" and everyone else: journals with high IF, offering their services to IF hunters. Suddenly, we ended up with hundreds of new "highly cited" mathematicians, or mathematicians with incredible production compared to whom Cauchy, Euler, Gauss and all the others we remember from college would be graded as very inferior scientists. It does not make sense!


While large and developed mathematical communities have had various degrees of success in resisting the IFomania, the consequences in emerging and undeveloped math communities are devastating. While I cordially greet the initiative of the IMU to classify math journals in another way, I have reservations about the impact such a measure would have on changing the existing criteria for evaluating mathematicians' production. The idea of valuation in terms of IF is not only deeply rooted in bureaucratic systems, but also supported by interested parties (commercial journals, committees for promotions and many -- perhaps the majority of? -- scientists), so that it would take a small revolution to change it. It also requires joint action by scientists as a whole, and mathematicians make up just a small percentage of the scientific community. What if we mathematicians agree to follow the future outcome of the IMU's attempt to classify math journals? Would the bureaucracy give up, would they change the rules only for us? I doubt it.


Just as an illustration of how far IFing has already taken root, let me give an example. A friend, professor B, could have published his work in its natural place, journal X, which unfortunately had an IF of 0.2. There was another option, journal Y, with an IF of 1.1, but completely outside the scope of the people who could be interested in B's work. Each article in journal Y is rewarded with 8 points, whereas journal X scores only 3 points. More points means more money in future projects. Guess what B's choice was?


Other examples of common misbehavior in the epoch of IFomania include publishing a series of incremental improvements or publishing duplicates of one's work with minor changes. A more sophisticated fraud (still within the rules of the game) is writing papers in a group (clique) where everyone cites the other members of the group, sometimes developed to the extent that a journal's editors cannot find anyone outside the clique to review the paper.


All these are consequences of IFomania. Any attempt to replace the IF with some other number or algorithm would soon yield similarly grave consequences, because there will always be some participants in the game who will find a common interest in misbehaving of some sort in order to increase their ratings by collecting points instead of doing good mathematics.

I personally would like to see again people writing math papers only when they have something to communicate to the world. Sometimes, if you ask for more you get less, and this is exactly what happened with the practice of using the IF as a means to increase quality -- what it really did to us was just the opposite. What we have now are very good journals in the middle of the IF scale, and some very bad ones at its top. I hope that the result of the IMU's committee for ranking journals will give us a picture closer to what we expect.


A.F. Gualtierotti
17.02.2012
00:00

Hello!
I am against rankings in general, and of journals in particular, because the only sure way to avoid their inappropriate use is not to have them. Here are a few remarks on the subject.

A journal should have as its only purpose the publication of papers on their intrinsic merit (whatever that may mean), and of all the papers of merit that are submitted (in contradistinction to those that would maintain or improve its acquired rank).
Value is a matter of fashion. When I was an undergraduate, I heard an excellent and renowned mathematician claim that there were no longer any worthy problems in analysis (many recent Fields medals are in that area).

I have been doing reviews for Mathematical Reviews since the 1970s. What I see is more and more papers that do not seem to be edited (the one I am reading at present, from one highly ranked journal, contains a reference to a theorem in the paper that does not exist). I believe the reason for this situation is that journal editors use implicit rankings to value papers and thus avoid actual evaluation of their merit (I am aware that this means rankings already exist: my opinion is, the fewer, the better).

A.F. Gualtierotti, Emeritus, University of Lausanne

Mark C. Wilson
13.03.2012
05:27
Senior Lecturer, University of Auckland

To Martin Kulldorff: try journalprices.com.
