Sunday 22 August 2010

Reflections on teaching, research, critical review and evaluation [update on 25th August]

These reflections were made in the context of the Brazilian Automatic Control Society and its technical journal, Control and Automation. They are, however, in my opinion, relevant to control theory (or any other scientific/technical area) in (fast-)developing countries (the fashionable acronym is BRIC). The arguments I make are based on the following hypotheses/complaints:
  1. The vast majority of us, researchers in systems and control theory, do not send our best papers to our national flagship journal.
  2. Even if we did, we do not have a critical mass large or committed enough to produce quality reviews of the submitted manuscripts within the country, given that, as a rule, reviewers abroad are reluctant to spend time and effort reviewing for an unknown foreign journal. (In the case of Brazil, there are language problems too, since submissions are also accepted in Portuguese and Spanish.)
  3. Given the problems mentioned in the previous items, national flagship journals are often forced to widen their scope in order to have enough articles per issue, and they end up losing a sharp focus on any particular area. In any case, articles published in national journals attract very little readership, few citations, and so on.
  4. The best articles (the ones that get the awards and the most citations/readership) tend to be tutorials or surveys.
  5. The evaluation systems at most government institutions place much more value on research than on teaching. In fact, most of these institutions make no serious effort to evaluate teaching, and there are usually no coveted Best Teacher Awards, even though many have Best PhD Thesis awards. Promotions in university careers are based almost exclusively on research productivity, which is usually measured in terms of quantity and, sometimes, using somewhat dubious citation metrics such as the h-index (which is undoubtedly good at detecting outliers, but may not be as good at classifying the majority in the middle; a small illustration follows this list). Other criteria, such as the relevance of the research in a broader (dare I say social?) context, are not even discussed.
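To make that last caveat concrete, here is a minimal sketch (in Python, with made-up citation records) of how the h-index is computed; it shows two quite different publication profiles that collapse to the same value:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two hypothetical, quite different citation records with the same index.
steady = [10, 9, 8, 7, 6, 5, 4, 3]  # evenly productive career
spiky  = [8, 8, 8, 8, 8, 3, 2, 1]   # a cluster of hits, then little
print(h_index(steady), h_index(spiky))  # 5 5 -- same h, different profiles
```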
Given these five hypotheses (for lack of a better word), and given that teaching, in the sense of educating our successors, is crucially important, how can we start to change things?

Before I give my suggestion, there are a few more observations I'd like to make about scientific/technical journals. In my opinion, the current format of journals, as well as the current review process, serves to destroy well-written articles. Specifically, even with electronic versions of all major journals available, most still impose severe (paper) page limitations on authors. This means that didactic material, considered too elementary, is usually removed from articles, along with reports on instructive mistakes and unsuccessful approaches, even though these are acknowledged to be almost as important as the correct approach finally reported. This process inevitably makes the final published product more difficult to read and certainly less didactic.

As far as the review process is concerned, we are usually up against the perversity of anonymous reviewers. Let me justify this. Nowadays, with easy access to almost all the scientific output in the world, journals have become recommended reading lists. Who recommends the articles on the list? The reviewers, of course. Would you read a novel recommended by an anonymous reviewer? Or, for that matter, would you watch a film recommended by someone you have never heard of? I don't think so. I believe, therefore, that reviewers should identify themselves rather than hide behind anonymity. This would have other benefits: it would lead to a drastic reduction in mean or flippant comments, in comments that are not technically justifiable, and in the other aberrations that every one of us has come across in our careers. As for the objection that anonymity allows you to be critical of colleagues without losing their friendship, it seems to me that a colleague who is upset by well-founded technical criticism is not worthy of your friendship.

So what am I trying to get at? I would like to suggest the creation of a Control and System Theory Webzine, which would exist only electronically and would have a strongly didactic focus. It would have, for example, a Wiki format, containing the article, commentary by designated and identified reviewers (in the style of blog posts), as well as commentaries and contributions by readers. By article and contribution I mean more than just the conventional written article or contribution: the community of researchers, students and professors would also be encouraged to submit videos (of an experiment or an inspiring classroom lecture), animations, code in Java, Scilab and so on. Reviews would be critical but constructive, and all articles would be accepted and posted (subject, of course, to some minimal filtering to remove spam and offensive material) while awaiting reviews. A hit counter would be implemented for each kind of commentary (text commentary, video commentary, illustrative animation or code) and would allow visitors to order articles by their choice of metric, in addition to the usual latent-semantic-indexing keyword search order. The homepage would, by default, display articles ordered by number of hits, which would mean, in particular, that reviewed articles would appear before unreviewed contributions, and more popular (highly accessed) articles would appear before less popular ones.
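To make the ordering mechanism concrete, here is a minimal sketch in Python (purely illustrative; the data fields and the reviewed-first tie-breaking rule are my assumptions, not a specification) of how the homepage might sort articles by a visitor-chosen hit metric:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    reviewed: bool = False  # has an identified review been posted yet?
    hits: dict = field(default_factory=dict)  # per-metric hit counters

    def count(self, metric: str) -> int:
        # metric could be "text", "video", "animation" or "code"
        return self.hits.get(metric, 0)

def homepage_order(articles, metric="text"):
    """Default homepage order: reviewed articles before unreviewed ones,
    then descending by the visitor's chosen hit metric."""
    return sorted(articles, key=lambda a: (a.reviewed, a.count(metric)), reverse=True)

# Hypothetical contributions, for illustration only.
posts = [
    Article("LQR in ten lines of Scilab", hits={"text": 300}),
    Article("Root locus by hand", reviewed=True, hits={"text": 120, "video": 40}),
    Article("Pendulum swing-up demo", reviewed=True, hits={"text": 95, "video": 210}),
]
for a in homepage_order(posts, metric="video"):
    print(a.title, a.count("video"))
```

Switching the metric argument is all a visitor would need to reorder the page by video hits, code hits, and so on.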

At the end of an n-month period, a distributed and distinguished committee of editors would evaluate the 10 (100?) most accessed articles according to the various metrics and rank them using some multi-winner voting system. Such a system, properly implemented, would have several advantages, of which I'd like to mention two specifically. First, a ranking system based both on the number of technically qualified hits and on a distributed committee of editors makes good use of the tremendous power of the distributed intelligence of the community, taking a leaf out of the book of the enormously successful social networks and Apple's system of contributed apps for the iPod and so on. Secondly, we would have a way of saving the wisdom of good teachers and good teaching practices for posterity. In our current system, although good teachers exist and quite possibly outnumber the good researchers, they receive very little recognition, even though students acknowledge their importance and, very often, become researchers because of the fascination transmitted by good teachers.
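The multi-winner voting system is deliberately left unspecified above. As one possible instantiation, here is a hedged sketch using Borda counting (my choice, for illustration only), in which each editor submits a ranking of the shortlisted articles and the committee's final order is given by the summed Borda scores:

```python
def borda_rank(ballots):
    """Aggregate editors' rankings by Borda count.

    ballots: one ranking per editor, each a list of article ids
             ordered from most to least preferred.
    Returns article ids sorted by total Borda score, descending.
    """
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for position, article in enumerate(ballot):
            # the top of a ballot earns n-1 points, the bottom earns 0
            scores[article] = scores.get(article, 0) + (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical editors rank the same shortlist of four articles.
ballots = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
]
print(borda_rank(ballots))  # -> ['A', 'B', 'C', 'D']
```

Borda counting is only one of many multi-winner rules; the point is that any such rule turns the editors' individual judgments into a single committee ranking in a transparent way.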

Postscript: This is an edited and contextualized version of a polemic I originally wrote in Portuguese about a year and a half ago and posted on this blog early this year [2010]. In the intervening period, two important developments came to my attention. One is Cell Press's Article of the Future, which advertises itself as follows: "An online format that breaks free from the restraints of paper and allows each reader to create a personalized path through the article's content based on his or her own interests and needs." See http://beta.cell.com/index.php/2009/07/article-of-the-future/
Another development is the creation of Rejecta Mathematica, a real open-access online journal that publishes only papers rejected by peer-reviewed journals in the mathematical sciences. See http://math.rejecta.org/

Further developments, hot off the Web

Scholars Test Web Alternative to Peer Review [this from the super-traditional Shakespeare Quarterly] -- see
http://www.nytimes.com/2010/08/24/arts/24peer.html

Here is a small excerpt: "Clubby exclusiveness, sloppy editing and fraud have all marred peer review on occasion. Anonymity can help prevent personal bias, but it can also make reviewers less accountable; exclusiveness can help ensure quality control but can also narrow the range of feedback and participants. Open review more closely resembles Wikipedia behind the scenes, where anyone with an interest can post a comment. This open-door policy has made Wikipedia, on balance, a crucial reference resource." [I couldn't have said it better!]

Publish or post

http://www.the-scientist.com/blog/display/57613/

A small excerpt [quoted to support my argument and to exultantly say: I told you so! This proposal is even more radical than mine, but I am all for it.]

As the news release for LiquidPublication simply states: "Don't print it; post it." To disseminate the information, the program has a software platform that lets other scientists search for what's been posted, leave comments, link related works, and gather papers and information into their own personalized online journals -- all for free.

"I think it's exactly what is needed -- a paradigm shift," said peer-review critic David Kaplan of Case Western Reserve University in Ohio. "This is a different system that utilizes the unique characteristics of the web [to provide] a different way of looking at manuscripts [and] a different way of evaluating them."

The downfalls of the current scientific publishing scheme are no secret, and while many journals are aiming to better it (see The Scientist's feature in this month's issue), their efforts are relatively minor alterations to what many consider a fundamentally flawed system. Now, information engineer Fabio Casati of the University of Trento in Italy and his collaborators are suggesting science publishing try something entirely new, taking full advantage of the rapidly evolving Web 2.0 technology.

They suggest making research -- including formal manuscripts, datasets, presentation slides, and other presentations -- available through the web without any sort of traditional peer-review process. That research would then be searchable and citable by the rest of the scientific community at no cost.

Last update (26th August): Even the Bombay Police are on the web to encourage feedback!



2 comments:

  1. Knowing your dedication, Amit, to teaching & research across several countries, I am delighted to support your ideas. Many thanks

  2. A very good blog. I hadn't thought about a named-reviewer system. It appears to be a very good idea, although, human nature being what it is, criticism is difficult to tolerate and can easily lead to retaliation and revenge.

    Art critics are paid and are not contributors themselves. Can we have such critics in science and engineering?

    There are two aspects to the existing publishing process. Firstly, many good papers never get published; secondly, the ones that do get published are often so difficult to read, or fall so short of their claimed worth, that thousands of researchers waste millions of hours reading them and achieving nothing. Even if only the second of these two evils gets fixed, it will be a great contribution.
