
Nobelist backs internal review for papers, ‘trust’ scores for scientists

The ‘best’ scientists lack time for peer review, and academics should be rated for ‘worthy’ papers, argues Dan Shechtman

July 29, 2019
Dan Shechtman
Source: Lindau Nobel Laureate Meetings

Academics should have their university colleagues review their papers before submitting them to journals, a Nobel laureate has argued, because this is a surer way to avoid damaging scientific “blunders”.

In 2011, Dan Shechtman won the Nobel Prize in Chemistry for discovering quasicrystals, structures that do not repeat themselves, overturning a long-held assumption about crystals.

It took him two years to get the results published in a peer-reviewed journal, only to be met by scepticism from some when his paper appeared in 1984. He was branded a “quasi-scientist” by Nobel laureate Linus Pauling, but ultimately hailed as having made a major breakthrough.

Now distinguished professor emeritus at Technion Israel Institute of Technology and distinguished professor at Iowa State University, he said he was “100 per cent sure” that he was right when he sent his quasicrystal discovery to a journal because it had been through a rare system of internal peer review at the US-based National Bureau of Standards (NBS), the institute where he was based at the time.

“Our monitoring system of bad science is not working very well,” Professor Shechtman told Times Higher Education at the Lindau Nobel Laureate Meeting, an annual conference for prizewinners and young scientists held in southern Germany in July.

The problem lies with the fact that peer review tends to be done solely through journals, he said, which send out articles under consideration to other specialists in the field.

“But the best scientists do not have time for this [peer-reviewing others’ papers],” he argued, “so the ones who have time for it…are not necessarily the best experts.”

This is why papers need to go through an extra layer of internal scrutiny before they are even submitted to journals, Professor Shechtman recommended. Colleagues check his papers before they ever leave the institute, he explained, and he does the same for them.

“They will make my paper better. I don’t have to pay anything, I don’t have to put their names on my paper,” he said. This system is “very good”, but is in use “only in very few institutes around the world”, he added.

The NBS – now called the National Institute of Standards and Technology (NIST) – still requires a “thorough internal review before publication”, a spokeswoman confirmed. An author’s work is reviewed by two “technical experts” from NIST staff as well as their entire chain of command, sometimes all the way up to the lab director, she explained.

The point is to make sure that information disseminated by NIST is “presented in a clear, complete and unbiased manner”, the spokeswoman said. Reviewers verify that a paper’s conclusions are supported by the data and observations, but they also polish the writing – by reducing the use of acronyms, ensuring that tables and figures are clear, and checking grammar and spelling, she added.

Such a system “can be copied” by other research institutes, said Professor Shechtman. He did, however, acknowledge that scientists might not want to upset their colleagues by being too hard on their work. “People are not perfect,” he said.

Nor does internal review guarantee that all your colleagues will back you in the face of post-publication criticism. One early sceptic of Professor Shechtman’s quasicrystal work was his own team leader at the NBS.

In his talk to scientists at Lindau, Professor Shechtman focused on incidents of “scientific blunder” – supposedly big discoveries that caught the attention of the public but turned out to be illusory under further scrutiny.

Speaking to Times Higher Education, he suggested creating a numerical “trust credit” score for scientists, and giving academics higher ratings if they repeatedly publish “worthy” papers.

Relying on a scientist’s “reputation”, as most people do now, is “illusive because there is no number of the reputation”, he said.

Criminal consequences were needed for scientists who commit research misconduct, he added.

“There is no legal system to judge crooks in science. You can do nothing if somebody publishes results that are not only bad science, but forged science.” At the moment, he explained: “You can cheat and cheat and cheat, and even if you are caught nothing will happen to you.”

david.matthews@timeshighereducation.com

Postscript

Print headline: Nobelist: fight ‘bad science’ with internal peer review


Reader's comments (7)

Interesting idea. However, how much longer would it then take to publish a paper? Many academics nowadays already have heavy workloads. Now they would also have to review each other's papers before submission. Another point... What if there are "inside politics" within a department? I do agree we need to find a way to oust the crooks. This debate is most welcome indeed.
This suggestion, seemingly good on the surface, might in the end just replace one set of problems with another! ... And not every place will be an ‘NBS’, or that NBS. Basil Jide Fadipe.
No idea why this is news. Nearly every scholar knows to test-fly papers before submitting them. We teach this to graduate students. Journals run workshops that tell scholars to get their papers mock-reviewed. Many institutions hold brown-bag seminars internally to do this. It is just a normal part of scientific and institutional interaction.
Academic staff are heavily loaded with teaching and admin duties, so this does not seem like a thoroughly discussed/debated idea. The whole publishing system is failing; we need a new/better system of getting science to wider audiences. There must be an open debate without involving publication houses. Publication houses have heavily abused the whole system, and some manuscripts can take up to a year or more from the submission date.
“They will make my paper better. I don’t have to pay anything, I don’t have to put their names on my paper,” he said. This system is “very good”, but is in use “only in very few institutes around the world”, he added. Who are these paragons of virtue who work, seemingly, for nothing? In my experience no one takes internal review seriously; you just get back 'this looks fine' platitudes.
Some research labs publish internal technical memoranda of new results that invite internal comment BEFORE publication. This is just common sense; no one wants to be associated with a research organization that publishes fake results. Even so, congenital fakers may succeed for a time, causing great damage to once-trustworthy institutions, as in the case of Jan Hendrik Schön at Bell Labs.
'Speaking to Times Higher Education, he suggested creating a numerical “trust credit” score for scientists, and giving academics higher ratings if they repeatedly publish “worthy” papers. Relying on a scientist’s “reputation”, as most people do now, is “illusive because there is no number of the reputation”, he said.' This bothers me, even though the idea is, at first sight, a decent one. It's just another metric. And metrics of this type are blunt instruments. They need a process to be created. They don't have any real nuance, and worse: they can be gamed. In some cases, I'd wager more effort would be spent gaming that score to improve it than doing the things that were supposed to improve the score in principle. If research assessment exercises have taught us nothing else, it is that this behaviour is almost inevitable.