Refactoring Impact


by Courtney Quirin

Bert Timmermans

When trying to wrap his head around murky measures of research impact, like the h-index, psychology lecturer Bert Timmermans of the University of Aberdeen can’t help but say: “I don’t think these massive amounts of metrics and measures of impact are really doing science much good.”
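For context, the h-index Bert is referring to is commonly defined as the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch of the computation, using made-up citation counts purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still clears the bar at its rank
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four of them have at
# least 4 citations each, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

The definition's simplicity is part of Bert's complaint: a single integer like this says nothing about whether the underlying papers were replications, null results, or confirmations.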

Consequently, when ResearchGate released its nebulous impact score in 2012, Bert immediately closed his ResearchGate account and made his profile on an alternative site his go-to account for keeping an online repository, tracking document views, and linking to other academics and research topics.

“What is really nice about it is that it specifically does not have an impact score,” says Bert of why he made the switch.

To Bert, the site’s freedom from fuzzy measures of research quality is a key step toward changing what he believes is impeding science: an academic climate where only hypothesis-confirming results lead to publications, leaving all other intelligent, innovative, or informative research efforts in the dark.

The reason why these metrics and measures can impede science, says Bert, is this:

“They go together with the whole problem about the increasing difficulty to publish null results. If you make it very difficult for people to get null results or replications out, then it’s very simple: you cannot measure the quality of research by output.”

The current “impact” system quantifies research quality through quantity (i.e., research output). However, current measures of quantity are biased and unrepresentative, since “much of that quality research may be null findings or replications,” which are deemed unpublishable and are therefore excluded when tallying quantity, says Bert.

To explain his point further, Bert introduced me to the concept of the “LPU,” the Least Publishable Unit, a witty (and somewhat cynical) term coined by one of his former bosses to describe the publishability of a research effort.

“Let’s just assume that simple replication papers, or papers that fail to replicate an effect or find something, could also easily see the light of day. Then it would also make sense to have some kind of measure of quality through quantity. But if you actually combine the hypothesis-confirmation publication bias with impact metrics, then it’s really detrimental precisely because nobody is actually going to engage in research that is not sure to deliver on the publication side.”

It is from this aversion to research that will not definitively produce a publication that the concept of the LPU was born. A high LPU, no matter how good the idea, dissuades many non-tenured and early-career academics.

“The scientific output has grown, but whether this is actually also the scientific quality remains to be seen,” adds Bert.

To elaborate on the questionable quality of growing scientific output, Bert tells the tale of his PhD days at the Free University of Brussels.

“When I started to make my first steps into the research world, about 10 or 15 years ago as a beginning PhD student, people would do a PhD and then after a PhD they started publishing about what they had done. They very often had papers which contained five experiments or so, and they had something to say,” says Bert.

But things started to get hairy once PhD students began to feel pressure to publish before finishing their doctorates. Today, without several publications prior to receiving those coveted three letters after your name, Bert says, “You simply either won’t get your PhD or you certainly won’t get your post-doc position or a grant.”

So rather than publish a paper comprising multiple experiments, one that conveys a firm grasp of a known effect and of where the data fit into the big picture, students are pressured to publish early datasets fast, and consequently have only a fuzzy sense of what their results actually mean. The situation is cyclical, driven by the LPU. If PhD students and early-career academics were to hold off on publishing until they had accrued multiple related experiments, “until they really identified the effect,” says Bert, then they would run the risk of finding null results and, consequently, getting no publication. Years of research down the tube, from a reputation and impact-factor standpoint.

“Nobody wants to run that risk because everything hinges on publication, because publication happens to be the way that research quality has now been quantified. And that’s why I think that this obsession with ‘impact’ of your research is detrimental to science,” says Bert.

Sounds pretty dreary, huh? So what can we do to cure the case of the LPU?

Bert believes some of the solutions can be found through sites like the one he uses and Psych File Drawer, a psychology-specific online database that houses papers that attempted but failed to replicate an effect. These sites allow non-hypothesis-confirming papers to see the light of day, creating opportunities for future research efforts to be more effective and productive. Rather than reinvent the wheel over and over again, people accessing these sites learn from each other, seeing what worked and what didn’t. Doing so will hopefully begin to reverse the hypothesis-confirmation bias that dominates publication by showing the value of null results, subsequently allowing impact (or quality) to be measured on an even playing field.

Bert’s point is particularly relevant in light of recent academic scandals, like the Stapel Case in which a Dutch social scientist was found to be producing fraudulent results. His incentive? Guaranteed publication in prestigious journals like Science and thus the potential to increase his impact and reputation.

“Some of the effects that Stapel found were notoriously difficult to replicate, but nobody really knew to what degree precisely because these non-replications didn’t see the light of day,” stresses Bert.

While some creative recent proposals to combat this hypothesis-confirming bias and the consequent measures of “impact” have been circulating, Bert thinks the site he uses is a good place to start.

“It gives you a way for all people to just share their papers and see how many people think their papers are worth taking a look at,” says Bert.

Academic Bio:
Bert Timmermans is a lecturer within the University of Aberdeen’s Social Cognition section, where he investigates social cognition and consciousness. His work on consciousness, which is fairly fundamental research, involves a range of experiments, including computer simulations testing implicit learning of sequences and patterns, and eye-tracking labs observing how people interact with virtual avatars via eye gaze. Bert’s social cognition work looks at what happens when people interact with each other rather than simply observe something happening; these experiments are aimed at aiding clinical psychologists in diagnosis.

Bert’s work can be viewed here.
