On metrics

Michael Nielsen has a good essay on the problems of using scientometrics to guide funding in science. While he accepts that any grants process inevitably involves some valuation of the science, often in unstated terms, he sees three problems with the way metrics are increasingly used (what he calls centralized metrics). First, metrics create scientific monocultures, or “suppress cognitive diversity”. Second, they create perverse incentives, and third, they misallocate resources. I’m not entirely sure these three problems aren’t just different aspects of the same underlying problem, but the essay is interesting throughout. I quote:

The point of the story of the cosmological constant is not that Einstein was a fool. Rather, the point is that it’s very, very difficult for even the best scientists to accurately assess the value of scientific discoveries. Science is filled with examples of major discoveries that were initially underappreciated. Alexander Fleming abandoned his work on penicillin. Max Born won the Nobel Prize in physics for a footnote he added in proof to a paper – a footnote that explains how the quantum mechanical wavefunction is connected to probabilities. That’s perhaps the most important idea anyone had in twentieth century physics. Assessing science is hard.

This might actually be the take-home message of the piece.


In the media

I’m working on a paper on the media debate around an “electricity crisis” in Mid-Norway, which you’ll hear more about soon, but for now take a look at an interview with my advisor on the Norwegian research website forskning.no. It’s in Norwegian, I’m afraid, but the upshot is that the Norwegian electricity system is not capable of handling large fluctuations in electricity demand.

These days, the debate has moved on to the situation in Western Norway, which I believe only strengthens our argument. After all, nothing has yet changed in Mid-Norway…