Constant constipation

After two days here in Trondheim, we new Ph.D. students went to Rotterdam for the quadrennial[1] combined meeting of the Society for Social Studies of Science (4S) and the European Association for the Study of Science and Technology (EASST). There were lots of interesting sessions, and it was a good chance to get to know the others. However, I remember the Presidential Plenary (with the very STS title “‘Acting’ with ‘Innovative Technologies’” – the plenary participants said there should have been quotation marks around “with” as well) as pretty boring. The only interesting contribution came from Steve Woolgar, who talked about surveillance technology and traffic congestion.

That last theme is interesting, because we know so little about why roads suddenly become clogged, and every attempt to solve the problem (building more roads, adding traffic rules and signals, mathematically modelling congestion) fails. Woolgar showed examples of how removing driving regulations actually made traffic flow better. This seems to be confirmed by recent experiments, at least according to this Scientific American article. The idea is that removing information forces drivers to communicate more, which leads to more cooperation and better traffic flow. Woolgar mentioned a small German city where introducing complete anarchy in traffic led to shorter commutes and fewer accidents.

Similarly, some ideas are emerging about how to make traffic flow better. By studying how ants move about, we can find some clues about where the problem might lie:

Dresden University of Technology collective intelligence expert Dr. Dirk Helbing and his team of research scientists set up an “ant highway” with two routes of different widths from the nest to some sugar syrup. Soon the narrower route became congested. But when an ant returning along the congested route to the nest collided with another ant just starting out, the returning ant pushed the newcomer onto the other path. But, if the returning ant came from a congestion-free route, she did not redirect the newcomer. The result was that just before the shortest route became clogged the ants were diverted to another route and traffic jams never formed.
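Out of curiosity, that redirect rule is simple enough to toy with in code. Here is a minimal Python sketch of the mechanism as I read it — one ant departing per time step, a fixed trip time, and two routes with made-up capacities. None of these numbers come from the actual experiment:

```python
from collections import deque

def simulate(steps=500, trip_time=10, capacity=(5, 12), use_rule=True):
    """Toy model of the ants' rerouting rule: a returning ant that hit
    congestion on the short route pushes the next outgoing ant onto the
    wide route. All parameters are invented for illustration."""
    in_transit = deque()  # (return_step, route, saw_congestion)
    load = [0, 0]         # ants currently on each route: 0 = short, 1 = wide
    redirect = False
    jams = 0              # departures that ran into an overloaded route
    for t in range(steps):
        # Ants finishing their trip leave their route; a returnee that
        # saw congestion on the short route flags the next newcomer.
        while in_transit and in_transit[0][0] <= t:
            _, route, congested = in_transit.popleft()
            load[route] -= 1
            if use_rule and route == 0 and congested:
                redirect = True
        # One new ant departs per step, preferring the short route.
        route = 1 if redirect else 0
        redirect = False
        load[route] += 1
        congested = load[route] > capacity[route]
        jams += congested
        in_transit.append((t + trip_time, route, congested))
    return jams
```

With the rule switched off (`use_rule=False`) every ant piles onto the short route and nearly every departure after the initial ramp-up hits congestion; with it on, the overloads stay confined to short bursts — which is roughly the effect the experiment describes.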

The problem is how to make cars or car drivers communicate the state of the road they are leaving. There might be a solution in using modern communication technology (you know, the Internet) to do the communicating. But the question remains whether this should be centrally organized and coordinated, or whether happy anarchy shall reign. Who knew traffic could be this interesting? And I don’t even have a driver’s license (yet)!

[1] Yes, I looked this word up.


Barack who?

Alright, so we’re all already sick of hearing about the Second Jesus over in the US of A, but this was so nifty I had to share it. It’s a composite picture of the inauguration of President Obama, composed from 220 individual photos.

It made me think of an article we discussed in our Handbook review today about scientific imaging. This photo was taken with cameras NASA developed for its Mars Rover robots, and it took a brand new MacBook Pro six and a half hours to compile it. No wonder: it’s 1,474 megapixels. How does that kind of processing fit into the debate over the presentation of scientific images as fact?

Update: I found this picture, and it seemed fitting:

My main man

Nifty stats

Today I’ve been working on some data Robert at my department gathered at the annual Research Days. The survey asked school children about their attitudes toward and knowledge of climate change. We’ve found some interesting little tidbits that we’ll use for a media article, and we may delve a little deeper into the material to find something to publish at a later date. There’s actually been some interest from other parties, so a web page will be set up to show some results from the data: Kultwiki [1]. Since the page isn’t up yet, have a graph:

Sorry for the Norwegian here...

The question is “Where do you get most of your information on climate change?”, and the answers (from left to right) are: “In school”, “Through the media”, “From my parents” and “From my friends”.

Since I know how much we all love graphs, don’t despair: There’s more where that came from. More to come, too.

[1] I put the address up before it’s finished, since I know that Thomas, who’s responsible for creating the pages, reads this. So now the pressure is on him…

Judge and jury

One side effect of submitting papers to conferences is having to referee other contributions. Since Åsne and I are writing about consumers in the electricity market, we were asked to referee a paper on the possibility of reducing energy consumption through low-level behavior changes. After looking at it, I’m not sure what to do.

The authors are trying to calculate the possible range of energy conservation by identifying a whole range of possible measures (changing lightbulbs, inflating the car tires, turning down the thermostat and so forth), identifying a “range of potential savings” for each by surveying the literature (22 different studies), and then calculating the “likely range of participation rates”[1]. This gives a measure of the expected savings (the answer? “23 % expected savings”). To test this measure, they then ran a Monte Carlo simulation with 1,000 iterations to see if the figure found in the previous step was more or less on target (end result: 22 % savings with a +/- 29 % interval). Conclusion: if people change behavior, we can save a fourth of our energy. This is then presented as proof that it is not only technological innovation that drives energy conservation.
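For what it’s worth, the basic exercise is easy to reproduce. Here is a rough Python sketch of that kind of Monte Carlo aggregation — the measures and all the ranges are numbers I invented, not the paper’s:

```python
import random

def monte_carlo_savings(measures, iterations=1000, seed=42):
    """For each iteration, draw every measure's savings (% of total
    consumption) and participation rate uniformly from its range, sum
    them, and report the mean and half the observed spread."""
    rng = random.Random(seed)
    totals = []
    for _ in range(iterations):
        total = 0.0
        for low_save, high_save, low_part, high_part in measures:
            saving = rng.uniform(low_save, high_save)
            participation = rng.uniform(low_part, high_part)
            total += saving * participation
        totals.append(total)
    mean = sum(totals) / len(totals)
    spread = (max(totals) - min(totals)) / 2
    return mean, spread

# (savings_low %, savings_high %, participation_low, participation_high)
# -- all invented for illustration
measures = [
    (2, 6, 0.3, 0.8),   # e.g. changing lightbulbs
    (1, 4, 0.2, 0.6),   # e.g. inflating the car tires
    (5, 12, 0.2, 0.7),  # e.g. turning down the thermostat
]
mean, spread = monte_carlo_savings(measures)
```

Of course, a sketch like this illustrates the worry as much as the method: the output is only ever as meaningful as the guessed participation ranges fed into it.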

I find this whole paper a bit baffling. To take first things first, it’s very brief, laying out its test, method, rationale and discussion only in sketch form. Still, they do explain what they have done and what their framework is, so I guess the rest can be filled in in time for the conference. The strangest thing is the idea itself: calculating not the upper limit of conservation, but the exact percentage by which consumers will change their energy behavior. I just don’t see how this is expected to give probable results. Even if all the assumptions are nicely laid out in the final paper, I still don’t understand how aggregating behavior in this way can produce meaningful results. In addition, they present it as a major finding that there is something to be gained from behavior change. To their credit, they do mention that further research is needed into how people are “more than economically rational actors” (hey, that’s what we’re working on!). Is this really dynamite?

So what do we do? I mean, it seems harsh to reject it (the main author, who goes by his nickname in the references, also works for the US Environmental Protection Agency), but I don’t see the value in this. Plus, referencing some percentages, asking your colleagues and slapping together a Monte Carlo test seems kind of lazy. But I guess more work can be done to improve it a lot…

[1] How? “In review with [energy policy NGO] professional staff”. They asked their colleagues for their, no doubt professionally founded, best guesses. But still…


I’ve already complained twice about my difficulties finding good graphing software. Turns out the solution was right under my nose. The university has a license for SigmaPlot, which does the job quite nicely. It has more options for graphs than Excel, and exports them in much better resolution. Just look at how the same Excel graph I showed you earlier turned out with this program:

Better, no?

The difference might not seem like much, but this took me a lot less time, and the graph isn’t grainy at all. So I’m happy. Case closed.

On a lighter note

This made me laugh:

A softer future

This piece caught my eye today. It details a book by Ted Nelson (read the good chapter summaries here, if you can be bothered), who has been trying to influence software design since the 1960s. I can only sympathize, and find myself agreeing with a lot of his complaints about the state of computer applications today. The main point is that the underlying structure of software hasn’t changed since the introduction of UNIX in 1970, and that its way of dealing with data is completely unable to meet people’s true needs.

A lot of this makes sense to me. While the development in hardware has been insane, with performance doubling roughly every 18 months (Moore’s law), software has only made very small improvements on the original architecture [1]. Of course, aesthetically we are a lot better off than in the 70s, but the same basic limitations are still there. Most users are still unable to do more than a very few basic operations on their computers. It’s probably not nice to use coworkers as examples for these things, but just as I was reading the chapter summaries, a colleague popped in to ask why all the extended symbols on his keyboard had switched places. The fix is actually quite simple, but how can you know that when it suddenly happens because you accidentally hit an easy-to-hit key combination?

This applies to all levels of computer applications today, from operating systems (MacOS is a pretty cage with almost no possibility of modification, Windows is a bloated mess with serious flaws but more room for tinkering, and Linux is either an ugly copy of the other two or an ivory tower you need programming skills to master), to basic office programs (Excel and Word are still almost exactly the same as they were 20 years ago), to web browsers (basically a pretty copy of a paper page with some added features). There is no way to easily link information bidirectionally, no way to make connections outside the limits of the current web page or word processor. Yet that is how our minds work. The day I can easily organize information on the web (with links going both forward and backward, and extended, personalized editing possibilities) or in a Word document (today, these basically present data as if it were already in its finished, edited and published state) in a way that resembles how I would organize it in my mind is the day software development has started to catch up with Moore’s law. Here’s hoping it happens soon.
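To make the complaint concrete: the web only stores links in one direction, whereas a two-way system would have to register every link at both ends, so backlinks come for free. A toy Python sketch of that idea — my own crude illustration, nothing like Nelson’s actual design:

```python
from collections import defaultdict

class LinkStore:
    """Toy two-way link registry: adding a link records it in both
    directions, so every document knows what points at it."""
    def __init__(self):
        self.forward = defaultdict(set)  # source -> targets
        self.back = defaultdict(set)     # target -> sources

    def link(self, src, dst):
        # One call updates both directions at once.
        self.forward[src].add(dst)
        self.back[dst].add(src)

    def links_from(self, node):
        return sorted(self.forward[node])

    def links_to(self, node):
        # The backlink query the one-way web cannot answer locally.
        return sorted(self.back[node])

store = LinkStore()
store.link("essay.txt", "source.txt")
store.link("review.txt", "source.txt")
# store.links_to("source.txt") now lists both documents citing source.txt
```

The hard part, of course, is not this data structure but getting every page on a decentralized web to cooperate in maintaining it — which is arguably why the one-way link won.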

[1] See this article for more discussion of software architectural weaknesses.