Good science and bad science

Photo: “Albert Einstein”, Oren Jack Turner, Princeton, N.J. (1947)

We are living in an age that is, in some respects, dominated by science. This means we enjoy many improvements to our lives thanks to technology driven by science, but above all it means we can understand the world we live in far better than our ancestors could.

Unfortunately, there's a lot of bad science around. I'm not thinking of the ethical implications of science, but of science done the wrong way. This week we were offered, at the same time, two telling examples of good science and bad science.


The good stuff happened at Gran Sasso and CERN. As an unexpected result of an experiment on neutrinos, scientists apparently detected a beam of particles that traveled faster than the limit c assumed by relativity. They worked for three years, checked the data as thoroughly as they could, and the speed anomaly was still there. So they decided to share their preliminary findings, asking other scientists around the world to validate or invalidate the experiment, possibly by reproducing it. If the result is confirmed, it could have a very deep impact on current physics: the theory of relativity would need some adjustments, possibly even a major rewrite. For this reason the news took the world by storm and, unfortunately, most newspapers, demonstrating their usual ignorance, published a lot of incorrect facts and improper speculation. In contrast, the people involved in the experiment have been very prudent, avoiding any fanfare and stressing instead that the experiment must first be validated; only later will it be appropriate to investigate the consequences:

Despite the large significance of the measurement reported here and the stability of the analysis, the potentially great impact of the result motivates the continuation of our studies in order to investigate possible still unknown systematic effects that could explain the observed anomaly. We deliberately do not attempt any theoretical or phenomenological interpretation of the results.

In short, in this case the experimental data came first and models will follow later: evidence is the master. (In other cases the model comes first and people then search for evidence of it; that's equally good, as long as the fundamental point is respected: evidence is the master.)

I'd be very excited to learn that a broader model than relativity must be searched for. It would open up a new world of discoveries, and there's more fun in finding unexpected evidence that something is wrong than in finding the (n + 1)-th expected piece of evidence that something is right. Still, if I had to bet some money, I'd pick an ending where somebody finds that the precision of the measurements is coarser than initially asserted, and that c has not actually been exceeded. In fact, the time and space measurements involved in the experiment are anything but simple, because of the high degree of precision required. It's also possible that a new phenomenon will be discovered that explains the findings without touching relativity; that would be a good compromise for my personal expectations. Whatever the conclusion, this started as a good science story.

The bad stuff happened at NASA. An old satellite, UARS, ended its life by falling back to Earth. So far, business as usual. Being larger than the average satellite, it was expected that a significant amount of debris would reach the ground instead of being consumed by frictional heating in the atmosphere. Ok, sometimes it happens. Now, NASA gave a statistical prediction of the places where debris might fall, as well as some figures about the chances it could harm people. Reassuring figures, so don't worry. Apart from the fact that newspapers were filled with very different numbers (but you could blame the newspapers again), in the end the satellite departed significantly from the predicted trajectory and the debris fell in an area that had been completely excluded from the initial forecasts. Even worse: NASA had said that, while it was impossible to predict the impact point far in advance, they would be able to issue a precise warning about one hour before the final impact. One day after the fact, NASA still can't tell us the impact time and area with precision. So, where did the precise warning end up? Back to the drawing board, please. This is what happens when people make predictions relying on models without thorough experimental validation. Models without experimental validation are mostly worthless.

Unfortunately, there's a lot of bad science around that simply takes the output of a computer run as if it were the divine word.


A final word of sadness. Both stories have been grossly misunderstood by newspapers, the blogosphere, politicians and the general public. The latter story in particular, with its possible immediate consequences, involved public safety agencies being alerted on inaccurate premises, raising unnecessary warnings, worrying people and possibly wasting money. Be it good or bad science, a huge, unresolved problem of our time is still the lack of good communication between science and the rest of us.