Communicating science versus communicating doubts
August 16, 2011
An interesting post by Chad Orzel at Uncertain Principles discusses the (lack of) promotion of science by scientists and the (unhappy) consequences for public understanding of science. It seems unarguable to me, judging by the results, that we don’t do enough to explain science to the public. Furthermore, what communication is done is often not done particularly well. Orzel’s piece largely focuses on the aspects of the scientific system that discourage scientists from spending their time on public outreach as opposed to “real science”. I think he’s right, but I also think there’s a deeper problem. Science is complicated, and as scientists we mostly spend our time dealing with issues that are subtle and hard to understand, and where it’s difficult or even impossible to have complete confidence that you’ve arrived at the correct solution. On top of that, you’re taught as a scientist that the worst sin you can commit is to gloss over potential alternative interpretations of your data.

As a result, scientists trying to communicate what they do and what they know to non-scientists struggle to find the right tone: is it OK to state, as a simple declarative sentence, that HIV causes AIDS? Or do you have to preface your statement with “according to all the evidence, there’s little doubt that…”? Or do you have to plunge into explaining Koch’s postulates and why they apply in the case of HIV? It depends on your audience and what they’ve already heard about the topic. If you read the audience incorrectly and talk at length about the arguments that have been made that HIV doesn’t cause AIDS (and why they’re wrong), you could mistakenly leave the impression that the arguments on both sides are of equal weight. If you misread your audience in the other direction and don’t address a problem that is really worrying them, you could lose their confidence.
Fortunately topics that carry huge weights of controversial baggage in the public domain, such as HIV or climate change or evolution, are in the minority. But the question of what you should try to communicate is always a difficult one. In a one-on-one situation, you can adjust your explanation to the interests and background of the person you’re talking to. In nearly all other situations, you have to make a guess about what your audience can absorb. You can’t successfully get across everything you know. You have a choice of communicating something — much less than you would like to, perhaps — or communicating nothing. Scientists get very little help in making the judgement of what to leave out. In particular, we get little advice on how to deal with doubts. Maybe you’re 80% convinced that x is the case, but y is also a possibility, and there are other potential explanations that seem to you to be more remote but might not be. At what point are you justified in deciding not to discuss y? How do you deal with the fact that as a scientist you suffer from the professional requirement never to be 100% sure about anything, without giving the impression that you’re not sure about anything?
[When I was in my early 20s I used to write occasional articles for UK newspapers, and I vividly remember struggling with exactly this problem. I think the worst part was imagining that scientists I knew would read the article and believe that I was ignorant about possibility y if I left it out. Even then I knew it was silly to think that way, but the feeling was hard to shake off. I’m sure that many scientists are subconsciously writing for their peers even when they’re ostensibly trying to reach a much wider audience.]
I recently heard an interview on the radio in which a climatologist was being pressed by the interviewer to admit that he couldn’t be certain that climate change was due to human activity. He retorted that as a scientist he had to admit that you can never completely exclude other possible explanations for the data, but that if the question was whether he would bet his house on it, the answer was yes. I think this is an interesting way to distinguish between doubt and disbelief: you need to doubt your conclusions, as a scientist, so that your mind remains open to the possibility that an explanation you haven’t thought of (or think is unlikely) is actually correct. But that doesn’t mean you have to disbelieve your conclusions. There’s an interesting parallel with the standards used in the legal profession: the law has used the standard of “beyond a reasonable doubt” for centuries, distinguishing between absolute certainty on the one hand, and the level of proof that would convince an ordinary person about something important on the other; for example, the level of proof that would lead you to bet your house that it’s real. In law, you need to reach the standard of “beyond a reasonable doubt” in criminal cases, but you can decide civil cases (in which the consequences of losing are usually less severe) on a “preponderance of the evidence”. We can’t use these standards while we’re doing science, but perhaps something similar is appropriate in communicating science. If you’re talking about a finding that has significant implications for public policy, you certainly need to be careful to explain just where the finding came from and what the alternative interpretations might be, but I think you’re also entitled to say that you, personally, are convinced beyond a reasonable doubt (if you are).
If you’re talking about something with less weighty implications, my feeling is that your main job is to convey the excitement and challenge of science in whatever form your audience can absorb, no matter how much you have to leave out in order to get the main points across clearly. The tradeoff between clarity and accuracy can be painful, especially when the details of a topic are dear to your heart, but it must be faced, or you may get nowhere.