Science and Pseudoscience

January 18, 2005

by Laura Dolson

Imagine a group of people who don't know each other, scattered all over the world, each one working on the same new discovery. Perhaps it's a new nutrient - something that we didn't realize was important before. These people may have different ideas about this nutrient and how it works in our bodies. Some may be biochemists, others medical doctors, and others nutritionists. Each may go about the task differently, but they are all committed to finding out the truth. Sometimes they disagree with each other, but they are scientists, and they keep doing their controlled experiments, and bit by bit they understand more. Eventually, if they are successful, we may start hearing about this nutrient in the news.

On the other hand, there may be someone who thinks that they became ill because of something they ate. This food had an additive in it - perhaps a preservative, or a flavoring agent. They become alarmed, and start telling others about their experience. Some other people don't feel too good either, and by gosh, they've eaten processed food with this same substance. Soon a rumor spreads that this additive causes illness. Books and Web sites spring up about the evils of this additive, even though previous experiments haven't uncovered a problem with it. This is called anecdotal evidence. Stories of this kind can be very compelling, but they are not science, although they can give us clues about good questions for further study.

In constructing "proof" for something which originated with anecdotes, the writers of the books and Web sites start to look for reasons that the additive just might be dangerous. They might find some animal studies where mice given huge amounts of the substance had ill effects. They might note that the substance was made in a laboratory, which immediately sounds suspicious to them. They might discover that in developing the additive, atoms of an element that is poisonous were used - conveniently ignoring the fact that many common substances (for example, salt) are made up of elements which would be poisonous if they were in their pure state. And they might cite "references" to "prove" that what they say is true. All of this would be written in a ways which sounds very scientific, but some call this pseudoscience. (Something to think about: it could be said that the process I've just described works BACKWARDS from real science. Can you think what is meant by that?)

How can we tell the difference between science and pseudoscience? It often isn't easy, but it starts with understanding how real science works.

People tend to have two false ideas about science: 1) that science is a neat and orderly process, which is how it is usually presented in school, and 2) that since scientists contradict each other, you can't trust them at all. Both ideas reflect common misunderstandings about the scientific process.

The basis of good science is controlled, repeatable, and repeated experiments. Observations must be very accurate and detailed, and measurements as exact as possible. This all sounds very neat, but in reality new discoveries are rarely made by one person working in isolation through an orderly progression of experiments. When a lot of people are working on a problem from different angles, the process is actually often quite messy, and it seldom proceeds in an orderly way. Every breakthrough is built on a lot of things that didn't work. There has to be lots of testing, lots of experiments to validate other experiments (but usually done slightly differently), and a lot of missteps. Many scientists, often working in places all over the world, must communicate and try to piece information together. The process is at once collaborative (because people have to work together) and often contentious (because different people will have different ideas about how something works). But over time it also produces the strongest results, because ideas that have made it through this refining process tend to be the ones of real value. For this reason, good scientists welcome challenges to their work.

One of the difficulties of communicating this process, which is so often "piece by piece" and "two steps forward, one step back", is that when scientists are in the middle of it, they sometimes don't SOUND as certain or convincing as someone who comes along with the latest cure or the "Food Scare of the Week" and sounds very sure about what they are saying. For this reason, we must be very watchful when we hear about new scientific information. Not only do we need to separate real science from pseudoscience, but even real science may not be very far along in the refining process, and we must realize that any one "bit" of information along the way may be a false lead or "misstep" in the process.

Questions to Ask about Health Studies

When we read about new health information, there will often be descriptions of new studies. These are some of the questions you can ask about studies you read about. Some of them will be easier to answer than others for any given article - they are just things to look for.

1) Was the study published in a scientific journal? This isn't a guarantee of good research, but if it hasn't been published, that's not a good sign. This should be noted in any health article you read. Ideally, any study appearing in a scientific journal meets recognized scientific standards for good research.

2) Who paid for the research? It used to be that most research took place at universities. Recently, however, private companies have become more likely to have the money to finance research, which can be costly. This isn't always a bad thing - some important research has been done by industries. But we must be on the lookout for bias. For example, I've noticed several nutritional studies in the last few years which were financed by the walnut industry. This is not to say that walnuts aren't nutritious (they are). But when I see a study showing health benefits of walnuts that is financed by someone who will profit from the sale of walnuts, I look VERY carefully at the study, and ask more questions, such as: "What is it about walnuts that may have caused the outcome?" and "What other foods similar to walnuts might have similar effects?"

3) Who did the research, and what are their qualifications? Hopefully, the authors of the study have the training and experience necessary to conduct good research. Sometimes, it's not easy to tell. For example, medical doctors receive a lot of education in how to treat diseases, but not much about how to properly conduct research. A medical doctor can get special training in doing research, however.

4) What question was the research trying to answer? Sometimes news reports can muddle what the original question was, or sometimes the question itself wasn't well-defined. We will talk more about this in class.

5) Who were the people in the study? The group of people studied is called a sample. This group comes from a particular population of people. The sample or population being studied can limit how much we can generalize about the results. For example, if a study about exercise had a sample where all the people were Irish women who are professional soccer players, it may not say much about a 60-year-old man from Botswana. More work would have to be done to find out how much can be generalized from the study. One real example of this is that for decades almost all the research on heart disease was done with male subjects. Only recently have we been finding out that women not only have different symptoms of heart attacks, but may have some different factors which contribute to the development of heart disease. (Oops!) Also, you want to be sure that subjects were randomly selected - the short sketch after this list shows one way that can be done. (We will talk more about what this means in class.)

6) How many people were in the study? The larger the sample size, the more likely it is that the result would be repeated in another experiment - the sketch after this list also shows why bigger samples give steadier answers.

7) How well-controlled was the experiment? In other words, what conditions might have affected the outcome of the research, and how did the experimenters address these conditions? How was bias controlled for? How were any questions used in questionnaires validated? Was there a control group? These are a lot of questions! But they are important ones. We will talk about them in class.
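To make the ideas of random selection (question 5) and sample size (question 6) more concrete, here is a minimal sketch in Python. The population of 1,000 people, the "true" average of 100, and the variation of 15 are all made-up numbers chosen just for illustration, not taken from any real study.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch prints the same numbers each run

# Question 5: random selection.
# Pretend these ID numbers stand for 1,000 people we could invite into a study.
population = list(range(1, 1001))
# random.sample picks people without replacement, so everyone in the
# population has an equal chance of being chosen.
study_sample = random.sample(population, 50)
print("First few randomly chosen IDs:", study_sample[:5])

# Question 6: sample size.
# Pretend the "true" average of whatever we measure is 100, with lots of
# person-to-person variation (standard deviation 15) - made-up numbers.
def run_study(sample_size):
    measurements = [random.gauss(100, 15) for _ in range(sample_size)]
    return statistics.mean(measurements)

for n in (10, 100, 1000):
    # Repeat the pretend study five times and see how much the answers jump around.
    results = [round(run_study(n), 1) for _ in range(5)]
    print(f"sample size {n:4}: {results}")
```

With a sample of 10, the five repeated "studies" give averages that bounce around quite a bit; with 1,000 people they all land very close to 100. That is the whole point of question 6: bigger samples make a result more likely to hold up when someone repeats the experiment.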

A word about animal studies. In medical research, animal studies hold an important place. Almost all safety studies on drugs, food additives, and supplements are done on animals before humans. In interpreting animal studies, however, there is a trap that is easy to fall into: people tend to dismiss animal studies which give evidence for things they don't want to believe, and, conversely, to put too much faith in animal studies whose outcome they do want to believe. Beware of anyone who uses animal studies in this way, whether for or against their point.

This week's Web site: Florida Citrus Nutrition & Health Information

Link back to the first article in the series (use it if you need instructions on the assignments)

More Health Articles for Junior High Students