Wednesday, March 7, 2012

Scientific Method continued

There once was a great chef who claimed to have a sure-fire recipe for baking great cakes. The chef constructed his recipe by carefully watching the cake-baking procedures of other great chefs in his native country of Austria. Chef Popper --that was his name-- argued that anyone who followed his recipe would be guaranteed to bake a great cake. Amazingly, Chef Popper's recipe was also extremely simple. Just three instructions: (1) get ingredients, (2) mix ingredients, (3) place in oven. "A monkey could bake a great cake using my recipe," Chef Popper would sometimes say, "so long as it follows the recipe precisely." Soon after, other great chefs from around the world agreed that, indeed, Chef Popper had nailed it. "That is exactly the recipe I use when baking great cakes in my country," other chefs would exclaim, "more or less" (they would sometimes add).

I concocted this little story because I think it helps to illustrate where we seem to be disagreeing. Obviously Chef Popper hadn't provided a recipe, because all of the important details about how to bake cakes are missing. Let me now explain why the HD "method" is just like Chef Popper's "recipe."

Consider a simple example of science in action. Let the hypothesis be that average global temperature has increased over the past century by at least two degrees. Let the prediction that one derives from this hypothesis be that the average distance between tree rings will have increased over the past 100 years in trees that are at least 130 years old. Now, suppose that we go out and core some 130-year-old trees to test this prediction. Lo and behold, we find that there is no change in the average distance between tree rings in our sample: the observation "falsifies" the hypothesis.

Should we reject the hypothesis on these grounds? Of course not!  Instead, we identify a questionable auxiliary assumption that was implicitly used to derive the prediction. Let the auxiliary (background) assumption be that, in our sample, soil quality did not change over the past century.  If soil quality did change, then we would not necessarily expect tree ring distances to increase. This leads us to conduct a new experiment on soil quality in our original sample of trees. And so the process goes.
 
"But that's just the HD method!" you might claim. No, it is not. If you are inclined to say this, you are like the chef who thinks that Chef Popper is describing your recipe. The reason that the process I just described is not the HD method (under any guise) is that it is not a method. Let me explain why.

Deriving a prediction from any hypothesis depends on a huge number of auxiliary hypotheses. Importantly, by "derive" I mean deduce according to logical rules; this matters because we are trying to come up with a method (recipe) that anyone (monkey?) could follow.

To derive, logically, a prediction from a hypothesis, thousands of other (background) assumptions must be in place. For instance, take the derivation: if average temperature has increased by 2 degrees, then there should be an increase in the distance between tree rings in the sample population. To carry out this derivation, one must also assume that (see the sketch after this list):
       
* The sample had roughly stable nutrients for the last century.
* The sample was exposed to stable atmospheric gasses.
* The earth did not undergo a major shift in its orbit around the sun.
* Climate change deniers did not tamper with the evidence.
* Etc.
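To make the logical structure explicit, here is a minimal sketch in Python (the variable names and boolean stand-ins are mine, purely for illustration). The prediction is deduced from the hypothesis *together with* the auxiliaries, so a failed prediction refutes only the conjunction; by itself, it does not say which conjunct to give up.

    # A toy rendering of the point above: the prediction follows from the
    # hypothesis AND the auxiliary assumptions, never from the hypothesis alone.
    hypothesis = True  # "average global temperature rose by at least 2 degrees"
    auxiliaries = {
        "soil quality was stable": True,        # assumed, not independently tested
        "nutrients were roughly stable": True,
        "atmospheric gasses were stable": True,
        "no major shift in the earth's orbit": True,
        "no tampering with the evidence": True,
        # ... indefinitely many more
    }

    # The derivation is deductive: the prediction holds only if every premise does.
    prediction_is_derivable = hypothesis and all(auxiliaries.values())

    # Our observation: no change in average tree ring distance.
    prediction_was_borne_out = False

    if prediction_is_derivable and not prediction_was_borne_out:
        # Logic alone only tells us the conjunction is false. Which premise to
        # question -- the hypothesis or some auxiliary -- is not determined.
        print("At least one premise is false; the 'method' does not say which.")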

Now, some of these auxiliary assumptions strike us as ludicrous (or at least highly unlikely). But that is part of the point I am trying to illustrate. The reason that some of these auxiliary assumptions strike you as silly is that you know better – you have what I referred to as discipline-specific knowledge in my previous post.

Remember that the name of the game is to come up with a method, the analog of a recipe, that allows one to do good science (or distinguish good from bad science). I am claiming here that in order to do good science, one must know which auxiliary assumptions are reasonable to question in the face of evidence that conflicts with one's predictions. The person who tests whether the earth underwent a major orbital shift, for example, is not doing good science. And since the list of auxiliary hypotheses is indefinitely long, one cannot simply test them all.

Tom argues that Popper's contribution (the philosopher, not the chef) was to identify falsifiability as the mark of science. But that can't be right, because any hypothesis can be rendered unfalsifiable with the right sort of tweaking to auxiliary assumptions. One can always cling to a cherished hypothesis in the face of conflicting evidence by raising questions about the auxiliary hypotheses instead. That is why falsifiability is not an adequate criterion for distinguishing good from bad science. 

In practice, of course, we do not let people get away with indefinite amounts of tweaking to background hypotheses. Sometimes a scientist will hold onto a hypothesis in the face of conflicting evidence; other times she will reject it. The important point is that nothing about the HD method enables one to make these judgments. How one makes these judgments depends on the information available about the system in question, the range and nature of the assumptions informing one's study, the prevailing theories, etc. That is what I meant when I said that if you want to know how a science works, look at how it deviates from the HD method. In other words, to understand a science, look at which hypotheses are rejected and which are retained in the face of conflicting evidence.

In my earlier post I called this "discipline-specific" knowledge. Karl seems to have mistaken me for saying that one has to be initiated into a discipline in some formal sense in order to acquire discipline-specific knowledge. He imagines trying to test some sociological hypothesis (I think it was a hypothesis about the tendency for ecologists, unlike most other scientists, to be wedded to the HD model). Nothing is preventing him from trying. Nothing is preventing me from trying to bake a cake with Chef Popper's instructions. Let me know how that works out for you though.
  
The important point is this. To have a method is to have a recipe. Chef Popper's instructions were not a real recipe because they do not provide a set of step-by-step instructions for baking a great cake. Similarly, Philosopher Popper did not provide a scientific method, nor did he provide a set of instructions adequate for doing science well. One of the reasons Philosopher Popper's instructions are inadequate is that they do not tell you which background assumptions are reasonable to question when predictions are not borne out. There is NO discipline-general method for making these decisions.

I should add that the only original bit of philosophy here is the story. Philosophers of science have recognized these limitations in the HD "method" for decades. I reckon that we have just done a poor job of explaining its limitations to other disciplines. I fear that I might not be doing much better.

6 comments:

  1. So the whole argument against the scientific method boils down to ... the definition of "method"? So would all philosophical objections become moot if we call it the scientific "process", as is suggested by the resource Ryan Gregory linked to (http://undsci.berkeley.edu/article/0_0_0/howscienceworks_03)? They also, probably not by accident, contrast the scientific process 'as the opposite of "cookbook" '.

  2. Method vs. process is a more significant change than you might think. "Process" could be purely descriptive ("this is what most science is like") whereas "method" is prescriptive ("you have to do it this way or it isn't good science"). Proponents of The Scientific Method(TM) very often fall into the "it must be done using this method" camp.

  3. I agree with Ryan that there is a difference between "method" and "process" in that the former is normative (how it ought to go) while the latter is more descriptive (how it tends to go).

    But there is a deeper issue here. I actually don't care what word we use or how "method" is defined. I am just trying to be clear about how I am using my terms. We can also frame the issue by removing this word entirely, and asking how you would answer the following questions:

    In advocating the scientific method, do you (a) consider yourself to be giving something akin to a set of instructions about how one should do science, and (b) take yourself to be offering a criterion for distinguishing good science from pseudoscience?

    If the answer to the first one is something like, "yes, I claim that scientists should follow the four steps of the HD model (see my earlier post) and I criticize them for deviating from these," then my earlier objections pertain to you.

    Similarly, if your answer to the second question is something like "yes, falsifiability is a necessary condition for a hypothesis to be scientific," then the objections also pertain.

    Again, use whatever words you want. I just want to avoid the standard equivocation that happens at this stage. Often, when faced with compelling objections to the HD model, advocates will retreat to a weaker position. For example, they will retreat to the position that the HD method is just a heuristic that scientists sometimes follow, or the position that falsifiability is a sufficient rather than a necessary condition for distinguishing science from pseudoscience.

    As long as we agree that if you accept one of these weaker positions, you have to answer "no" to questions a and b, we are all good.

    Replies
    1. Stefan, based on your previous post, I extracted this section: "Sometimes a scientist will hold onto a hypothesis in the face of conflicting evidence; other times she will reject it. The important point is that nothing about the HD method enables one to make these judgments. How one makes these judgments depends on the information available about the system in question, the range and nature of the assumptions informing one's study, the prevailing theories, etc. That is what I meant when I said that if you want to know how a science works, look at how it deviates from the HD method. In other words, to understand a science, look at which hypotheses are rejected and which are retained in the face of conflicting evidence."

      Do I correctly identify the lack of guidance on judging the results of the strict HD method (hypothesis, prediction, test, results, potential rejection) as the major objection against the (strict?) HD method?

      My initial post (http://www.cottenielab.org/2012/03/fallacy-of-our-misunderstand-of.html) thus perhaps pointed out more my own fallacy in understanding the subtleties of the scientific method, which obviously should be addressed before judging other people's understanding. The point of my post was to advocate acknowledging the importance of "context" and "discussion of results" in what I will tentatively call the scientific process. After reading the "Understanding Science" website recommended by Ryan, I now recognize that "context" corresponds to a subset of the "Exploration and Discovery" circle, and "Discussion of Results" to a subset of the "Community Analysis and Feedback" circle. But it is obvious that the "Understanding Science" website provides a much richer and more detailed explanation than my own naive thoughts, so from now on I will refer to that website.

      Do you agree that these other circles (Exploration and Discovery, Community Analysis and Feedback, Benefits and Outcomes) capture some of the processes that essentially judge the results of the strict HD method? I think that the processes in these circles can, for instance, "tell you which background assumptions are reasonable to question when predictions are not borne out".

      I would then argue that the way we actually deal with these compelling objections to the HD model is not to resort to a weaker position, but to strengthen the richer position: the HD method (or Testing Ideas sensu Understanding Science) forms "the heart" of a whole process that includes Exploration and Discovery, Community Analysis and Feedback, and Benefits and Outcomes. Advocating this understanding of the scientific process (a) does give something akin to pseudocode (http://en.wikipedia.org/wiki/Pseudocode) for scientific investigations, and (b) offers a criterion for distinguishing good science from other-than-science (what is the relationship of what you are doing to Testing Ideas?).

      The weaker positions you identified are then not an issue anymore. Not all scientists themselves follow the HD method (as a necessary or sufficient condition), but they create ideas that can be tested, or summarize tested ideas through meta-analyses, or inform public policy based on tested ideas. You seem, implicitly or explicitly, to take that richer position when you wrote "to understand a science, look at which hypotheses are rejected and which are retained in the face of conflicting evidence." Or am I missing something in your discussion?

      And my final question then is to flip the whole debate around. Instead of pointing out the limitations of the HD method, can you define science or scientific method or scientific process without this notion of testing ideas? I am pretty sure philosophers will have tried this, so I am really curious what components they have stressed (and if these components are absent in the "Understanding Science" website).

  4. Hi all, I'll just begin by introducing myself – I'm a colleague/friend of Karl's (can I be so presumptuous, Karl?). A short email exchange sent me to your site, and I stumbled on this terrific discussion.
    I'll begin by saying that I agree with Karl that these comments betray a misunderstanding about the scientific method – the first complaint ignores a fundamental piece of the scientific method (the fishing part) and the latter two complaints confound problems in the method with problems in how people use the method. The fact that a hammer isn't good for spreading butter on toast doesn't make it a bad tool. Or, maybe more analogous, the fact that a hammer occasionally gets used to crush a husband's skull for leaving the top off the toothpaste (he should have seen this coming) doesn't make it a bad tool.
    Things really got interesting, though, when the issue of ‘What is good science?’ got raised. I’ve heard scientists say that any science that doesn’t test hypotheses is bad science – tell that to the folks who sequenced the human genome. Or that if you test a hypothesis you are doing good science – we’ve all seen trivial and uninteresting hypotheses tested so we know that ain’t true.
    However, I do think it’s possible to identify good science and here’s how – any science that allows me to make better predictions about things I care about than I made before the science was done is good science. Here’s my argument,
    1. The primary (sole?) objective of science is to increase our understanding of the natural world.
    2. There is one way and only one way to demonstrate understanding – by making better predictions than we would make by chance.
    3. This implies that our understanding of the world can be measured by the precision and accuracy of our predictions and so, science that improves the precision/accuracy of our predictions meets the objectives of science.
    Of course, increased understanding (i.e. better predictive ability) isn’t the only factor – it also matters how much we care about the better predictions – that is, how important is the study area (will it save lives? Uncover the mysteries of the origins of the universe? Make money?). So, I would say the equation for measuring the quality of science is (How much did the work increase the accuracy/precision of our predictions?) x (How much do we care about the research topic?). This may seem like ‘pie in the sky’ but, I believe, both of these are quantifiable even if only coarsely so.
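    To make the arithmetic concrete, here is a rough sketch of the score in Python (the numbers and the 0-to-1 "care" scale are invented purely for illustration, not a calibrated measure):

        # Toy version of the proposed quality measure:
        # quality = (gain in predictive accuracy) x (how much we care about the topic)
        def science_quality(accuracy_before, accuracy_after, care):
            """Gain in predictive accuracy, weighted by a 0-to-1 'care' factor."""
            return round((accuracy_after - accuracy_before) * care, 3)

        # Hypothetical study A: modest gain on a topic we care a lot about.
        print(science_quality(accuracy_before=0.60, accuracy_after=0.75, care=0.9))  # 0.135

        # Hypothetical study B: large gain on a topic almost nobody cares about.
        print(science_quality(accuracy_before=0.50, accuracy_after=0.90, care=0.1))  # 0.04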
    So, if you do science that develops a model of how the world (or a piece of the world) works that allows me to make better predictions about something I care about than I did before, you've done good science.
    If you do science that a. doesn’t allow me to make better predictions but b. does allow me to exclude a variable or variables that could have reasonably been hypothesized to be in the model, you’ve done useful science (I’m not sure it hits the ‘good’ standard).
    If you do work that a. doesn’t allow me to make better predictions about something I care about, b. doesn’t allow me to exclude variables from the model, or c. only allows me to exclude variables that were very unlikely to be important, then you’ve done bad science. Best.

    Jeff Houlahan

  5. I want to avoid the appearance of engaging in a mere terminological dispute here (there are substantive issues that we are getting at). So let me try to respond to Karl without using the M-word, since we seem to disagree about what it should mean, and that *is* a terminological disagreement.

    Think of how most of us learned to do long division. We learned a series of precise steps. Follow those steps and you'll get the right answer.

    Now, compare this to the way that some people learn to trade securities (stocks). In a how-to guide for stock trading one encounters all sorts of pointers. For example, "choose stocks with a low price/earnings ratio," "only buy stocks that pay dividends," or perhaps the most mundane advice: "buy low, sell high".

    I am claiming that the HD method is more like those stock-buying tips than it is like the procedure for doing long division. I have offered fairly precise arguments for this, I think, showing that one cannot proceed in science without knowing which auxiliary assumptions to test in the face of conflicting evidence. At most, we can offer advice like, "choose the one that is most questionable." But that is about as helpful as "buy low, sell high."
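    To make the contrast vivid, here is long division written out as a short sketch of code (integers only; the function is mine, purely illustrative). Notice that no step calls for judgment. Nothing analogous can be written down for "choose the most questionable auxiliary assumption."

        # Long division, digit by digit: a genuine recipe with no judgment calls.
        def long_division(dividend, divisor):
            quotient_digits = []
            remainder = 0
            for digit in str(dividend):                            # bring down the next digit
                remainder = remainder * 10 + int(digit)
                quotient_digits.append(str(remainder // divisor))  # how many times it goes in
                remainder = remainder % divisor                    # carry the remainder
            return int("".join(quotient_digits)), remainder

        print(long_division(1234, 7))  # (176, 2), since 7 * 176 + 2 = 1234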

    So what about Karl's suggestion that the HD model is a pseudocode? Wikipedia defines pseudocode as a high-level description of an algorithm. I see where he is going, but no, I think that the analogy is misleading.

    The reason that the HD model is not a pseudocode is that, for a pseudocode, there *is* an underlying algorithm. For reasons I won't go into now, it can be demonstrated that no such algorithm is possible for making inductive inferences. But if that seems contentious to you, let me try another tack.

    Scientists certainly do not follow an algorithm when deciding which assumptions to question. They make fallible, educated guesses. This involves relying on the wealth of knowledge and expertise that a scientist acquires over a lifetime.

    In this respect, learning to do science is like learning to trade securities. One acquires a body of knowledge that will hopefully improve one's decision making. But in actuality this process is prone to error. If it were possible to create an algorithm for trading stocks, someone would probably have done so. Likewise, I think that scientific decision making is too difficult (multifactorial and dynamic) to automate.

    So my argument is this:

    A pseudocode is a simple (compressed) description of a more detailed algorithm. Good science does not follow an algorithm at any level. Therefore, whatever the HD model might be, it is not a pseudocode.
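    Put differently, try to write the HD model out as if it were pseudocode, as in this sketch (the function names are mine and the example is deliberately toy-like). Every step expands into something mechanical except one, and that one step is exactly where the discipline-specific judgment lives.

        # The HD cycle written as if it were pseudocode. All but one step is mechanical.
        def derive(hypothesis, auxiliaries):
            # Deduction is mechanical: the prediction follows from hypothesis + auxiliaries.
            return "tree ring spacing increased" if hypothesis and all(auxiliaries) else None

        def choose_premise_to_question(hypothesis, auxiliaries):
            # The step a pseudocode would have to expand -- but there is no algorithm to expand into.
            raise NotImplementedError("This step calls for a fallible, educated guess, not a recipe.")

        def hd_cycle(hypothesis, auxiliaries, observation):
            prediction = derive(hypothesis, auxiliaries)   # mechanical
            if observation == prediction:                  # mechanical
                return "retain the hypothesis, for now"
            return choose_premise_to_question(hypothesis, auxiliaries)

        # A failed prediction drops us straight into the non-algorithmic step:
        try:
            hd_cycle(hypothesis=True, auxiliaries=[True, True, True],
                     observation="no change in tree ring spacing")
        except NotImplementedError as judgment_call:
            print(judgment_call)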

    I will respond to Jeff's interesting argument in my next comment.
