Kaleigh Eichel, a student in Knowledge Integration, did research in Churchill for two summers. Alongside her research, she created a website translating some of the Churchill research for the general public. One of the episodes features the projects done by Amanda and Brittany:
Tuesday, March 27, 2012
Tuesday, March 13, 2012
... outside in the Arboretum, with our First Year Seminar class. The (only?) advantage of an early spring this year.
Although Ingrid would probably point out that our discussions, even in the Arboretum,
"... had been sitting on the land, instead of having roots within it." (Laura Piersol, Canadian Journal of Environmental Education, 2010, 15:198-209)

But how to design a university course to have its roots within the land? It is easier to do this in the context of a field course, but with a Monday-Wednesday 8:30-10 am lecture slot? I will have to give this some more thought.
Wednesday, March 7, 2012
There once was a great chef who claimed to have a sure-fire recipe for baking great cakes. The chef constructed his recipe by carefully watching the cake-baking procedures of other great chefs in his native country of Austria. Chef Popper --that was his name-- argued that anyone who followed his recipe would be guaranteed to bake a great cake. Amazingly, Chef Popper’s recipe was also extremely simple. Just three instructions: (1) get ingredients, (2) mix ingredients, (3) place in oven. “A monkey could bake a great cake using my recipe,” Chef Popper would sometimes say, “so long as it follows the recipe precisely.” Soon after, great chefs from around the world agreed that, indeed, Chef Popper had nailed it. “That is exactly the recipe I use when baking great cakes in my country,” other chefs would exclaim, “more or less” (they would sometimes add).
I concocted this little story because I think it helps to illustrate where we seem to be disagreeing. Obviously Chef Popper hadn’t provided a recipe, because all of the important details about how to bake cakes are missing. Let me now explain why the HD “method” is just like Chef Popper’s “recipe.”
Consider a simple example of science in action. Let the hypothesis be that average global temperature has increased over the past century by at least two degrees. Let the prediction that one derives from this hypothesis be that the average space between tree rings will have increased over the past 100 years in trees that are at least 130 years old. Now, suppose that we go out and core some 130-year-old trees to test this prediction. Lo and behold, we find that there is no change in the average distance between tree rings in our sample: the observation “falsifies” the hypothesis.
Should we reject the hypothesis on these grounds? Of course not! Instead, we identify a questionable auxiliary assumption that was implicitly used to derive the prediction. Let the auxiliary (background) assumption be that, in our sample, soil quality did not change over the past century. If soil quality did change, then we would not necessarily expect tree ring distances to increase. This leads us to conduct a new experiment on soil quality in our original sample of trees. And so the process goes.
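The logical structure of this example can be made explicit. Strictly speaking, a failed prediction refutes not the hypothesis alone but the conjunction of the hypothesis with its auxiliary assumptions (here, for instance, the assumption about stable soil quality). A sketch of the schema:

```latex
% Naive falsification (modus tollens):
%   H -> P,  not-P,  therefore not-H.
% But P is only derivable from H together with auxiliary
% assumptions A_1, ..., A_n, so what the failed prediction
% actually refutes is the whole conjunction:
\[
(H \wedge A_1 \wedge \dots \wedge A_n) \rightarrow P,
\qquad \neg P
\;\;\therefore\;\; \neg (H \wedge A_1 \wedge \dots \wedge A_n)
\]
% That is, at least one of H, A_1, ..., A_n is false --
% and logic alone does not tell us which one.
```

Deciding whether to blame the hypothesis or one of the auxiliaries is exactly the step that, as argued below, no discipline-general method can settle.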
“But that’s just the HD method!” you might claim. No, it is not. If you are inclined to say this, you are like the chef who thinks that Chef Popper is describing your recipe. The reason that the process I just described is not the HD method (under any guise) is because it is not a method. Let me explain why.
Any hypothesis depends on a huge number of auxiliary hypotheses in order to derive a prediction. Importantly, by “derive” I mean deduce according to logical rules. This is important because we are trying to come up with a method (recipe) that anyone (monkey?) could follow.
To derive, logically, a prediction from a hypothesis, thousands of other (background) assumptions must be in place. For instance, take the derivation: if average temperature has increased by 2 degrees, then there should be an increase in the distance between tree rings in the sample population. To make this derivation, one must also assume that:
* The sample had roughly stable nutrients for the last century.
* The sample was exposed to stable atmospheric gasses.
* The earth did not undergo a major shift in rotation around the sun.
* Climate change deniers did not tamper with the evidence.
Now, some of these auxiliary assumptions strike us as ludicrous (or at least highly unlikely). But that is part of the point that I am trying to illustrate. The reason that some of these auxiliary assumptions strike you as silly is because you know better – you have what I referred to as discipline specific knowledge in my previous post.
Remember that the name of the game is to come up with a method, the analog of a recipe, that allows one to do good science (or distinguish good from bad science). I am claiming here that in order to do good science, one must know which auxiliary assumptions are reasonable to question in the face of evidence that conflicts with one’s predictions. The person who tests whether the earth underwent a major shift in rotation, for example, is not doing good science. Since the list of auxiliary hypotheses is indefinitely long, one cannot simply test them all.
Tom argues that Popper's contribution (the philosopher, not the chef) was to identify falsifiability as the mark of science. But that can't be right, because any hypothesis can be rendered unfalsifiable with the right sort of tweaking to auxiliary assumptions. One can always cling to a cherished hypothesis in the face of conflicting evidence by raising questions about the auxiliary hypotheses instead. That is why falsifiability is not an adequate criterion for distinguishing good from bad science.
In practice, of course, we do not let people get away with indefinite amounts of tweaking to background hypotheses. Sometimes a scientist will hold onto a hypothesis in the face of conflicting evidence; other times she will reject it. The important point is that nothing about the HD method enables one to make these judgments. How one makes these judgments depends on the information available about the system in question, the range and nature of the assumptions informing one's study, the prevailing theories, etc. That is what I meant when I said that if you want to know how a science works, look at how it deviates from the HD method. In other words, to understand a science, look at which hypotheses are rejected and which are retained in the face of conflicting evidence.
In my earlier post I called this “discipline-specific” knowledge. Karl seems to have mistaken me for saying that one has to be initiated into a discipline in some formal sense in order to acquire discipline-specific knowledge. He imagines trying to test some sociological hypothesis (I think it was a hypothesis about the tendency for ecologists, unlike most other scientists, to be wedded to the HD model). Nothing is preventing him from trying. Nothing is preventing me from trying to bake a cake with Chef Popper’s instructions. Let me know how that works out for you, though.
The important point is this. To have a method is to have a recipe. Chef Popper’s instructions were not a real recipe because they do not provide a set of step-by-step instructions for baking a great cake. Similarly, Philosopher Popper did not provide a scientific method; he did not provide a set of instructions that are adequate for doing science well. One of the reasons why Philosopher Popper’s instructions are inadequate is that they do not tell you which background assumptions are reasonable to question when predictions are not borne out. There is NO discipline-general method for making these decisions.
I should add that the only original bit of philosophy here is the story. Philosophers of science have recognized these limitations in the HD "method" for decades. I reckon that we have just done a poor job of explaining its limitations to other disciplines. I fear that I might not be doing much better.
Friday, March 2, 2012
Ryan Norris via Tom Nudds sent me a link to this web site, and I think all three of us had the same "What the ..." reaction after reading the entry by Irene Pepperberg, titled "The Fallacy of Hypothesis Testing":
"I was trained, as a chemist, to use the classic scientific method: Devise a testable hypothesis, and then design an experiment to see if the hypothesis is correct or not. And I was told that this method is equally valid for the social sciences. I've changed my mind that this is the best way to do science. I have three reasons for this change of mind."

These three reasons are:
- the importance of observations, without explicitly testing hypotheses
- testable hypotheses are not interesting
- the scientific method often leads to proving a hypothesis, not testing it
These three reasons point out some major misunderstandings of the scientific method (see this post, or this post):
- the context leading up to the hypothesis-prediction-test (the question, background information, etc.) forms an essential part of the scientific method
- the discussion of results (the information increase that leads to the next cycle) also forms an essential part of the scientific method. This is exactly what she herself points out: "... the exciting part is a series of interrelated questions that arise and expand almost indefinitely".
- this is a general misunderstanding that Dr Pepperberg correctly identifies. But it is not a reason to dismiss the method; rather, it is a call for better education.
We always tell our students that a full understanding of the scientific method, despite its apparent simplicity, is actually more challenging than it looks. And now we can point them to this article, written by a successful scientist at one of the best universities in the world (Harvard) who nevertheless has a very limited idea of the scientific method.