Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that "the scientific method" might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there's sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.



Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I'm kind of intrigued by is that part of Twitter culture and what we're doing with Pub-Style Science is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that's not just a white-bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there's a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn't. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There's actually more than one.

Dr. Isis: We're not all the same.

Janet Stemwedel: I think even the claim that there's a single, easily described scientific method is kind of a tricky one. One of the things I'm interested in -- one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy -- is that, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who're doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they're using very different experimental tools and conceptual tools and methodological tools to try to describe what's going on there. There are ways in which, when you cross a disciplinary boundary -- and sometimes, when you leave your research group and go to another research group in the same department -- what you see on the ground as the method you're using to build knowledge shifts.

In some ways, I'm inclined to say it's an empirical question whether there's a single unified scientific method, or whether we've got something more like a family resemblance kind of thing going on. There's enough overlap in the tools that we're going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that's still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it's been practiced, has a history that is also built on certain kinds of power structures. So it's not even enough to say, let's bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)

Michael Tomasson: You've got to expand on that. What do you mean? There's only one good -- there's good science and there's bad science. I don't understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that's bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There's no style choices in there at all?

Michael Tomasson: That's what I'm throwing out there. I'm trying to explore that. I'm going to take poor Casey over here, we're going to stamp him, turn him into a white guy in a tie and he's going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about "Cosmos." As they look back on the show, at all the historical scientists, who, historically has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it's very easy to sit and think and philosophize about how we do science when it's not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that's very much changed.

It was really interesting to me when you suggested this as a topic because recently I've become very pragmatic about doing science. I think I'm taking the "Friday" approach to science -- you know, the movie? Danielle Lee wants to remake "Friday" as a science movie. Right now, messing with my money is like messing with my emotions. I'm about writing things in a way to get them funded and writing things in a way that gets them published, and it's cute to think that we might change the game or make it better, but there's also a pragmatic side to it. It's a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as "science is broken". Well, you don't understand the game yet.

Janet Stemwedel: Although it's complicated, I'd say. It is a human endeavor. Forgetting it's a human endeavor is a road to nothing but pain. And you've got the knowledge-building thing going on, and that's certainly at the center of science, but you've also got the getting credit for the awesome things you've done and getting paid so you can stay in the pool and keep building knowledge, because we haven't got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you've got power gradients. So, there may well be principled arguments from the point of view of what's going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I'd argue that's one of the things that philosophy of science can contribute -- I've tried to contribute that as part of my day job. But the first step is, you've got to start talking about the knowledge-building as an activity that's conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I'm concerned about is I'd hoped you'd teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student's point of view, whatever lab you're in, you've got to figure out whatever the boss wants, and that's what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn't that what you're doing? You're going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That's the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I'm just teaching my people how to do Tomasson science, and they're going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let's put out there that a unified scientific method that's accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we're engaged in science to try to build knowledge of the world is that there is a world that we share. We're trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)

Michael Tomasson: Yes!

Janet Stemwedel: So, we're looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd -- if you like, they're giving sort of the idealized model of how we could do that. It's analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it's not that simple, it's more complicated.

And that's what philosophers of science do, is we mess up that model. We say, it can't possibly be that simple, because real human beings couldn't drive that and make it work as well as it does. So there must be something more complicated going on; let's figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, you find a reasonable amount of diversity in terms of what's the right attitude to have towards our theories -- if we've got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they're better at churning out predictions than the other theories we've considered so far? We have places where you can start to look at how the methodologies embraced by Western primatologists compare to those of Japanese primatologists -- where they differ on what's the right thing to do to get the knowledge -- you could say, it's not the case that one side is right and one side is wrong, we've located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It's not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That's like staying with the intro chem picture of the orbitals and saying, that's all I need to know.

(32:20)

Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this "I don't think that word means what you think it means" reaction. And I realize that you're a physician and not a real scientist, but "the scientific method" is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people's labs and learning to do science is completely tangential to the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson's lab and learn to label my reagents with the wrong labels because they're a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium's lab and we all go marathon training at 3 o'clock in the afternoon, that's the culture of science, that's not the scientific method.

(34:05)

Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that's where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You're trying to study monkeys. You want to see how they're behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what's going on.

(35:20)

And there's this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you're trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, "provisioning" them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they're bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there's a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? 'Cause you can't get both. They're both using the scientific method, but they're making different choices about the kind of knowledge they're building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, "Those guys don't know how to do science! What the hell?"

(36:40)

Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson's question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into specifically what kinds of differences, both in the ways that they're organized and in the ways that arguments get levied, what counts as "true" or "false," what counts as a hypothesis, how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing -- like Janet was saying, it's a machine, you put the stuff in, give it a spin, and get the stuff out -- can you really separate something called "the scientific method" from the people who do it?

I've taught general chemistry, and one of the first things we do is to define science, which is always exciting. It's like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It's a body of knowledge and a process -- it's two different things, when people say science. We always tell students, it's a body of knowledge but it's also a process, a thing you can do. I'm not saying it's [the only] good answer, but it's the answer we give students in class.

Then, of course, the idea is, what's the scientific method? And everyone's got some sort of a figure. In the gen chem book, in chapter 1, it's always going to be in there. And it makes it seem like we've all agreed at some point, maybe taken a vote, I don't know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there's the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It's awesome! But that's what we tell people. It's kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)

Janet Stemwedel: It can get more complicated than that. There's also this question of: is the scientific method -- whatever the heck we do to build reliable knowledge about the world using science -- is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don't need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn't this part of what a lot of scientists are doing, that it shakes out some of the human problems that come with it? It's a messy process and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it's supposed to be in the air, and the water from being poisoned when it's not supposed to be poisoned. The science does a pretty good job building the knowledge. I can't always explain why it's so good at that, but I believe that it does. And I think you're right, there's something -- certainly in peer review, there's this assumption that the reason we play with others here is that they help us catch the thing we're missing, they help us to make sure the experiments really are reproducible, to make sure that we're not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It's not just because scientists shouldn't be jerks to people who could be helping them build the knowledge. It's that, if you've got a more diverse community of people building the knowledge, you up the chances that you're going to locate the unconscious biases that are sneaking into the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion is based on my reading of Pamela J. Asquith, "Japanese science and western hegemonies: primatology and the limits set to questions," Naked Science: Anthropological Inquiry into Boundaries, Power, and Knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify'd version of the simultaneous Twitter conversation.