Experiment Might Take Thousands of Volunteers and Decades of Effort
In "Too Hard for Science?" I interview scientists about ideas they would love to explore that they don't think could be investigated. For instance, they might involve machines beyond the realm of possibility, such as particle accelerators as big as the sun, or they might be completely unethical, such as lethal experiments involving people. This feature aims to look at the impossible dreams, the seemingly intractable problems in science. However, the question mark at the end of "Too Hard for Science?" suggests that nothing might be impossible.
The scientist: Christopher Chabris, assistant professor of psychology at Union College, research affiliate of the MIT Center for Collective Intelligence, and co-author of "The Invisible Gorilla: How Our Intuitions Deceive Us," out in paperback on June 7.
The idea: The concept that 10,000 hours of practice can make one an expert in a field — an idea developed by psychologist Anders Ericsson and popularized by Malcolm Gladwell in his book "Outliers" — has become prevalent enough to prompt one-time commercial photographer Dan McLaughlin to quit his job and try to become a professional golfer. But which is more important for becoming an expert — practice or talent?
"The prevailing theory in cognitive psychology, going back to Adriaan de Groot, who studied chess grandmasters, and later to Anders Ericsson, who studied other domains such as music and sports, is that expertise is all a matter of how much one practices, and that there's no such thing as a particular talent that will make it easier for someone to become an expert," Chabris says. "If that's true, that's a positive thing — there's nothing holding me back from, say, becoming a professional basketball player."
"However, a lot of people certainly find this idea hard to believe, and if you talk with coaches who teach chess to kids, they do think some of them have more talent, and some have less," Chabris notes. "The practice theory clashes with intuition, and while scientists rely on data rather than intuition, when intuition clashes with the data that much, perhaps more experiments are in order." Moreover, "the fact that people who are experts have practiced more than people who are novices doesn't prove that the practice, by itself, caused the expertise."
The ideal experiment to address this question would have thousands of volunteers each spend 10,000 hours practicing a randomly assigned skill to see if they indeed become experts afterward. "The results could be very, very important," he says. "The results could really impact the whole way we think about education."
The problem: Recruiting a volunteer willing to practice a skill for 10,000 hours is a challenge unto itself. Enlisting thousands in a definitive experiment that accounted for the myriad differences between people that might influence whether they become experts would be even harder, not to mention potentially very expensive — getting, say, 2,000 volunteers to practice a skill for 10,000 hours at $10 per hour would cost $200 million, Chabris notes.
Randomly assigning volunteers might not go over so well either. "You're volunteering 10,000 hours of your life, and imagine a situation where you're not happy with what you've been assigned — 'Congratulations, Mr. Smith, you've been selected to become a master purchasing manager,'" Chabris says.
Age is potentially a major confounding factor. "There are arguments that the younger you are, the more easily your brain soaks up skills, and I'm not sure in our society whether parents would really go along with randomly assigning kids to learn a skill," Chabris says. Other challenges would include how to judge whether a participant was an expert or not, and ensuring that the quality of teaching remained consistent for all volunteers.
The greatest concern might be neglecting a potentially key contributing factor when analyzing whether talent or practice was more important. "You could look at 100 different elements — personality, cognitive ability, socioeconomic considerations, diet, genes and so on — but maybe there's something you miss that made all the difference," Chabris says. Given the scale of an experiment such as this, it would be hard to try again and account for anything researchers missed the first time.
The solution? "As far as I can tell, the field is at an impasse," Chabris says. "On the one hand, you have Ericsson and his associates thinking they've proved their case and getting it presented in 'Freakonomics' and other books as the gospel truth, and on the other you have people equally convinced that talent is key."
Still, there are domains where the dispute could be resolved, such as chess, "where you can measure levels of success with an objective rating system, and where there are fairly objective ways to evaluate whether a move is good or bad," Chabris says. In addition, "you can mitigate the extent to which the quality of instruction and coaching comes into play by developing manuals that teachers would follow to educate their chess students."
"I might search for schools where one could randomly assign students to a chess education program and other control conditions, and assess them extensively before and after," he says. "We might not get 10,000 hours, but we might be able to come close, for an imitation of the ideal experiment."
Image of Christopher Chabris from his Web page
If you have a scientist you would like to recommend I question, or you are a scientist with an idea you think might be too hard for science, e-mail me at email@example.com
Follow Too Hard for Science? on Twitter by keeping track of the #2hard4sci hashtag.
About the Author: Charles Q. Choi is a frequent contributor to Scientific American. His work has also appeared in The New York Times, Science, Nature, Wired, and LiveScience, among others. In his spare time he has traveled to all seven continents. Follow him on Twitter @cqchoi.
The views expressed are those of the author and are not necessarily those of Scientific American.