
Citizen scientists decode meaning, memory and laughter

Citizen Science – projects in which professional scientists collaborate with teams of enthusiastic amateurs – is big these days. It has been great for layfolk interested in science, who can now not just read about science but participate in it. It has been great for scientists, with numerous mega-successes like Zooniverse and Foldit. Citizen Science has also been a boon for science writing, since readers can literally engage with the story.

However, the Citizen Science bonanza has not contributed to all scientific disciplines equally, with many projects in zoology and astronomy but fewer in physics and the science of the mind. It is hardly surprising that there have been few Citizen Science projects in particle physics (not many people have accelerators in their back yards!), but the fact that there has been very little Citizen Science of the mind is perhaps more remarkable.

It’s not like cognitive scientists haven’t heard about the Internet. In fact, we were among the very first to use the Internet for research, and have been posting experiments and surveys on the Web since the early '90s. At this point many hundreds if not thousands of papers have been published using data collected online. This data deluge has resulted in many discoveries. For instance, by testing thousands of people of all ages, researchers are making rapid progress in piecing together how the mind changes as we age. Testing online makes it easier for researchers to work with distant populations, allowing us, for instance, to better understand the similarities and differences between languages. Right now, I am running studies in Korean and Russian from the comfort of my office in Cambridge.


While Internet laboratories do represent a kind of Citizen Science – they open up the laboratory to anyone who wants to participate as a research subject – they do not provide opportunities to participate as a researcher rather than merely a research subject. Part of the issue is methodological: For many projects, we worry that knowing the hypothesis could affect people’s responses. Part of the issue may just be habit: We are used to seeing laypeople as possible participants, not collaborators.

However, in recent years, several research groups – including my own – have found ways to incorporate Citizen Science into our study of the human mind.

The Meanings of Words

VerbCorner – my own project – is focused on determining the meanings of words. You might think this is a solved problem: Just look them up in a dictionary! However, the problem with dictionaries is that they define words in terms of other words, which themselves are defined in terms of other words, and so on without end. A Swahili dictionary won’t help you much if you don’t know any Swahili. At the same time, you might worry that words are ultimately undefinable because their meanings depend on context: What counts as being tall depends on whether we are talking about kindergarteners, basketball players, or tales. That is true, but it is in some sense an orthogonal point: Humans clearly know what the meanings of words are, whether or not these meanings are exact or fuzzy. It is the scientist’s job to determine what humans know and how we know it.

What has stymied research in this area is not so much choosing between Webster’s and Oxford or deciding whether meanings are exact or fuzzy – though that is certainly part of it – but the sheer scope of the problem. There are a lot of words in English alone. So even if one had a good theory of word meaning, it would take the enterprising scientist a very long time to go through each word and characterize it according to that theory. However, the project is very doable for a large team of scientists, which is where Citizen Science comes in.

In the VerbCorner project, volunteer researchers help characterize verbs according to one mainstream theory of word meaning, popularized by the linguists Beth Levin and Ray Jackendoff and by the psychologist Steven Pinker, among others. On this theory, the meanings of many words – especially verbs – can be divided into a “core” meaning and peripheral aspects of meaning. Core components for verbs include whether the verb describes an action that was done on purpose (John punched the wall) as opposed to by accident (John tripped on the log), or whether the action requires physical contact (Sally hugged Mary) as opposed to not (Sally greeted Mary). Interestingly, these core parts of meaning overlap considerably with the concepts (intention, contact, etc.) that developmental psychologists believe are among the first concepts babies understand. In contrast, there are idiosyncratic, non-core aspects of meaning, such as the differences in the physics of running vs. walking.

Our professional research team has designed intuitive tests for different components of core meaning. On the VerbCorner website, amateur scientists can help go through English verbs and determine, for each word, whether it contains that component of meaning or not. In order to make the tasks more compelling, the website is gamified, with badges, points, and fanciful backstories for each task. But the real reward is the science.
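For readers who like to see the idea concretely, here is a minimal sketch of how binary core-meaning judgments of this kind might be tabulated. It assumes a toy annotation format in Python; the feature names and verb entries are illustrative examples, not VerbCorner’s actual schema or data.

```python
# Illustrative sketch only: the features and verbs below are toy examples
# inspired by the theory described above, not VerbCorner's real annotations.

# Each verb is annotated for binary "core" meaning components.
annotations = {
    "punch": {"intentional": True,  "physical_contact": True},
    "trip":  {"intentional": False, "physical_contact": True},
    "hug":   {"intentional": True,  "physical_contact": True},
    "greet": {"intentional": True,  "physical_contact": False},
}

def verbs_with(feature, value=True):
    """Return all verbs whose annotation for `feature` matches `value`."""
    return sorted(v for v, feats in annotations.items() if feats.get(feature) is value)

print(verbs_with("physical_contact"))          # ['hug', 'punch', 'trip']
print(verbs_with("intentional", value=False))  # ['trip']
```

With tens of thousands of verbs, filling in a table like this is exactly the kind of labor that a large team of volunteers can share.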

Memory for Words

Small World of Words (University of Leuven) is also about words, but it concentrates on how words are represented and stored in the mind rather than on their meanings per se. Citizen Scientists are presented with a series of words, such as climate, pitch, or bank. For each word, they are asked to think of the first three other words that come to mind. From this, scientists will be able to study the associations between words in ways that were not possible before. Previous work has tended either to rely on small studies with small numbers of words or to employ thesauruses and data mined from the web, such as the Google Books Ngrams project – data that are useful but at best an imperfect estimate of what we care about, which is how people actually think.

Such data have many purposes. On the technical side, researchers are interested in using these data to understand how words are stored in memory. We can also use such data to study which concepts people associate with which others: associations which can change over time or across cultures. For instance, 20 years ago, most people’s first response to climate would be weather. Now, around 40% say change. The most common association to pitch for Americans is baseball, whereas for the British, it’s football.

These data also allow us to suss out the different meanings of the same word: bank is associated with both money and river, which are not associated with one another. This, of course, one could look up in a dictionary. What one cannot look up is which meaning is dominant. As you can see in this graph, which I made using Small World of Words’ fantastic visualization tools, the “financial institution” meaning is far more salient than the “side of a river” meaning. (The tools are good fun, but if you are planning on participating in the project, please do so *before* spending too much time with the visualization tools.)
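As a rough illustration of how association norms can reveal a dominant meaning, here is a small Python sketch using made-up responses for the cue word bank; the counts are hypothetical and do not come from Small World of Words.

```python
from collections import Counter

# Hypothetical association responses for the cue word "bank"; the real
# Small World of Words data come from thousands of participants.
responses = ["money", "river", "money", "cash", "loan", "money", "river", "account"]

counts = Counter(responses)
print(counts.most_common(3))  # e.g. [('money', 3), ('river', 2), ('cash', 1)]

# The relative frequency of sense-related associates gives a rough measure
# of which meaning of "bank" is dominant.
financial = {"money", "cash", "loan", "account"}
share_financial = sum(counts[w] for w in financial) / sum(counts.values())
print(f"financial-sense share: {share_financial:.0%}")  # 75%
```

Scaled up to thousands of cue words and participants, the same counting logic yields the kind of association network the project’s visualization tools display.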

So far, the project has gotten through 7,000 words, with more coming. There is also a slightly older project in Dutch, which has worked through 12,000 words and already resulted in several publications, and projects in Spanish, Japanese, and Mandarin are in production.

The origin of laughter

Laughter has origins that predate humanity. Other great apes laugh, and some other animals may as well. The Baby Laughter Project (University of London) looks at the developmental origin of laughter: what makes babies laugh and why? At a superficial level, the causes of baby laughter are very different from those of adults: Babies rarely find the Colbert Report or Saturday Night Live all that funny.

The researchers behind the Baby Laughter Project are asking Citizen Scientists to help out with cataloging the various causes of baby laughter. Parents with young children can fill out a survey about their children’s laughter, and anyone who has heard a baby they know laugh can file a field report, describing when, where, and why the baby laughed. In addition, although this does not appear to have any scientific purpose, they run a video blog of laughing babies, including one baby who appears to be having a very funny dream.

As Casper Addyman, one of the researchers behind the project, explains in this interview, what babies laugh at will depend in part on what they understand about the world. What do they find surprising or unusual? As such, tracking the contexts of baby laughter provides a new window into babies’ developing minds.

The future of Citizen Science of the Mind

If these projects succeed, they have the potential to have a large impact on the field, both directly and by inspiring other projects, just as has happened in Artificial Intelligence. One of the earliest such projects, launched by the MIT Media Lab’s Open Mind initiative in 1999, used Internet volunteers to compile a massive database of commonsense facts (“a coat is used for keeping warm”, “the sun is very hot”, and “the last thing you do when you cook dinner is wash your dishes”) in order to train computers. The resulting resource – released as ConceptNet – has been used in hundreds of Artificial Intelligence projects, on everything from reasoning to humor interpretation to sentiment analysis. More recently, Luis von Ahn used games to induce volunteers to help label images, digitize text, and identify songs, producing massive datasets used in a wide variety of A.I. projects. Current A.I. projects in which people can still participate include Phrase Detectives (University of Essex) and Wordrobe (University of Groningen), both of which are focused on creating resources to help computers understand language.

All three projects profiled above have already published papers based on preliminary results (such as here, here, and here). The future looks bright for mind-related Citizen Science projects.

Images: Joshua Hartshorne

Joshua K. Hartshorne is a Ruth L. Kirschstein NRSA post-doctoral fellow in the Computational Cognitive Science group at MIT and an occasional contributor to Scientific American Mind. He conducts research both in his brick-and-mortar laboratory and online at GamesWithWords.org. You can follow him on Twitter at @jkhartshorne.
