
A Coursera Course on Visual Perception—Starts January 7th.

There's a new 8-week course available on visual perception taught by Dale Purves of Duke University. It's available for free and starts on January 7th, 2015.

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Image: the Rubik’s cube lightness illusion discussed below. Courtesy of R. Beau Lotto (University College London) and Dale Purves (Duke University)

For those of you who don’t know what Coursera is, it’s one of several apps and websites that provide courses online. It’s a remarkable system that lets thousands of students participate in and view the same lectures. Such courses are generically referred to as MOOCs: Massive Open Online Courses. Coursera and its competitors, such as edX, could change the educational landscape by bringing the highest-quality education and lecturers to the general public, anywhere in the world, cheaply or even for free. Aspiring students will no longer have to compete for an entire childhood to gain entry into the world’s best universities to see these lectures: they can simply log in and view the same lectures offered to the intelligentsia.

There’s a new 8-week course on visual perception taught by Dale Purves of Duke University. It’s available for free and starts on January 7th, 2015. Purves’s approach to visual perception is exciting because it differs from the usual one. Sensation-and-perception courses usually try to explain perception in terms of reconstructing the physical world. That is, the world exists, it has properties that can be measured by a visual system, and those measurements are used to build a representation of the world in the brain. In this model, visual illusions (cases where the perception doesn’t match the reality) are errors of measurement: places where the visual system gets it wrong. Sounds great, right? The problem is that our perception is not an accurate representation of the world (as Purves’s course will show), even when the quality of the sensory data would allow it to be. That is, our visual systems sometimes perceive illusions even when their measurements are accurate.


Purves considers the visual system to be solving an inverse problem instead: it is trying to build a model of the world that will help the observer survive and reproduce, rather than to reconstruct the physical world accurately. What this means is that we can continue to operate in the world (or, rather, in our model of the world) even in the absence of direct measurement. Take the perception of an object’s lightness. On the standard view of vision as a reconstructive process, the photoreceptors of the eye count the photons arriving from the object and report them so that we can reconstruct the object we’re viewing. That’s great, except that (as the work of Purves and his colleagues has perhaps shown best) lightness perception conflates the reflectance of the object (what color its surface is painted and how well it reflects photons from the light source) with the illumination of the object (how much light actually arrives from the light source) and with the transmittance of the object (how much light the object generates directly or passes along from behind, if it is transparent). All the visual system ever measures is the combined result of these properties, yet the object’s appearance depends critically on knowing the contribution of each of these separate sources of photons.

So what’s a brain to do? Purves’s view is that the visual system must guess at what the world looks like by fitting its data to an internal, already-formed model of the world. Where does the model come from? From past empirical experience with the world. By experiencing and learning about objects throughout your life, you adjust your model of the world to account for how often a given pattern of photoreceptor responses corresponds to a given object. Genetically transmitted knowledge contributes to this empirical model as well: much of it may be hardwired into our brains at birth, and your experience tweaks it as you go.
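
To make the ambiguity concrete, here is a rough sketch of my own (not material from the course), under the simplifying assumption that the light reaching the eye from a patch is roughly its reflectance times its illumination, plus any light it emits or transmits. The numbers, the scene labels, and the little “prior” over scenes are invented purely for illustration; the point is only that very different scenes can produce identical measurements, so something beyond the measurement itself must break the tie.

# A back-of-the-envelope sketch of the inverse problem described above.
# Simplifying assumption: light at the eye ~ reflectance * illumination + transmitted light.
# All numbers and the "prior" are invented for illustration only.

def luminance(reflectance, illumination, transmitted=0.0):
    """Light reaching the eye from a surface patch, in arbitrary units."""
    return reflectance * illumination + transmitted

# Three very different physical scenes...
scenes = [
    {"label": "dark paint under bright light", "reflectance": 0.2, "illumination": 100.0},
    {"label": "light paint under dim light",   "reflectance": 0.8, "illumination": 25.0},
    {"label": "black glass, backlit",          "reflectance": 0.0, "illumination": 50.0,
     "transmitted": 20.0},
]

# ...that all yield the same photoreceptor response, so the measurement alone
# cannot distinguish them.
for scene in scenes:
    value = luminance(scene["reflectance"], scene["illumination"], scene.get("transmitted", 0.0))
    print(f"{scene['label']:32s} -> {value}")

# An "empirical" visual system breaks the tie with learned scene frequencies
# (hypothetical numbers standing in for a lifetime of visual experience).
prior = {
    "dark paint under bright light": 0.60,
    "light paint under dim light":   0.35,
    "black glass, backlit":          0.05,
}
best_guess = max(scenes, key=lambda scene: prior[scene["label"]])
print("Most likely interpretation:", best_guess["label"])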

Many of Purves’s insights in visual science have rightly challenged the status quo, and he is one of the finest phenomenologists in the world (a phenomenologist, here, is a scientist who develops visual illusions to gain insight into visual processing in the brain). The image presented here is a terrific example. Notice that the orange and brown chips on the Rubik’s cube appear to be different colors reflecting different amounts of light (the orange chip is in the shade). In fact, the orange and brown chips are exactly the same color; your brain interprets them differently because they appear to be under different levels of illumination. Don’t believe me? Print the image, cut the orange and brown chips out with scissors, and compare them directly: they are identical, and they only look different here because of their context.
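
If you would rather not reach for the scissors, the same check can be done digitally. The snippet below is my own rough sketch, not part of the original demonstration: it assumes you have saved the image as cube_illusion.png, and the two pixel coordinates are hypothetical placeholders that you would adjust to land inside the chip that looks orange and the chip that looks brown. It simply reads and prints the RGB values at both spots so you can compare them.

# A quick digital version of the scissors test, using the Pillow imaging library.
# "cube_illusion.png" and the coordinates below are placeholders; adjust them
# to point at your saved copy of the image and at the two chips.

from PIL import Image

img = Image.open("cube_illusion.png").convert("RGB")

chip_in_shadow_xy = (120, 340)   # hypothetical pixel inside the chip that looks orange
chip_in_light_xy = (260, 90)     # hypothetical pixel inside the chip that looks brown

print("Chip in shadow:", img.getpixel(chip_in_shadow_xy))
print("Chip in light: ", img.getpixel(chip_in_light_xy))
# If both coordinates land on the chips, the two RGB triples are identical,
# even though the patches look like different colors in context.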

So join me as a student in this course in January! It’s certain to be illuminating.


Stephen L. Macknik is a professor of ophthalmology, neurology, and physiology and pharmacology at SUNY Downstate Medical Center in Brooklyn, N.Y. Along with Susana Martinez-Conde and Sandra Blakeslee, he is author of the Prisma Prize-winning Sleights of Mind. Their forthcoming book, Champions of Illusion, will be published by Scientific American/Farrar, Straus and Giroux.
