Christmas came early for gadget enthusiasts everywhere when news recently broke that the highly anticipated Google Glass would soon be available to the public. These glasses look like they were stolen from the set of Star Trek (think of LeVar Burton’s character in Star Trek: The Next Generation – they resemble his clunky visor, but without all the… well, without the glass). They display a miniature computer screen so that wearers can call up a variety of displays, such as GPS maps, the weekly weather forecast, or last night’s sports scores.
Recently, quite a bit of ink has been spilled about the potential privacy concerns associated with Glass – the glasses also allow you to instantly record any conversation – not to mention the obvious fashion concerns. (In response, Google is reportedly partnering with hipster frame maker Warby Parker.) Notably absent from this list of concerns is how Google Glass will encourage its users to multitask in ways that they probably shouldn’t.
As a multimedia and technology scholar, I’m fascinated by how our technological developments reflect our culture’s desire to perform more and more tasks at once. Communication researchers have recently found that we multitask with media not because it actually helps us get things done, but perhaps because we believe it does. It feels good to multitask. So it’s not surprising that we crave technologies that promise to help us multitask, right?
Google Glass is just the most recent addition to a long list of technologies developed to help people multitask. Take Bluetooth, for example: it lets us drive hands-free, with no need to fiddle with a phone. Or, for a more outrageous attempt to accommodate people’s desire to multitask, check out ‘Type n Walk’, an iPhone app that purportedly lets people text while still seeing the sidewalk and obstacles in front of them.
Google Glass takes this concept of a multitasking aid to an entirely new level. The frames insert a digital screen into our field of vision, but because that screen is confined to a small corner of it, our eyes remain free to wander while we perform other tasks. Google designer Isabelle Olsson told Computerworld, "We created Glass so you can interact with the virtual world without distracting you from the real world. We don't want technology to get in the way."
This sentiment of not letting technology interfere with “the real world” is, admittedly, quite enticing. Unfortunately, cognitive science suggests that such a scenario is unlikely, if not impossible.
Multiple studies have shown that the notion that our brains can juggle two things simultaneously is a myth. In actuality, the brain is designed to process only one piece of information at a time. Cognitive capacity models of attention, memory, and processing hold that the brain has a limited pool of resources to devote to the new information it must process. The more difficult a task, the more resources the brain needs to put on the job; and the more resources we devote to one task, the fewer we have left for another. Doing two things at once stretches our brain’s capacity thin, so that we cannot perform either task without sacrificing some speed or quality. In other words, while we can certainly try to do more than one thing at a time – reading text messages on our Google Glass screen while cooking, for example – something has got to give. Either we’ll be reading our texts at an incredibly slow pace, or that soufflé we’re supposed to be watching is in big trouble.
The fact that Google Glass is a mobile, visual distraction is particularly worrisome to me. Some tasks, such as walking and driving, demand visual attention, and any technology that encourages people to divert their visual focus should be a safety concern.
I also think that it is particularly important to consider how technologies like Google Glass affect our relationships with others. Sherry Turkle, a professor of the Social Studies of Science and Technology at MIT and author of the book Alone Together: Why We Expect More from Technology and Less from Each Other, has done extensive research on how our desire to be constantly connected through our technologies could be detrimental to our ability to connect to people in the flesh.
On the one hand, social media and communication technologies (Facebook, Twitter, and cell phones, to name a few) have brought people closer together and made it easier to maintain important connections with each other. On the other hand, always being plugged into these technologies creates barriers to developing relationships with the people to whom we feel closest. At one time or another, many of us have found ourselves sharing a meal with friends or family only to discover that each person’s focus is on a cell phone or tablet. Although we are technically together, sharing the same physical space, mentally we are somewhere else. Evidently, this phenomenon is so common that some restaurants have begun offering discounts to patrons who leave their cell phones at the door. I consider that a positive step in a campaign to make people more conscious of how they use technologies and what the consequences are for their relationships.
On that note, I should add that, ultimately, I think maintaining a critical awareness – a scientifically supported media and technology literacy, if you will – is the key to making sure that technologies improve our quality of life instead of detracting from it. Technologies like Google Glass have the potential to enhance our lives in numerous ways (in the commercial promoting the specs, for example, a skydiver asks the glasses to record his jump, quite literally capturing what he sees for posterity). But to harness their full potential, we need to be aware of our own limitations and remain vigilant about how our use of these devices affects us and those around us.
Image: Antonio Zugaldia on Flickr.