Assignment: Impossible

Exploring the area between the unknown and the impossible.

A Modest Proposal: Virtual Keyboards via Leap Eyeglasses




In the series “A Modest Proposal,” my colleagues and I propose inventions and projects that we think are eminently doable and would love to see made real.

Mobile devices are unquestionably more powerful than the PCs of a generation ago. Yet desktop and laptop computers remain useful because of their peripherals, such as monitors, keyboards, and mice or touchpads. Their equivalents on mobile devices are far smaller and crammed together, which greatly limits their utility.

Attaching peripherals to mobile devices typically adds bulk, sacrificing the virtues of having a handheld device in the first place. Ideally, one wants a peripheral that greatly increases the input and output capacities of a mobile device while taking up as little space as possible.

I’ve argued that electronic glasses combined with a virtual floating keyboard would accomplish just that. The glasses would display a screen and a virtual keyboard, use sensors to detect where your fingertips are, and wirelessly exchange data with your mobile device.

I first proposed the idea with the Microsoft Kinect sensor, but its resolution is not good enough to pick up the subtle finger motions a virtual keyboard requires. However, San Francisco-based Leap Motion recently debuted the Leap, a gesture-based computer-interaction system: a USB device roughly the size of a pocketknife that can distinguish individual finger gestures, reportedly tracking movements down to 1/100th of a millimeter, far more precisely than the Kinect.
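
To make the idea concrete, here is a minimal sketch, in Python, of how such a system might map tracked fingertip positions onto a floating keyboard and register presses. The fingertip feed, coordinates, and all names here are hypothetical illustrations, not the actual Leap API.

```python
# A minimal sketch of turning tracked fingertip positions into key
# events. The fingertip data here is hypothetical; a real system would
# pull 3-D positions from the Leap (or a similar sensor) via its SDK.

# Virtual keyboard: three rows of keys on a plane floating in front of
# the user. All coordinates are millimeters in sensor space.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_PITCH = 19.0        # standard keycap spacing, in mm
PLANE_Y = 200.0         # height of the virtual key surface
PRESS_DEPTH = 8.0       # dipping this far below the plane is a press

def key_at(x, z):
    """Return the key label under horizontal position (x, z), or None."""
    row = int(z // KEY_PITCH)
    if not 0 <= row < len(ROWS):
        return None
    col = int((x + len(ROWS[row]) * KEY_PITCH / 2) // KEY_PITCH)
    if not 0 <= col < len(ROWS[row]):
        return None
    return ROWS[row][col]

def detect_presses(fingertips, pressed):
    """Turn one frame of fingertip positions into key events.

    fingertips: dict of finger_id -> (x, y, z) in mm
    pressed:    set of finger_ids currently below the plane (kept
                across frames so each press fires exactly once)
    """
    events = []
    for fid, (x, y, z) in fingertips.items():
        below = y < PLANE_Y - PRESS_DEPTH
        if below and fid not in pressed:     # downward crossing: press
            pressed.add(fid)
            key = key_at(x, z)
            if key is not None:
                events.append(key)
        elif not below and fid in pressed:   # finger lifted again
            pressed.discard(fid)
    return events
```

The glasses would only need to render the same key grid at the same plane the press test uses, so what the eye sees and what the sensor measures stay in agreement.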

I’ve also thought about the possibilities suggested by Touché, from Disney Research, which can sense the way an object, even the human body, is being touched or held. In principle, one could wear wristbands that detect how your hands and fingers are positioned, which could also enable a virtual keyboard. My problem with that idea is that you’d still want electronic glasses to see the virtual keyboard, so why not use the Leap device?

I hope we’ll soon hear serious discussion about pairing the Leap, or another next-generation Kinect-like device, with electronic glasses.

You can email me regarding A Modest Proposal at toohardforscience@gmail.com and follow the series on Twitter at #modestproposal.

Charles Q. Choi About the Author: Charles Q. Choi is a frequent contributor to Scientific American. His work has also appeared in The New York Times, Science, Nature, Wired, and LiveScience, among others. In his spare time, he has traveled to all seven continents. Follow on Twitter @cqchoi.

The views expressed are those of the author and are not necessarily those of Scientific American.





Comments
  1. flared0ne 10:31 pm 06/22/2012

    Pretty much duplicating an email I just sent — should have looked for a comment space FIRST, of course.

    You should check out the Leap forums at
    https://live.leapmotion.com/forums/forum.php
    where there are a SLEW of comments, several (my own and others’) that go into pretty reasonable detail about the what and how of virtual keyboards.

    The display could come either from heads-up goggles or from hijacking the bottom 20% of your usable monitor space. I’m particularly proud of the “carpal tunnel relief” thread discussion.

    Look for the stuff about building a recognition library of the motions YOU use while YOU type: by watching you type for a while and correlating what is SEEN with what the keyboard output actually IS.

    Do some cross-correlation of different keystroke sequences, separately from building up a virtual image of the keyboard, then take away the physical keyboard and keep typing. Notice: because keyboarding gestures have a relatively long “stroke to completion” after the recognizable “gesture collapse” (the moment the system can tell “ah, you’re heading for THIS key”), the subconscious mind is going to start “finishing keystrokes early,” and a person’s words-per-minute count could jump amusingly high.
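
    For what it’s worth, a toy version of that training loop is easy to sketch in Python (every name here is invented for illustration): record the fingertip trajectory that preceded each actual keystroke, then predict keys from just the opening samples of a new stroke.

    ```python
    # Toy sketch of a per-user recognition library: correlate what is
    # SEEN (the fingertip trajectory) with what the keyboard actually
    # reported, then guess the key from only the START of a stroke.
    from collections import defaultdict

    PREFIX = 5  # number of early trajectory samples used for prediction

    def features(trajectory):
        """Flatten the first PREFIX (x, y, z) samples into one vector."""
        return [c for point in trajectory[:PREFIX] for c in point]

    class KeystrokeModel:
        def __init__(self):
            self.sums = {}                  # key -> summed feature vector
            self.counts = defaultdict(int)  # key -> training sample count

        def train(self, trajectory, key):
            """Record one (trajectory, key actually typed) pair."""
            f = features(trajectory)
            if key not in self.sums:
                self.sums[key] = [0.0] * len(f)
            self.sums[key] = [s + v for s, v in zip(self.sums[key], f)]
            self.counts[key] += 1

        def predict(self, partial_trajectory):
            """Nearest-centroid guess at the key a stroke is heading for."""
            f = features(partial_trajectory)
            best_key, best_dist = None, float("inf")
            for key, total in self.sums.items():
                centroid = [s / self.counts[key] for s in total]
                dist = sum((a - b) ** 2 for a, b in zip(f, centroid))
                if dist < best_dist:
                    best_key, best_dist = key, dist
            return best_key
    ```

    Once predict() is reliable early in the stroke, the key can fire before the finger bottoms out, which is exactly the words-per-minute effect described above.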

    NOTE: One HUGE impact of the Leap device that people haven’t jumped on yet: IF your environment fits within the Leap’s “recognition space” of about eight cubic feet (it can only “capture” information within a cone-shaped segment constrained by its angle of view and depth of view), then embed a Leap device in your goggles. Your heads-up goggles have suddenly made a technological leap to COMPLETE accuracy: well, 0.01 mm precision for position and orientation, with latency below humanly perceptible limits, anyway. If I understand correctly, the market for display goggles has constantly struggled with latency and accuracy issues. No more.

    Because, as I mention somewhere among those forum posts: “A Leap device in motion, which is able to maintain a continuous view of a connected surface, is able to track ITS OWN position and orientation, relative to that surface, with the same accuracy and latency as it is able to track everything ELSE around it, i.e., with 10-micrometer accuracy.”
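
    To see why that self-tracking claim holds: the sensor’s own motion between two frames is just the rigid transform that best aligns the surface points it saw in each, which the classic Kabsch algorithm recovers with one small SVD. A minimal numpy sketch (not Leap code):

    ```python
    # Sketch: a sensor that tracks points on a fixed surface can recover
    # its OWN motion. Kabsch: find rotation R and translation t that best
    # map the previous frame's surface points onto the current frame's.
    import numpy as np

    def sensor_motion(prev_pts, curr_pts):
        """prev_pts, curr_pts: (N, 3) arrays of the same surface points
        seen in two frames. Returns (R, t) for the apparent motion of
        the surface; the sensor itself moved by the inverse transform."""
        p0 = prev_pts - prev_pts.mean(axis=0)   # center both point sets
        p1 = curr_pts - curr_pts.mean(axis=0)
        H = p0.T @ p1                           # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = curr_pts.mean(axis=0) - R @ prev_pts.mean(axis=0)
        return R, t
    ```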

    Caveat: the Leap device itself is just a sensor. The data extraction (which is NOT image data, by the way, but that’s a different story) requires a USB connection to a PC running the Leap API; happily, that API takes maybe 5% of the CPU time on a nothing-to-write-home-about generic PC. Wireless connections are “in the works” but have to contend with traffic congestion, since the data stream is essentially constant, which violates the constraints of Bluetooth and similar protocols.

    I personally am just one of many hopeful entrants in their (still open) pool of developer applicants, with thousands scheduled to be selected to receive an SDK and a free Leap device over the next three months or so. Their obvious intention is to “crowd-source” a base of usable applications by the time the device becomes commercially available in the first part of 2013.

    Give it a shot!

  2. flared0ne 10:49 pm 06/22/2012

    So yeah, regarding the ideas in your post: come up with a custom-made neo-smartphone with an embedded Leap device, the ability to run the Leap API, and some type of link to display goggles, and you’ve got virtual access to pretty much any control-input surface you can possibly imagine. Since the Leap already uses IR to scan its environment, simply request the version that can establish IR datacomm links (with advertising kiosks, point-of-sale terminals, and other people walking around) and life will truly be complete.

    For what it’s worth, I seriously like the idea of merging the Leap device with another device that came out a while back. Remember the virtual keyboard that was visibly projected (via a red LED projector) onto a flat surface in front of you and could optically recognize when you “pressed” a key? Get rid of the “recognize key press” detection but keep the red projection capability. NOW you have a Leap device that can provide gestural-input feedback without a monitor of any kind. It still needs the PC running the API, but THAT is going to get put into silicon soon, and THEN the sky will be the limit, literally, until THAT cracks too.
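
    As a sketch of that monitor-free feedback loop (both device interfaces are imagined here, not real APIs): the tracker reports where a fingertip hovers, and the projector brightens the key beneath it, so the projected surface doubles as the display.

    ```python
    # Sketch of gestural feedback with no monitor: a projector draws the
    # key layout on the table and highlights whichever projected key the
    # tracked fingertip is hovering over. The tracker and projector
    # objects are imagined for illustration.

    KEYS = {  # key label -> (x_min, x_max, z_min, z_max) on the table, mm
        "a": (0, 19, 0, 19), "s": (19, 38, 0, 19), "d": (38, 57, 0, 19),
    }
    HOVER_HEIGHT = 30.0  # a fingertip within 30 mm of the surface counts

    def hovered_key(x, y, z):
        """Return the projected key the fingertip hovers over, or None."""
        if y > HOVER_HEIGHT:
            return None
        for label, (x0, x1, z0, z1) in KEYS.items():
            if x0 <= x < x1 and z0 <= z < z1:
                return label
        return None

    def feedback_loop(tracker, projector):
        """tracker.fingertip() -> (x, y, z); projector.highlight(label)."""
        last = None
        while True:
            key = hovered_key(*tracker.fingertip())
            if key != last:               # redraw only when it changes
                projector.highlight(key)  # None clears the highlight
                last = key
    ```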

