Posted on April 13, 2007
Filed Under News | Comments Off on WEB 3.0 – Remember Minority Report’s Screen Manipulation? Tomorrow’s Interfaces Are Multi-Touch
Compare: The Future Made Into Reality Today
Perceptive Pixel, Inc. was founded by Jeff Han in 2006 as a spinoff of the NYU Courant Institute of Mathematical Sciences to develop and market the most advanced multi-touch system in the world.
While touch sensing is commonplace for single points of contact, multi-touch systems enable a user to interact with a system with more than one finger at a time, allowing the use of both hands along with chording gestures. These kinds of interactions hold tremendous potential for advances in efficiency, usability, and intuitiveness. Multi-touch systems are also inherently able to accommodate multiple users simultaneously, which is especially useful for collaborative scenarios such as interactive walls and tabletops.
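The key difference from single-touch hardware is that each contact gets its own stable identifier, so software can follow several fingers (or several users) independently. A minimal sketch of that event model, with all names hypothetical:

```python
# Minimal sketch of a multi-touch state model (hypothetical names, not any
# real API): each contact keeps a stable ID across move events, so several
# fingers -- or several users -- can be tracked at once.
from dataclasses import dataclass, field

@dataclass
class TouchPoint:
    contact_id: int      # stable for the lifetime of one finger's contact
    x: float
    y: float
    pressure: float = 1.0

@dataclass
class MultiTouchState:
    points: dict = field(default_factory=dict)  # contact_id -> TouchPoint

    def down(self, tp: TouchPoint):
        self.points[tp.contact_id] = tp

    def move(self, tp: TouchPoint):
        self.points[tp.contact_id] = tp

    def up(self, contact_id: int):
        self.points.pop(contact_id, None)

state = MultiTouchState()
state.down(TouchPoint(0, 10, 20))         # first finger
state.down(TouchPoint(1, 200, 180, 0.7))  # second finger, lighter pressure
print(len(state.points))  # 2 simultaneous contacts
```

A single-touch device collapses this dictionary to at most one entry; everything demonstrated below depends on it holding many.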
Perceptive Pixel, Inc. developed this groundbreaking multi-touch sensing technique that’s unprecedented in precision and scalability, and Jeff demonstrated some of their latest research on the new sorts of interaction techniques that are now possible: “bi-manual, multi-point, and multi-user interactions on a graphical interaction surface.”
Here’s the transcript of Jeff Han’s talk at ETech, March 7, 2006:
“I’ve been a consulting research scientist at NYU’s Department of Computer Science. This stuff is literally just coming out of the lab right now. You’re among the first to see it out of the lab. I think this is going to change the way we interact with computers.
This is a rear-projected drafting table equipped with multi-touch sensors. Today, ATMs, smart whiteboards, etc. can only register one point of contact at a time. A multi-touch sensor lets you register multiple touch points: use all your fingers, both hands. Multi-touch itself isn’t a new concept; people played around with it in the ’80s, but this is very low cost and very high resolution, and that’s a very important evolution.
The technology isn’t the really exciting thing; it’s more the interactions you can do on top of it once you’re given this precise information. For instance, you can have a nice fluid simulation running. I can induce vortices here with one hand and inject fluid with the other. The device is pressure sensitive, and you can use a clicker instead of your hand. You can invent simple gestures.
This application is neat, and we developed it in our lab. It started as a screen saver, but we hacked it so it’s multi-touch enabled. You can use both fingers to play with the lava: take two balls, merge them, inject heat into the system, pull them apart. This obviously can’t be done with single-point interaction, whether touch screen or mouse.
It does the right thing; there’s no interface. You can do exactly what you’d expect if this were a real thing. It’s inherently multi-user. Rael, come up and help me out. I can work in an area over here while he plays with another area at the same time. It immediately enables multiple users to interact with a shared display; the interface simply disappears.
Here’s a lightbox app. We can drag these photos around. With two fingers at once, I can start zooming and rotating, all in one really seamless motion. It’s neat because it’s exactly what you’d expect to happen if you grabbed this virtual photo here. All very seamless and fluid.
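The math behind that one-motion zoom-and-rotate is simple: from the two fingers’ old and new positions you can recover the scale, rotation, and translation that keep the photo pinned under both fingertips. A sketch of that calculation (the function name and conventions are illustrative, not Han’s code):

```python
import math

def pinch_transform(p1, p2, q1, q2):
    """Given two fingers' old positions (p1, p2) and new positions (q1, q2),
    return the (scale, rotation in radians, translation) that keeps a dragged
    photo 'pinned' under both fingertips. Hypothetical helper, for illustration."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # old inter-finger vector
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]   # new inter-finger vector
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    # translation: how far the midpoint between the fingers moved
    tx = (q1[0] + q2[0]) / 2 - (p1[0] + p2[0]) / 2
    ty = (q1[1] + q2[1]) / 2 - (p1[1] + p2[1]) / 2
    return scale, rotation, (tx, ty)

# Fingers spread to double their separation while turning a quarter turn:
s, r, t = pinch_transform((0, 0), (1, 0), (0, 0), (0, 2))
print(round(s, 3), round(math.degrees(r), 1))  # 2.0 90.0
```

Because all three quantities fall out of one pair of finger positions, zoom, rotate, and drag compose into a single continuous gesture rather than three separate modes.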
Someone who’s new to computing culture can use this. It could be important as we introduce computers to a whole new group of people. I cringe at the $100 laptop with its WIMP interface.
It’s a really simple and elegant technique for detecting touch points: light is scattered by the deformation a touch causes on the screen.
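In broad strokes, a camera behind the screen sees bright spots wherever a finger scatters light, and grouping those bright pixels into blobs yields one (x, y) centroid per touch. A sketch of that assumed pipeline (not Perceptive Pixel’s actual code):

```python
# Assumed sensing pipeline: threshold the camera frame, flood-fill bright
# regions into blobs, and report each blob's centroid as a touch point.
from collections import deque

def find_touches(image, threshold=128):
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # flood-fill one connected blob of bright pixels
                queue, blob = deque([(x, y)]), []
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    blob.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and image[ny][nx] >= threshold:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                # centroid of the blob = estimated touch point
                touches.append((sum(p[0] for p in blob) / len(blob),
                                sum(p[1] for p in blob) / len(blob)))
    return touches

frame = [[0] * 8 for _ in range(8)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (5, 6), (6, 6)]:  # two fingers
    frame[y][x] = 255
print(find_touches(frame))  # [(1.5, 1.5), (6.0, 5.5)]
```

Blob area and brightness can also serve as a rough pressure estimate, which is one way a purely optical sensor can report how hard you’re pressing.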
Kinaesthetic memory is the visual memory of where you left things. It provides the ability to quickly zoom out to get a bigger work area if you run out of space, and that changes things. It’s more of an infinite desktop than a standard fixed area.
Now, of course, you can do the same thing with videos as with photos. All 186 channels of TW cable.
Inevitably there’ll be comparisons with Minority Report. Minority Report and other gestural interfaces aren’t touch based: they can’t differentiate between a slight hover and an actual touch, and it can be disconcerting to the user to have an action happen without tactile feedback. I’d argue that touch is more intuitive than gross gestural motions. Gestural input is also very imprecise.
The ability to zoom in and out quickly lets you find new ways to explore information. We’re excited about the potential for this in information visualization applications: you can easily drill down or get the bigger picture. We’re having a lot of fun exploring what we can do with it.
Another application we put together is mapping. This is NASA WorldWind, like Google Earth but open source. We hacked it up to use the two-fingered gestural interface to zoom in. You can change datasets in NASA WorldWind. They also collect pseudo-color data, to make a hypertext map interface. [Demo stalls, restarts] There’s three-dimensional information, so how do you navigate in that direction? Use three points to define an axis of tilt. It could be the right or wrong interface, but it’s an example of the kinds of possibilities you get once you think outside the box.
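One plausible reading of the three-point tilt gesture: two fingers pin the tilt axis, and the third finger’s drag perpendicular to that axis sets the tilt angle. A sketch under that assumption, with all names and the sensitivity parameter purely illustrative:

```python
import math

def tilt_from_three_points(a, b, c_start, c_now, degrees_per_pixel=0.5):
    """Assumed interpretation of the three-finger gesture: fingers a and b
    pin the tilt axis; the third finger's displacement perpendicular to that
    axis maps linearly to a tilt angle. Illustrative only."""
    ax_x, ax_y = b[0] - a[0], b[1] - a[1]   # axis direction
    length = math.hypot(ax_x, ax_y)
    nx, ny = -ax_y / length, ax_x / length  # unit normal to the axis
    # signed displacement of the dragging finger along the normal
    disp = (c_now[0] - c_start[0]) * nx + (c_now[1] - c_start[1]) * ny
    return disp * degrees_per_pixel

# Axis along x; the third finger drags 40 px along the normal:
print(tilt_from_three_points((0, 0), (100, 0), (50, 50), (50, 90)))  # 20.0
```

Whether or not this is exactly the mapping used in the demo, it shows how a third contact point adds a whole degree of freedom that two fingers alone can’t express.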
Here’s a virtual keyboard, rescalable. There’s no reason to conform to physical devices. It brings the promise of a truly dynamic user interface, with the possibility of minimizing RSI. It’s probably not the right thing to do, to launch in and emulate things from the real world, but it creates a lot of possibilities. We’re really excited.”