UI&us is about User Interface Design, User Experience design and the cognitive psychology behind design in general. It's written by Keith Lang, co-founder of Skitch; now a part of Evernote.  His views and opinions are his own and do not represent in any way the views or opinions of any company. 


Project Natal on the Jimmy Fallon show

My previous post on Microsoft's 3D gaming system 'Natal' questioned the system's latency. In this fresh real-world demo on the Jimmy Fallon TV show, Kudo specifically aimed to demonstrate the 'low latency' of the system. I'm not sure whether the red jumpsuits were related to demoing the system, but the system seemed very robust and responsive enough for fun gaming. I think Microsoft has a winner here.

UPDATE: Via Engadget: apparently the red suits "were just for fun".

UPDATE 2: Video got pulled down, sadly. I'm trying to find a replacement to embed.


Reader Comments (5)

There seems to be a huge lag between their movements and the on-screen reaction. In the first game, the ball had usually already passed the player when the on-screen character made any kind of move. The same seems to be true for Burnout; Kudo Tsunoda claims that the system is responsive, but in every demo I've seen so far, there has been huge lag.
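One rough way to put a number on the lag described above is to step through demo footage frame by frame and count the frames between the player's movement and the avatar's response. A minimal sketch of that arithmetic (the function name and the 30 fps assumption are mine, for illustration only):

```python
def estimate_latency_ms(frame_delay, fps=30.0):
    """Convert a frame-count delay observed in video into milliseconds."""
    return frame_delay / fps * 1000.0

# e.g. a 6-frame gap in 30 fps footage is 200 ms of apparent lag
print(estimate_latency_ms(6))
```

Note this only bounds the *apparent* lag: the video capture itself, the TV, and editing can all add or hide frames.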

Hopefully, they'll fix it for the release. It doesn't look like they'll release it before the end of 2010, so I hope they have plenty of time to get it right.

The bigger picture, I think, is that this system is not very well suited for gaming. Steering a car by holding your hands out is a bad idea; you won't be able to do that for more than a few minutes at a time. Most of the other games they have shown seem somewhat uninspired. There's only so many interesting things you can do by jumping around in front of your TV, as earlier attempts at this (Playstation Eye, for example) have shown. Pretty much all games with any kind of depth require buttons, precise dual analog input, and/or some kind of pointer.

I do, however, think that this would be a great input system for devices which require less interaction. For example, it could be tied into your calendar; since it recognizes you, such a system could automatically show you your appointments when you enter the living room, without any active interaction from you.

Similarly, watching TV or DVDs requires only a small amount of interaction. I could see such a system replacing traditional remotes: wave up to go to the next channel, down to go to the previous one, left for less volume, and right for more.
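The remote-replacement idea boils down to a small gesture-to-command table. A hypothetical sketch of that dispatch (the gesture names and command strings are invented; a real system would feed recognized gestures in from the camera pipeline):

```python
# Hypothetical mapping for the wave-based remote described above.
COMMANDS = {
    "wave_up": "channel_up",
    "wave_down": "channel_down",
    "wave_left": "volume_down",
    "wave_right": "volume_up",
}

def handle_gesture(gesture):
    """Return the TV command for a recognized gesture, or None if unknown."""
    return COMMANDS.get(gesture)

print(handle_gesture("wave_up"))
```

The appeal of such a flat table is that unrecognized gestures simply do nothing, which matters when the recognizer occasionally misfires.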

June 12, 2009 | Unregistered CommenterLKM


I agree: there is some latency in the system, and I'm not sure they'll be able to reduce it by release. But the robustness of the system is what impresses me most at this stage.

As for TV controlled by gestures, this TV (http://www.engadget.com/2009/06/02/canesta-gesture-controlled-tv-frees-us-from-the-tyranny-of-the-r/) uses the same 3D camera technology and works similarly to how you describe. The future is here!

June 12, 2009 | Unregistered CommenterKeith Lang

The thing that struck me is that while the system can see where the player is in 3D space, there are (at least in this demo) relatively few cues telling the player where the objects they are interacting with are. As you can see in the clip, the players had a hard time hitting the ball, and even the XBOX guy was just flailing around while actually playing.

Perhaps it gets easier with practice, but I suspect that will be an ongoing issue.

I see the same type of problem with some Wii interactions.

June 13, 2009 | Unregistered CommenterTodd

@Todd — Agreed — in this demo people had difficulty understanding where their avatar was in the Z axis. I don't see an easy solution without a real 3D screen.

I recently had the good fortune to play with a Phantom 3D haptic device, in conjunction with a hi-grade 3D screen overlaid perfectly on the space where my hand was holding the haptics device. Even in this case, with a research-grade (expensive!) setup, the Z-axis was difficult to get a sense for.

Perhaps in the future, 3D screens will have the ability to change the focal depth of each pixel, giving much truer 3D visuals.

June 13, 2009 | Unregistered CommenterKeith Lang
June 15, 2009 | Unregistered CommenterVahan
Sorry — had to remove comments due to spam.