UI&us is about User Interface Design, User Experience design and the cognitive psychology behind design in general. It's written by Keith Lang, co-founder of Skitch; now a part of Evernote.  His views and opinions are his own and do not represent in any way the views or opinions of any company. 

External Articles


Almost Touching the Tablet

UPDATE: I got it wrong. But I think the trend is right. 


Apple is rumoured to be announcing a new Tablet device. You probably know this. Rumours say it will be shiny and thin (which it probably will be). That it will be always connected to the internet, and show you books and newspapers and movies on-demand (which it probably can). That it will have some magical new jaw-dropping interface (which it probably will).

But what excites me most is a possible feature that no one seems to have thought of. It's not sexy, and it's something we use every day on our desktop machines. In fact you probably can't remember computing without it. And yet, I feel it's the key to the future of computing, and without it, the Tablet will not be able to spawn the New Age of Computing. So what's this amazing technology? I'll tell you: mouseOver. You know, the feature whereby links on a page change when you mouse over them, buttons darken and tooltips appear. The subtle interaction that lets you learn more about an interface without committing to anything as serious as a mouse click.

Of course, the Tablet is all about Multitouch -insert choirs of angels- so there's no mouse to be seen. Just a finger or three. So let's call it 'touchOver'. Imagine icons that darken, lighten and pop-out as you hover your finger over them like a tantalising box of fancy chocolates.

So, why bother to include an interaction feature from the past?

First, let's look at the existing benefits of mouseOver in desktop and web applications:

  1. Users feel more comfortable with unfamiliar interfaces, exploring without the commitment of clicking
  2. The user has feedback which helps them "aim" their cursor
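Both benefits come down to the same thing: the system knows where the pointer is before any click is committed. As a minimal sketch (all names invented, Python used purely for illustration), hover feedback is just continuous hit-testing of the cursor against interface targets:

```python
def hovered_target(cursor, targets):
    """Return the name of the target under the cursor, or None.

    cursor  -- (x, y) position
    targets -- dict of name -> (x, y, width, height) rectangles
    """
    cx, cy = cursor
    for name, (x, y, w, h) in targets.items():
        if x <= cx < x + w and y <= cy < y + h:
            return name
    return None

buttons = {"ok": (10, 10, 80, 30), "cancel": (100, 10, 80, 30)}
print(hovered_target((20, 25), buttons))   # "ok" -- darken that button
print(hovered_target((150, 20), buttons))  # "cancel"
print(hovered_target((0, 0), buttons))     # None -- nothing to highlight
```

Run that on every cursor move and you have the raw material for highlights, tooltips and "aiming" feedback, all without a single click.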

Both of these are valuable. But in multi-touch interface history I see rare mention of support for the touchscreen equivalent of mouseOver. I don't know why—maybe it has been technically difficult to cleanly detect fingertip positions as they hover over a touch-surface. Maybe the interaction design was never solved. Maybe I've been looking in the wrong places. Maybe it wasn't deemed necessary.

But. Fast forward to now — see a recently awarded patent to Apple…

[0095]Another potential technique for the determination between "hovering" and "touching" is to temporally model the "shadow" region (e.g., light impeded region of the display). In one embodiment, when the user is typically touching the display then the end of the shadow will typically remain stationary for a period of time, which may be used as a basis, at least in part, of "touching". In another embodiment, the shadow will typically enlarge as the pointing device approaches the display and shrinks as the pointing device recedes from the display, where the general time between enlarging and receding may be used as a basis, at least in part, of "touching"…

…where it seems that Apple now has the technology, the art and the desire to achieve touchOver. Their patent in essence describes an artificially drawn 'shadow' of each fingertip as it hovers over the interface. Here's a very quick mockup I made of how this may look, as applied to iPhone.

touchOver mockup from Keith Lang on Vimeo
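For the curious, the hover-vs-touch heuristic from the patent excerpt (a shadow that stays stationary reads as a touch; one still growing or shrinking reads as a finger in flight) could be sketched very roughly like this, with the window and tolerance values entirely my own invention:

```python
def classify_shadow(areas, window=5, tolerance=0.05):
    """Classify 'touching' vs 'hovering' from a series of fingertip
    'shadow' areas sampled over time. A shadow whose size has been
    stable for the last few samples suggests a finger at rest on the
    surface; one still enlarging or shrinking suggests a hover."""
    if len(areas) < window:
        return "hovering"
    recent = areas[-window:]
    spread = (max(recent) - min(recent)) / max(recent)
    return "touching" if spread <= tolerance else "hovering"

approaching = [10, 14, 19, 25, 32, 40]  # shadow still enlarging
resting = [40, 40, 41, 40, 40, 41]      # shadow stationary
print(classify_shadow(approaching))  # hovering
print(classify_shadow(resting))      # touching
```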

So why does touchOver matter so much?

First, I think this will make the touchscreen user experience even better. Fewer mis-tapped buttons, because you have a greater sense of where the device 'thinks' your finger is. More accurate detection of taps, because the device knows about your finger position even before you tap.

Secondly, and more importantly, it serves as a stepping stone to a multitouch proxy device. 

What do I mean by 'proxy device'? Take the mouse for example. You can see a physical 'mirror' of the mouse on the screen at all times — the cursor — that lets you interact without looking at the physical device.

For a multitouch tablet to replace, or at least augment the mechanical keyboard and mouse, there should be a way to let you keep your eyes on the screen at all times. I know of at least one device that works in this way, the Tactapad by Tactiva (never released commercially).

You can watch a movie of the Tactapad in action here. The Tactapad uses a video camera looking down on the user's hands to generate an artificial silhouette. A sufficiently advanced multitouch trackpad could generate an even more minimalist/clean version. Note: I'm not saying Apple would mimic the tool workflow as per Tactapad, simply that they'd share the idea of proxy manipulation.

The end result is the same.

A device that brings all the benefits of a dynamic multitouch interface to the desktop computing experience.


"But touchscreens are so finicky!"

Lay your palm down on many touchscreens and it will incorrectly register as a touch event. Other Apple patents describe logic to rule this out. In addition they boast the ability to switch between 'typing' mode (all fingers down), 'pointing' mode (one finger down) and 'drawing' mode (three fingers down, like holding an imaginary pencil). It may be a solvable problem.
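As a toy illustration of that mode-switching idea (my own guess at the logic, not anything lifted from the patents' actual claims):

```python
def input_mode(fingers_down):
    """Map the number of fingers resting on the surface to an input
    mode, per the rumoured scheme: whole hand = typing, three fingers
    = drawing, one finger = pointing. Two fingers (and zero) are left
    as 'idle' here -- a real design would have to decide."""
    if fingers_down >= 4:
        return "typing"
    if fingers_down == 3:
        return "drawing"
    if fingers_down == 1:
        return "pointing"
    return "idle"

print(input_mode(5))  # typing
print(input_mode(3))  # drawing
print(input_mode(1))  # pointing
```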

"I'd get tired holding my fingers up all day"

Yes, you wouldn't want to hold your fingers 1cm above the desk all day long. I'm sure there is some solution. See above.

"But what about haptics/force feedback?"

Yes, haptics/force feedback may help you 'feel' your way around an interface without looking. I've been lucky enough to play with some lab-quality (read: $$$) haptic interfaces and agree that it's completely possible to emulate the feel of pressing a physical button or pushing around a lump of clay. But those devices were not cheap, not light nor low-power. I'm looking forward to sophisticated haptics in our everyday devices as much as you, but in some years' time.

"I'd never give up programming on my trusty IBM mechanical clunkity-clunk keyboard."

Maybe writers and programmers will stick to using mechanical keyboards forever. Maybe we'll always keep a mechanical keyboard handy. But it will get harder to resist the appeal of a device where everything is under your fingertips… imagine, for example, a Swype-like input interface that dynamically changes its dictionary depending on what application, or even what part of a line of code, you're currently typing in. A truly context-aware device, done in a subtle and sensible way.

"Why hasn't someone done it before?"

Hehe. They said that to the Wright brothers too. Actually, I'd love to mock this up using something like Keymote for iPhone, but it's very difficult without touchOver-like functionality.

And yes, Apple predictions are folly. But from my perspective it's simply a question of 'when' and 'by who'. And from my perspective, the answers are 'soon' and 'Apple'. 

Past. Present. Future.

Here's the bit where I'd love your help: Have you seen any examples of touchscreen interfaces working with touchOver-like capability? How did they work? What other problems do you envision?

Is touchOver essential to a rich desktop multitouch experience? I love the fluidity of interfaces like this multi-touch puppetry (via Bill Buxton) and think touchOver will be essential to move rich interaction like this to mainstream computing. Let me know. :)




Green Hosting

I recently read No Impact Man, a story of one guy who decides to trial a year of living without impact. And, as you may know 

Electricity generation using carbon based fuels is responsible for a large fraction of carbon dioxide (CO2) emissions worldwide; and for 41% of U.S. man-made carbon dioxide emissions. Source: Wikipedia

so I was very pleased to note that the host of UI&us, Squarespace, has an extensive Green statement. Overall a very environmentally responsible company, based on this. I'd love to know what you guys think of their claims.


More Natal Updates from CES 2010

I've been following Microsoft's Project Natal for a while now. Here's a video update — note the impressive 3D images coming in from the camera at 1:14.


Change Blindness in Advertising

I'd not seen this fitting example of change blindness before:

I didn't see a single thing. More on change blindness.

From thread comments in Boing Boing


Amazing Avatar

A few days ago I saw Avatar in 3D. Wow. The technical and artistic use of 3D in this film is phenomenal. And the result is the most comfortable 3D movie viewing experience I've ever had.


Screensavers Don't Save Anyone

Three events in my life collided recently. One: For some time I had been thinking about making a holiday season screensaver to give away to you, my beloved readers. Two: I recently finished listening to the audiobook No Impact Man and have been quite moved by it. Three: I recently saw a documentary focussing on Europe's plans for reducing carbon emissions. In this documentary there featured a Greenpeace office worker discussing the urgent need for reduced carbon emissions. Behind him was his desktop computer, displaying a screensaver.

How do these things relate? Glad you asked. I've been thinking a lot about the environmental impact of my industry — software design — and feeling pretty good about its low environmental impact when offered as download-only. I don't feel so good about the polycarbonate discs that are still a major way of distributing software. Discs that will outlast your children's children, but will be practically useless beyond just a few years' worth of OS updates.

I feel even worse about the actual computing machines they run on. Consumer electronics surely must be the most toxic mainstream industry there is. What other industry takes a combination of very rare elements like gold and coltan, combines them with some very poisonous ones like BFRs and mercury, and then places them in tiny formations with hundreds of other substances in a single, practically impossible-to-recycle unit? And to top it off, makes that unit go from highly desirable, to not worth giving away, within a decade.

Once the computer is made, much of its life will be spent idling. Checking back a million times a second to see if the human on the other end has pressed a key. And heck, maybe that human walked out to lunch anyway. The computer doesn't really know*. Worse, sometimes a computer will turn on a "screensaver" and show that instead of putting the display to sleep. Using electricity to do nothing useful.

Screens don't need saving. We do. If a computer is not being used, it should be off, or asleep. And if a screen is not being used, it should be off, or asleep. So dear reader, that's why Santa is not bringing you a UI&us screensaver this Christmas. I hope you don't mind. 


* I've been looking into repurposing an older computer for my mum, as a way to keep her connected. Big fonts, and some simple system to display the latest photos and tweets on the screen with no interaction required. One idea has been to make the computer automatically detect her motion when she's next to it, and wake up. That way she can see the latest news with no interaction at all. It doesn't seem too hard. I really don't understand why computers don't do it already.
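For what it's worth, the core of that wake-on-motion idea is plain frame differencing. Here's a toy sketch with frames as flat lists of grayscale pixel values (a real build would read actual webcam frames and tune the threshold):

```python
def motion_detected(prev_frame, frame, threshold=10):
    """True if the mean absolute pixel difference between two frames
    exceeds the threshold -- i.e. something in view has moved."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return diff / len(frame) > threshold

still = [100, 100, 100, 100]
moved = [100, 160, 40, 100]
print(motion_detected(still, still))  # False -> stay asleep
print(motion_detected(still, moved))  # True  -> wake the display
```

Poll the camera every second or so, wake the screen on True, and you have the "mum walks up, photos appear" behaviour with zero interaction.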


3D Scanning on the Cheap

Very impressive 3D model creation from a physical 3D object held and rotated in front of a webcam. Magic software by Qi Pan of Cambridge University.  Via Wired.


Rethinking the Inspector

Smart Inspector — a mockup of a dynamic Inspector Palette from Keith Lang on Vimeo.

Floating inspector palettes have been around a long time, and for good reason. But I think we may now have the design tools to improve how they work. Have a watch of my short video and let me know what you think*:

The video demonstrates a current problem with floating inspector palettes: window management. At 2:00 in I show an interactive, animated mockup that demonstrates some tweaks to the Inspector window mechanism that may let us focus more on the content. 



  • Although not demo'd, I believe this system could work well for multiple selection, using all the pieces that are there already
  • Why animate (slide) between inspector positions? I believe this, if done right, provides a less obtrusive experience than simply having the window appear out of nowhere. 
  • Yes, this idea is not a totally new one, for example iScrapbook has Smart Inspectors, but they are implemented in a different way
  • You can find Matt Gemmell's Attached Window subclass here




* Apologies to the hearing impaired, I'm looking into captioning solutions. You should still get the gist.



I'm a PC. And I read UI&us

At the risk of getting meta… here's some data about your fellow UI&us readers. Now you know who's peering over your shoulder while you read. And to be honest, I expected a clear majority to be OS X, but this is not the case: only 46% of known platforms.

I hope that means I'm writing generally interesting and useful articles, and please let me know if I'm not. And hey — a big 'hello' to the one reader reading this blog on a Wii. :-) 



What are the Big Problems in Computing?

In 1906 an Italian economist called Vilfredo Pareto noticed that 80% of the land in Italy was owned by 20% of the people. Thirty-five years later, a Quality-Management focussed consultant called Joseph Juran re-discovered the work of Pareto and named the general principle after him. This principle is also known as the 80-20 rule.

The rule states that there is often a non-linear relationship between resources and their control. For example, the richest 20% of the world's population enjoy a disproportionate 80% of the world's income. The principle has been applied across many situations…
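For the mathematically inclined: under a Pareto distribution with shape parameter alpha, the share of the total held by the richest fraction p works out to p**(1 - 1/alpha), and alpha = log(5)/log(4) (about 1.16) is the value that reproduces the 80-20 split exactly:

```python
import math

def top_share(p, alpha):
    """Share of the total held by the richest fraction p of the
    population, for a Pareto distribution with shape alpha."""
    return p ** (1 - 1 / alpha)

alpha = math.log(5) / math.log(4)  # ~1.16, the '80-20' shape
print(round(top_share(0.20, alpha), 2))  # 0.8 -- top 20% hold 80%
print(round(top_share(0.01, alpha), 2))  # 0.53 -- even the top 1% hold over half
```

The same formula shows why the rule nests: apply it again and the top 20% of the top 20% (4% of people) hold 80% of 80% (64%) of the total.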


Click to read more ...