Mapping the Future with Tools From the Past
I use Adobe Creative Suite almost every day, but I am increasingly frustrated with how I’m required to interact with it. Most of these tools were designed 20 years ago… and it shows. They come from an era when the keyboard and mouse were the primary means of interacting with a computer. Most professional creative tools do very little to take advantage of the myriad other input devices on the market today: digital pens, track pads, voice, cameras, phones, tablets, and depth-sensing cameras. Alias and Autodesk Sketchbook Pro are great alternatives to Adobe Photoshop because they were designed specifically for a pen. But on the Mac platform, my pen, track pad, microphone, and camera are horribly underutilized.
Additionally, I can efficiently switch modes using the keyboard, but these commands have no relationship to the on-screen menus and are difficult to learn. And once I’ve learned a keyboard command, the on-screen menu becomes unnecessary. So why does something so unnecessary eat up all of my screen real estate?
A Leap Forward
After hearing about the Leap Motion input device, I imagined ways it could address my complaints. Fellow Artefact designer Markus Wierzoch and I worked together to discover how we might apply a Leap-like gesture system to graphics applications. We decided to see what Leap could do with parametric CAD application Pro Engineer. We picked a parametric modeling application for two reasons.
- We wanted to test the limits of the Leap’s accuracy.
- The capabilities of Pro Engineer are far beyond what you can do with physical models in the real world. When I was at Microsoft, we called this “authentically digital” – one of eight principles that went into our design.
Because we didn’t have the Leap device, I created a Wizard of Oz prototype envisioning Pro Engineer with Leap and voice input. This is a prototyping technique where one person pretends to use a realistic interface while a second person behind the scenes keeps the user interface in sync with the user input. I created a Flash movie on my Mac and tapped a trackpad on the ground with my foot to advance the movie. The following videos show two different designs we made using this technique.
The principles I applied to the Leap interface are consistent with the principles I’ve learned over the years.
- All inputs are good at something but terrible at something else
- Our left and right hands are different, so use them differently
- Smooth path from novice to expert usage
- Design for flow
- Map frequent actions to highly ergonomic interactions. Map infrequent actions to uncomfortable interactions
- Instant feedback and ‘feed-forward’ assist learning
- Authentically digital
- Separation of activation and manipulation
1. All inputs are good at something but terrible at something else
Bill Buxton coined this phrase. When setting up a new interactive system, it’s important to think about what functions map to which user inputs. The wrong choice of mapping will make your life worse over extended usage. The other day I was watching the scene in Blade Runner where Harrison Ford navigates a photograph using his voice. I think he says, “Zoom in to quadrant 542, pan, stop,” twenty times. A touch screen could have gotten the job done with a few pinch gestures. On the other hand, on my iPhone it is extremely tedious to send a text message to one of my thousands of contacts with touch, but extremely easy with voice. Thank you, Siri.
For our Pro Engineer/Leap prototype, we mapped voice input to mode switching and parameter editing. This does away with the endless menus and keyboard entries and frees up our hands for 3D manipulation. While hidden menus are easy on the eyes, they put a heavier burden on our memory. To train our muscle and auditory memory, a series of learn-as-you-go, self-revealing gestures teaches the user.
We mapped Leap input to 3D manipulation. Using the Leap, you can get a full six degrees of freedom for each hand. This means it can track an object or hand in 3D space no matter how you move or rotate it. So rather than trying to do everything with in-air gestures, we just do 3D manipulation and selection.
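To make “six degrees of freedom” concrete: each hand gives three translation axes and three rotation axes. Here is a minimal sketch of mapping frame-to-frame hand deltas onto an object; the `Pose` type and Euler-angle rotation are simplifying assumptions of mine, not the actual Leap API.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A 6-DOF pose: three translation axes plus three rotation axes (radians)."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def apply_hand_delta(obj: Pose, prev: Pose, curr: Pose) -> Pose:
    """Move the object by the frame-to-frame change in the tracked hand."""
    return Pose(
        obj.x + (curr.x - prev.x),
        obj.y + (curr.y - prev.y),
        obj.z + (curr.z - prev.z),
        obj.yaw + (curr.yaw - prev.yaw),
        obj.pitch + (curr.pitch - prev.pitch),
        obj.roll + (curr.roll - prev.roll),
    )
```

A production system would compose rotations as quaternions to avoid gimbal lock; adding Euler angles keeps the sketch short.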
2. Our left and right hands are different, so use them differently
Think about how you use your non-dominant hand. You might assume you use your dominant hand far more than your non-dominant hand. I write, throw, and draw with my right. But when I actually watch myself, my left hand is just as busy as my dominant right. When I draw, my left hand constantly spins, moves, and holds the paper down. While sculpting, my left rotates the material while my right sculpts. Even whittling wood uses the left hand just as much as the right. Applying these observations to interactive systems, it’s clear that our non-dominant hands shouldn’t be ignored when mapping actions – especially those requiring less dexterity.
Applying this principle to our Pro Engineer/Leap prototype, one hand sets the context and the other manipulates and selects.
3. Smooth path from novice to expert usage
When I worked on the Microsoft Surface team, Daniel Wigdor taught me about something he called the “trough of incompetence.” He observed that it took a long time for someone proficient with drop-down menus to become proficient with the corresponding keyboard shortcuts. Most people don’t get to expert usage because the keyboard commands are a different physical action than the drop-down menus and there isn’t a smooth path to go from novice to expert usage. As he and Bill Buxton have taught, the physical actions of the novice and the expert should be identical. The expert user should simply be faster.
The marking menus in my favorite creative tool (Buxton’s Alias Sketchbook) do just that. Novice users are shown the menu after a short delay when they tap or hold the pen. They then drag through the menu to activate the action they want. The next time, menu use is faster because it’s easier to remember where the action is located. Eventually they don’t need to consciously think about the menu at all as muscle memory takes over.
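Mechanically, a marking menu reduces to snapping the drag vector to one of eight 45-degree sectors. A sketch of that core step (the direction labels and the y-up axis convention are my assumptions):

```python
import math

# Sector labels counterclockwise from east, one per 45-degree slice.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def snap_to_octant(dx: float, dy: float) -> str:
    """Snap a pen drag vector to the nearest of eight menu directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # 0 = east, CCW positive
    index = int((angle + 22.5) // 45) % 8             # center each sector on its axis
    return DIRECTIONS[index]
```

Because the expert’s stroke is the same motion the novice traced through the visible menu, the snap works identically whether or not the menu was ever drawn.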
4. Design for flow
A goal in designing any creative tool is to get the user into “flow.” This is a state where time disappears and productivity skyrockets. There are a few things I have found that get people into flow or keep them from it.
- Waiting: every time you have to wait for a press-and-hold gesture, save dialog, or application load, you lose your flow. It is imperative that we don’t introduce any artificial lag into our gesture systems. We also want to make sure that we give instant feedback; otherwise, flow is interrupted. The only action that should be mapped to time is the “learn more” gesture: if you press and hold, you get feed-forward showing what your gesture would do.
- When the Surface team designed the touch system for Windows 8, we made sure not to impose wait times for any gesture. This is why Windows 8 doesn’t use press-and-hold to reorder the tiles on its start screen; if you drag a tile up or down, you can reorder it instantly. We applied this same principle to our Pro Engineer/Leap experiment: when you hold the voice input button for a while, a help menu appears to teach you what kinds of things you can say.
- Distractions, notifications, and interruptions are the enemies of flow. It takes a while to get into flow, and any interruptions you build into the system will disrupt your users. Don’t ask them if they want to rate your app. Don’t stick ads in your app to offset the user’s cost. If you can help it, don’t show notifications at all – you might interrupt the use of another application.
- Take advantage of muscle memory to speed things along. The marking menus in Sketchbook Pro are a great example. Buxton intentionally limited the menus to eight commands each, because people’s muscle memory retains the eight compass directions better than any others. For Leap, I hope people will use muscle memory to remember what the different fingers are mapped to, the same way we remember how to touch type and use complex keyboard modifiers.
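The “learn more” press-and-hold is the one place time is allowed into the design, and even there it should only delay the help overlay, never the action. A sketch of that idea (the 0.8-second threshold is an assumed value, not a measured one):

```python
LEARN_MORE_DELAY = 0.8  # seconds; an assumed dwell threshold

class DwellDetector:
    """Shows a feed-forward hint after a hold without delaying the action itself."""

    def __init__(self, delay: float = LEARN_MORE_DELAY):
        self.delay = delay
        self.start = None

    def press(self, now: float) -> None:
        self.start = now  # the action's own feedback begins immediately at press

    def should_show_help(self, now: float) -> bool:
        """True once the press has been held long enough to reveal the help menu."""
        return self.start is not None and (now - self.start) >= self.delay

    def release(self) -> None:
        self.start = None
```

The action fires on press and the overlay only accretes on top of it, so an expert who never holds long enough never sees the help at all.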
5. Map frequent actions to highly ergonomic interactions. Map infrequent actions to uncomfortable interactions
Just as all inputs are good at something and terrible at something else, some gestures are more ergonomic than others. It is the responsibility of the interaction designer to reduce unnecessary pain for the user. For example, it would be irresponsible for a designer to force a user to hold his or her arm extended with a raised shoulder for eight hours a day (I’m looking at you, Kinect). This leads to a real condition that early touchscreen HCI pioneers called “gorilla arm” – the inability to keep your fatigued arm raised. A poorly designed Leap interface could contribute to the gorilla arm problem, so it is important to map frequent actions to ergonomic user movements and reserve uncomfortable movements for very infrequent actions. If Leap’s precision is as good as the company suggests, it won’t be necessary to raise your shoulders to use the system.
The Windows 8 web browser I worked on offers a good example of this principle. The three most frequent actions users perform are Activate Link, Pan, and Back. I wanted people to be able to do all three of these actions anywhere on the screen. I didn’t want to ask people to target a small back button, so we made swiping to the right anywhere on the web page serve that function. This allowed for minimal arm movement. Hopefully it didn’t create confusion with the pan gesture.
6. Instant feedback and feed-forward assist learning
Lag and latency mark the death of a good user experience and can lead to unintended consequences. There is a classic psychological experiment that shows how delay in a system leads to superstitious behavior. The experiment involved putting pigeons in a room with a button. When the pigeons pecked the button, food came out of a hole in the wall. The twist: there was a two-second delay between the button press and the food pellet release. During this wait the pigeons would perform some random behavior, and then the food pellet would appear. The next time they pressed the button, they would repeat the same random behavior, as if it were linked to their reward. The scientists found that most of the pigeons developed a kind of superstitious behavior: some would spin clockwise, some counterclockwise, and others invented rituals of their own. Pigeons may not be people, but we observed similar behavior in people interacting with Microsoft Surface.
We were working on a new gesture language for Surface but had not worked out the kinks in the feedback of the system. I saw a father and his daughter display superstitious behavior as they tried to zoom into a document. The dad would hold his two hands in an uncomfortable position pinning the document down. His daughter would use two of her fingers to zoom into the pinned document. Not quite what we were going for, but very useful to learn from design gone wrong.
If possible, get down to 10 milliseconds of lag and a 120 Hz refresh rate. At this level of latency you can trick the brain into thinking that the virtual objects you manipulate are real, and with the right feedback you can minimize superstitious behavior.
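To put those numbers side by side: at 120 Hz the display itself only refreshes every ~8.3 ms, so a 10 ms end-to-end lag is barely more than a single frame of latency.

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time between display refreshes, in milliseconds."""
    return 1000.0 / refresh_hz

def frames_of_latency(lag_ms: float, refresh_hz: float) -> float:
    """How many display frames fit inside a given end-to-end lag."""
    return lag_ms / frame_budget_ms(refresh_hz)
```

Here `frames_of_latency(10, 120)` comes out to 1.2 – everything you render is at most about one frame behind the hand.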
7. Authentically digital
There is a myth that the natural user interface should be based on physical metaphors so that people know how to use them: wood grains, beveled edges, leather, page flips, and drop shadows. These are standard in many touch products. But as we have seen with most new entries into the interactive space, these physical metaphors aren’t necessary. In fact, they hinder us and tie us down to only what is possible in the real world. We didn’t set out to make a 3D sculpting machine; we made a parametric modeling app that can do things to 3D objects that you couldn’t do in the real world. You can change the past on the fly and make alternate futures. You can duplicate an object a thousand times. You can design by making relationships between objects rather than drawing and erasing. Why bind the computer to the physical constraints of clay? Buy clay and a 3D scanner if you want to sculpt.
8. Separation of activation and manipulation
Try to pick up a pencil with one finger. Once you’ve done that, try to draw a picture without lifting the tip from the paper. Next, take that paper into a room with three people and try to get a specific person to pick up the paper without looking at them or saying their name. These are all examples of systems that don’t have a separation of activation and manipulation. To pick up a pencil you need to be able to separately move your hand and pinch your fingers. To draw you need to be able to lift your pen to separate your strokes. For social interaction you need to address a specific person to have a conversation.
In our design of Pro Engineer for Leap, we mapped hand movement to manipulation and a finger pinch to activation. Like grasping a pencil and then picking it up, this mapping allows for targeting a line segment, vertex, or object in 3D space, and then manipulating it once the fingers are pinched together. The danger in any such system is that activation and manipulation conflict with each other. The mouse is so successful as a pointing device because clicking the mouse button does not move the cursor position. The more activation affects manipulation in a system, the less accurate that system will be. Touch and pen input systems suffer from this phenomenon and therefore are not as accurate as the mouse. The mouse can also apply gain between device movement and cursor movement, something absolute positioning systems cannot. Until we have our Leap sensor we won’t know how accurate it is, or whether we can apply gain to its manipulation.
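A sketch of this separation, assuming the sensor reports a normalized pinch strength (the thresholds, hysteresis gap, and gain curve are all illustrative assumptions):

```python
# Illustrative thresholds on a normalized 0-1 pinch-strength signal; the
# hysteresis gap keeps a borderline pinch from rapidly grabbing and releasing.
PINCH_ON = 0.8
PINCH_OFF = 0.6

def manipulation_gain(hand_speed: float) -> float:
    """Mouse-style gain: slow hand motion maps to fine object motion,
    fast motion to coarse motion. The curve here is an assumption."""
    return 0.5 if hand_speed < 50.0 else 2.0

class PinchGrab:
    """Keeps activation (the pinch) separate from manipulation (hand movement)."""

    def __init__(self):
        self.grabbing = False
        self.object_x = 0.0  # one axis is enough to show the idea

    def update(self, pinch_strength: float, hand_dx: float, hand_speed: float) -> float:
        if not self.grabbing and pinch_strength >= PINCH_ON:
            self.grabbing = True            # activate: grab, but don't move yet
        elif self.grabbing and pinch_strength <= PINCH_OFF:
            self.grabbing = False           # release: stop manipulating
        if self.grabbing:
            self.object_x += hand_dx * manipulation_gain(hand_speed)
        return self.object_x
```

The hysteresis gap between the grab and release thresholds is one way to keep the act of pinching from disturbing the position being manipulated – the in-air analogue of a mouse button that doesn’t move the cursor.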
Designing for an Ever-Changing Future
The world is progressing at an exponential rate. It will become increasingly important that products built on new input technologies teach us as we use them. Applying these principles can help us take steps toward a more preferable future – one where people can quickly learn new interfaces that enable self-expression and creativity.