Rob George

User Interface: Reducing Work & Eliminating Excise + Metaphors, Idioms, and Affordances (Week 6)

This week is on chapters 12 and 13 of About Face: The Essentials of Interaction Design by Alan Cooper, Robert Reimann, David Cronin, and Christopher Noessel.



Designers should decrease the amount of work the user needs to do to achieve their goals. There are four types of work that the user will perform:

• Cognitive work—Comprehending product behaviors, as well as text and organizational structures

• Memory work—Recalling product behaviors, commands, passwords, names and locations of data objects and controls, and other relationships between objects

• Visual work—Figuring out where the eye should start on the screen, finding one object among many, decoding layouts, and differentiating among visually coded interface elements (such as list items with different colors)

• Physical work—Keystrokes, mouse movements, gestures (click, drag, double-click), switching between input modes, and number of clicks required to navigate


EXCISE TASKS: These tasks don't contribute directly to reaching the goal. They are extra work that satisfies the needs of our tools and of outside agents rather than our goals.

PROBLEM: The work we do on these tasks doesn't get us any closer to our goals. Designers need to reduce the amount of excise to improve the user's experience.


Navigation tools help users reach their goals, but they can also get in the way of reaching them quickly. You don't want the user switching between pages or windows to complete a task; it will frustrate and confuse them, and they may lose track of what their goal really is. Panes can help by organizing important functions on the main window, displaying them all at once on one screen so they are only a click away. But panes become a problem when they are not placed on the screen in a way that matches the user's workflow.

As for tools and menu navigation, tools should be easier to navigate than menus because they are more frequently used. Grouping them on the screen in easy-to-access toolbars, palettes, etc. is the best way to go. A great example the book mentions is Adobe Photoshop's toolbar and how it groups related tools into subcategories.

Navigating information on the screen can be done by scrolling, linking, and zooming. Scrolling is a necessity, but the user shouldn't have to rely on it often. Linking is the critical navigation on the web, and panning/zooming is used for exploring 2D and 3D information.


Skeuomorphic excise is the use of old-style mechanical representations in our new digital environments. In other words, we use the symbols and visuals of old technology to represent the functions of features in our digital world. They help "make it easy to understand the relationships between interface elements and behaviors."

A great example the book provides of how this can become a problem, especially for screen space, is the iPhone's Newsstand from iOS versions 4, 5, and 6. Presenting the news on a rendering of an actual newsstand shelf takes up so much screen space that it distracts from, and gets in the way of, the actual news being presented.


"One of the most disruptive forms of excise."

Modal excise is when the software interrupts the user's flow because of a software situation rather than a user need. I really enjoyed how About Face gives the example of software stating that a file doesn't exist "merely because the software is too stupid to look for the file in the right place, and then it implicitly blames you for losing it!"

Most examples of modal excise are error messages, notifiers, and confirmation messages that the user shouldn't have to interact with at all. They usually report things the software should be handling automatically, without involving the user. Others make the user ask permission to do something: to change a piece of data such as an address, the user has to go to a separate screen to make the change.

What can help is having these messages pop up with the option to never display the message again. That way, new users who need them have them available, and frequent users can discard them.
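The "never display again" pattern above can be sketched as a small preference gate. This is a hypothetical sketch, not the book's code: the `dismissed` set stands in for persistent storage (in a browser you would likely back it with localStorage), and the message IDs are made up.

```typescript
// Sketch: a confirmation gate the user can permanently dismiss.
// `dismissed` stands in for persistent storage (e.g. localStorage).
const dismissed = new Set<string>();

// Returns true if the dialog should still be shown for this message ID.
function shouldShowDialog(messageId: string): boolean {
  return !dismissed.has(messageId);
}

// Called when the user ticks "never display this message again".
function dismissForever(messageId: string): void {
  dismissed.add(messageId);
}

// New user: the confirmation appears the first time.
console.log(shouldShowDialog("confirm-delete")); // true
dismissForever("confirm-delete");
// Frequent user: the excise dialog never appears again.
console.log(shouldShowDialog("confirm-delete")); // false
```

The key design choice is keying the preference by a stable message ID, so dismissing one dialog doesn't silence the others.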


Visual excise is when a user has to "find a single item in a list," dig to figure out where to start reading information on the screen, or work out what is clickable and what isn't.


Ways to Eliminate Excise:

Reduce the number of places to go.

Provide signposts.

Provide overviews.

Properly map controls to functions.

Avoid hierarchies.

Don’t replicate mechanical models.


About Face lists some ways you can reduce the number of places to go:

Keep the number of windows and views to a minimum. One full-screen window with two or three views is best for many users.

Limit the number of adjacent panes in your interface to the minimum number needed for users to achieve their goals.

Limit the number of controls to as few as your users really need to meet their goals.

Minimize scrolling when possible.


Important features should be found in the same spot, with the same design and layout, wherever the user is in the interface. Menus should stay constant, while toolbars can be more flexible. Users should be given the option to change the appearance of toolbars, but it shouldn't be so easy that they make the change accidentally. Other signposts in an interface may include palettes and fixed data on a screen.


Mapping "describes the relationship between a control, the thing it affects, and the intended result." You want to make sure that your controls make sense for the tasks they perform. A control should represent its action and clearly indicate what will happen when it is used.
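One way to keep controls honest about what they do is to define the control, its visible label, and the function it triggers together in one place. This is a hypothetical sketch (the control names and labels are invented, not from the book), showing a label that names the intended result rather than the mechanism:

```typescript
// Sketch: map each control directly to the function it triggers.
// The label names the intended result ("Delete draft"), not a vague
// mechanism word like "OK", so the control says what it does.
type Control = { label: string; action: () => string };

const controls: Record<string, Control> = {
  saveDraft: { label: "Save draft", action: () => "draft saved" },
  deleteDraft: { label: "Delete draft", action: () => "draft deleted" },
};

console.log(controls.deleteDraft.label);    // "Delete draft"
console.log(controls.deleteDraft.action()); // "draft deleted"
```

Because the label and the action live in the same record, they can't drift apart as the interface changes.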



There are three common paradigms: implementation-centric, metaphoric, and idiomatic.


These are designs "based exclusively on the implementation model." In other words, they expose the inner workings of the product: the literal mechanics of how it operates. This type of design is usually understandable to the engineers who built the interface, not so much to the user.


Metaphoric interfaces use symbols and imagery to connect interface actions to the real world. They make the interface more understandable by helping the user recognize its actions through icons, buttons, and other representational visuals.

One example of a metaphor is a phone's contacts app, which lists phone numbers the way a physical phone book or address book would, except the interface can also show a contact's photo alongside the number. Another example is the home icon in apps, which indicates that pressing it will take you to the home page, similar to leaving your house and returning home.


Designers should try to minimize their use of metaphors, as they can become too complex and get in the way of the user reaching their goals. The exception is video games and simulations, where metaphor can be used heavily to keep people engaged with the digital world. About Face gives the example of Sunrizer, an iPad synthesizer that mimics the look and behavior of a physical keyboard synthesizer. There are also ways to use less metaphor in games and interfaces, like the abstract synthesizer TC-11, also mentioned in About Face. This app serves the same purpose as Sunrizer, but the user has to play with it to understand exactly what it does.



This type of interface is based on idioms and how we learn them. Idiomatic interface designs, unlike metaphors, use visuals and actions that have no relation to the physical world. To explain this, About Face gives the example of how easily we learn figures of speech like "kick the bucket," even though the phrase has no literal relation to its meaning (someone dying). It's the same with interfaces: we easily learn the movement of a mouse and its cursor, even though the cursor doesn't visually represent an actual mouse or its movements.



"The perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used." - Donald Norman

Affordance is when you look at something and immediately understand how to use it. Manual affordance is when you instinctively understand how an object can be manipulated with your hands. About Face calls this instant knowledge of how to use something shaped for our hands "intuiting an interface." An example the book provides is physical buttons in the world. As humans, we have the urge to push buttons like doorbells, instantly knowing that something may occur when we push them.


Manual affordances carry no indication of what the object is meant to do, but we understand them anyway. It's different for virtual objects. In an interface, a button or lever could do anything, and we never know exactly what it's for without text or imagery to instruct us, such as a home icon or the word "home" on a button. The three ways we find out how a button works are to "read about it somewhere, ask someone, or play around to see what happens."


A way to solve this issue is to design a virtual object's affordance to mimic a manual affordance. About Face uses the term pliant to refer to "objects or screen areas that react to input and that the user can manipulate." In other words, hinting at pliancy signals that an object will do something if you press it.

Static Hinting

One way to hint at pliancy is static hinting, which is commonly used in mobile interfaces. Buttons or objects rendered with a 3D-like visual indicate that they can be manipulated by the user's finger.

Dynamic Hinting

Most commonly used in desktop interfaces, dynamic hinting is when an object's appearance changes as the cursor rolls over it, hinting that the object can be clicked before it actually is.

Pliant Response Hinting

Pliant response is feedback while an object is being pressed. When the cursor is over an object and the mouse button is held down, the object changes its appearance, indicating that something will happen when you release the button. Usually this means a button becoming indented while pressed and popping back to its original state when released, or the object changing color and then returning to its original color.

Cursor Hinting

Cursor hinting is when the cursor changes appearance as it hovers over an object. A commonly known example is the cursor's arrow changing to a pointing finger as it hovers over a button or link.
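The hinting behaviors above can be sketched as a tiny state machine. This is my own illustrative sketch, not code from About Face: real interfaces wire these transitions to DOM pointer events, but modeling them as pure functions makes the dynamic-hint, pliant-response, and cursor-hint transitions easy to follow.

```typescript
// Sketch: visual states of a pliant on-screen button.
type HintState = "idle" | "hover" | "pressed";
type PointerEventKind = "enter" | "leave" | "down" | "up";

// Dynamic hinting: entering changes the look; pliant response: pressing
// changes it again, and releasing pops it back to its un-pressed look.
function nextState(state: HintState, event: PointerEventKind): HintState {
  switch (event) {
    case "enter": return state === "idle" ? "hover" : state;
    case "leave": return "idle";
    case "down":  return state === "hover" ? "pressed" : state;
    case "up":    return state === "pressed" ? "hover" : state;
  }
}

// Cursor hinting: the pointer's look tracks the object it is over.
function cursorFor(state: HintState): string {
  return state === "idle" ? "default" : "pointer";
}

let s: HintState = "idle";
s = nextState(s, "enter"); // "hover"   - dynamic hint appears
s = nextState(s, "down");  // "pressed" - pliant response (button indents)
s = nextState(s, "up");    // "hover"   - pops back when released
console.log(cursorFor(s)); // "pointer"
```

Keeping the state transitions separate from the rendering means the same hinting logic can drive a CSS class, a sprite, or the cursor style.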

Thank you for reading!

View my User Interface Page for more!
