The debate regarding the effectiveness and wisdom of contextual user interface design has been getting more heated over the past decade. As program complexity expands and users develop greater technical savvy, contextual interfaces seem inevitable. But are they a good thing, or just a way to obfuscate already complicated systems?
Contextual interfaces are not easy to apply well. Imagine the chaos that would ensue if your car changed its control configuration as you accelerated to freeway speeds. There are appropriate applications and many more inappropriate ones. How can we tell the difference?
Microsoft Office has been employing contextual interfaces for years and doing an abominable job of it. More users are frustrated by the changing menus than benefit from them. This proves the point: Contextual interfaces are a viable tool for hiding complexity, but doing so inappropriately yields more problems than it solves. The root cause runs deep: Contextual interfaces can work, but only when designed into the system from the ground up. Office's bolt-on context menus hide complexity for the sake of hiding complexity. This actually makes the system more complicated, not less.
So, does this mean contextual interfaces are a bad thing? If all they do is hide complexity, doesn’t that always lead to a negative result?
Read Write Web published a short article attesting to Apple’s contribution toward user interface design, citing the use of contextual interfaces as a key element to Apple’s success:
Steve Jobs and his team know this all too well. Apple’s UIs evolve constantly, taking on new forms and seeking simpler ways of delivering a superior user experience. What is remarkable is that you always know how to use Apple’s products. I watched this over and over again. From my 4 year old daughter to my 83 year old grandfather, everyone I know could use an iPhone right away. iTunes has so few buttons that it is impossible not to know how to use it. And so does iPhoto and every other program developed by Apple.
In addition to simplicity, Apple has for years been using a contextual approach to user interfaces. Apple widgets react to user gestures by changing shapes and presenting more options only when it makes sense. And the latest web applications have got the contextual bug from Apple.
Let's look at a concrete example: Aperture, Apple's professional photo management software. When I'm working with my entire library, Aperture gives me tools appropriate to managing groups of files. I can create new projects, drag-and-drop sets of photos from one project to another, and the like. When I switch to photo retouching mode, most of those options disappear or are conveniently deprioritized, while photo editing tools become the focus. Likewise, contextual switching in the menus is limited and purposeful. When a menu item is disabled, it's actually informative. For instance, Pages disables image editing menu items until an image is actively selected. But all of this is just the practical evidence of something much deeper. It's not obvious, but it's there: Good interface design.
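To make the pattern concrete, here is a minimal toy sketch of that kind of contextual menu validation: a command is enabled only when the selection it operates on actually exists. This is purely illustrative Python, not Apple's implementation; the class and field names are my own invention.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Document:
    # What the user currently has selected: "image", "text", or None.
    selection: Optional[str] = None

@dataclass
class MenuItem:
    title: str
    # The kind of selection this command needs, or None if it is
    # always applicable (like "Save").
    requires_selection: Optional[str] = None

    def is_enabled(self, doc: Document) -> bool:
        # Enabled unconditionally, or only when the required context holds.
        return (self.requires_selection is None
                or doc.selection == self.requires_selection)

doc = Document()
crop = MenuItem("Crop Image", requires_selection="image")
save = MenuItem("Save")

print(save.is_enabled(doc))   # True: always available
print(crop.is_enabled(doc))   # False: nothing selected yet

doc.selection = "image"
print(crop.is_enabled(doc))   # True: an image is now selected
```

The point of the sketch is that the disabled state carries information: "Crop Image" greyed out tells you that you haven't selected an image, rather than the menu silently rearranging itself.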
This is the point on which so much of the contention lies: What is a good interface? As Thom Holwerda points out on OS News, how can contextual interfaces help an application that has "ten billion million features"?
For a small application such as an audio player or note pad application it’s easy to only show the most used features – but go to anything more complicated, and you are sure to hit problems at some point. Something like Word or PowerPoint has ten million billion different features, and millions and millions of users – how on earth are you going to determine which of those ten billion million features are the ones you want to show by default, and which are the ones you wish to hide? A common saying thrown around on the internet is that 90% of the people use only 10% of Word’s features – but the problem is that those 10% are different for each individual user. So, what to do you present as the few default options, and which do you hide only to be revealed upon user request?
The problem with this argument is that it relies on faulty assumptions. First of all, I disagree that each user's 10% of features is entirely different; Thom is misreading the statistic. The fact of the matter is that Microsoft Office takes an "everything but the kitchen sink" approach and includes many features that are only important to a couple percent of the user base. Does it make sense to degrade the experience of 80-90% of an application's users just to satisfy the 2 or 3% that want to be able to insert an equation? For that matter, is it appropriate for Microsoft Word to generate mailing labels? Just because everybody does it that way doesn't make it right. On Mac OS X, Address Book does all things address-related, including printing labels and envelopes. It's a lot easier than running mail merge operations with Word. The result of this design is that Microsoft's attempt at contextual user interfaces serves to make things more complicated and hence, more frustrating.
Good design is usually synonymous with simplicity. While I use PowerPoint on a day-to-day basis for our coursework development, I would hands-down prefer to use Keynote. Somehow, Apple has managed to pack every single feature that I care about into Keynote, even though it doesn’t have all the features of PowerPoint. It’s consequently easier to use, does what it does better (as opposed to doing a lot of things poorly), and lets me get my job done more easily and quickly.
Complexity for the sake of including every feature leads to bloated, complicated systems, often systems that are so complicated they fail to accomplish their original goals well. Simplicity is not synonymous with the removal of capability, and good user interface design does not mean taking away capabilities. It means rethinking them, and I for one am grateful for Apple's contribution in the space.