07 Sep

iOS 8, iPhone 6, and the Future of Interaction Design

Apple has often been at the forefront of interaction design, and whatever they introduce on September 9th, I think it’s safe to say that some people will be thrilled with the new products, some will be underwhelmed, and a lot will miss the most important long-term implications.

One way to illustrate this is to look at the introduction of the new “flat” design of iOS 7. Phasing out skeuomorphic design elements and introducing a visually simpler interface elicited a lot of comment. Some thought the change was a welcome departure from overly designed UIs, while others labeled it a shameless knockoff of Windows 8.

But what I thought was particularly interesting was the degree to which people referred to the new design as “flat”, when it was anything but. Yes, visually the UI elements on the screen lacked dimensionality. But what replaced that dimensionality was a distinctly multi-layered stack of UIs, and a slight shifting of the gestural interface to give people access to a much deeper set of content.

Think of it this way: iOS 6 and its predecessors were an increasingly ornately decorated cake – icing, frosting, and sprinkles of every shape and color. iOS 7 has replaced that frothy, whipped sugar stack with a croissant. On the surface, the croissant seems less complex, an over-simplification, a loss of essential detail. But take a bite: it’s multi-layered and textured in a way no mere birthday cake could ever hope to be. Rich, buttery, sumptuous and deeply, deeply satisfying. Just enough flour to hold the butter together.

Viewed this way, iOS 7 reveals itself as a framework for a much richer, deeper information stack. Stripping away the skeuomorphic design aesthetic paves the way for an interface that more accurately represents the many layered ways in which you can interact with the information contained therein. Notice, particularly, the manner in which you switch between applications, or the motion graphics involved in drilling down from Home screen, to Folder (and in through multiple nesting sets, if you desire), to an App, and into the information hierarchy within an app using iOS 7-native design elements. The experience has become three-dimensional, in that you are now navigating deeper and deeper into an information hierarchy. And yet app-switching allows you to quickly zoom back up to the top level, and dig down again at will.

So, two questions arise:

  1. Is this terribly new?
  2. Does it matter, and why?

The answer to the first question is definitely, “not really.” The concept of providing navigable “space” within a UI, and the ability to switch seamlessly between those spaces, goes back at least as far as the NeXT Cube, SGI workstations, and X-Windows, and possibly further. But for iDevices, the constraint of screen real estate is a real concern, and a foundational element of the design aesthetic. Regardless of how high the resolution of the screen becomes, the physical size of the screen has some upper bounds if you plan on holding it with one hand and navigating with the other. (Note: That doesn’t mean I think Apple won’t fiddle with the screen size of the iPhone on Tuesday. But you still have to be able to “get your hand around it,” as Steve Jobs once quipped.)

The answer to the second question is a bit more convoluted, but follows what I think is an important train of thought.

iOS 7 introduced a flat design aesthetic combined with a much more layered information design stack, and that combination has helped pave the way for what I believe will be the first commercially available and widely adopted 3D interface. Past attempts to introduce that kind of interaction environment failed to gain traction not because the technology to implement them didn’t exist, but because the design approach provided no easy path for people to travel along, no way for them to slowly become accustomed to the space in which they were navigating.

If we think of the interfaces that have been represented as “virtual reality” in the movies – Lawnmower Man, Disclosure, and Minority Report, just to mention a few of the more awful ones – we see a fairly common set of interactions: flat screens suspended in air, manipulated with hand-waving gestures, but ultimately no more interactive than the 2D interfaces we have now. There are just more of them, scattered about our visual field.

For people to perceive a 3D interface as intuitive, the interaction design needs to obey a set of real-world principles with which people are already familiar. Weight, volume, and placement (i.e., order) within space are essential concepts that most humans have mastered by the time they can stand upright, so that’s as good a place as any to start.

The first two of these are always going to be very, very challenging to represent in a digital environment. No matter how high-definition the screen, it’s hard to communicate a sense of volume and mass with purely digital information. You can trick the brain a bit using certain physics models, but without proprioceptive feedback, the illusion is hard to sustain convincingly.
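
As a concrete illustration of that kind of physics trick, here’s a minimal sketch using UIKit Dynamics, the 2D physics engine Apple shipped with iOS 7. The class name and the specific density and elasticity values are my own illustrative assumptions, not anything Apple prescribes:

```swift
import UIKit

// A view controller that gives a plain UIView apparent "weight"
// using UIKit Dynamics (introduced in iOS 7). The density and
// elasticity values below are illustrative assumptions.
class WeightDemoViewController: UIViewController {
    var animator: UIDynamicAnimator!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = UIView(frame: CGRect(x: 100, y: 40, width: 80, height: 80))
        box.backgroundColor = UIColor.blue
        view.addSubview(box)

        animator = UIDynamicAnimator(referenceView: view)

        // Gravity pulls the view downward, as if it had mass.
        let gravity = UIGravityBehavior(items: [box])

        // A collision boundary at the screen edges acts like a floor.
        let collision = UICollisionBehavior(items: [box])
        collision.translatesReferenceBoundsIntoBoundary = true

        // Density and elasticity tune how "heavy" the view feels
        // when it lands and bounces.
        let weight = UIDynamicItemBehavior(items: [box])
        weight.density = 2.0
        weight.elasticity = 0.3

        animator.addBehavior(gravity)
        animator.addBehavior(collision)
        animator.addBehavior(weight)
    }
}
```

Even a toy like this makes a flat rectangle feel as if it has mass, but the feel is simulated, not sensed, which is exactly the limitation described above.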

But placement in space is another matter, and it’s frankly more essential to how our brains work when they’re processing information. We are mapping animals, and our evolutionary success is due in part to our fantastic ability to remember places we’ve been (and, for example, whether the food there was tasty or poisonous). Only when you have an interface that represents the layers of real space, and a set of familiar interaction methods for navigating that space, do you have the means to provide a truly immersive and usable 3D environment.

And for about the last 12 months, Apple has been training people how to interact with that space. iOS 7 is a leap forward in interaction design not because of what it looks like or what it does, but because of what it is preparing its users for.

A device (handheld or wrist-worn, it doesn’t matter) that makes use of this training can seamlessly introduce interaction concepts such as:

  • Truly 3-dimensional displays, which show depth and stacking of objects (either transparent or opaque).
  • A set of 3-dimensional gestures that seem intuitive, but only because they mesh what we’ve long known how to do with the 2D gestures we’ve had a year’s worth of practice using.
  • An interface that expands way, way beyond the screen. When the presentation and layering of information maps to your mental model of the information space you’re working within, it doesn’t matter what’s actually showing on the screen at any one point in time. The physical gestures you’ve used to get to that particular screen are *synced* with how your brain naturally navigates a physical space, so you’re more likely to remember what isn’t currently being displayed.
  • Touchless gestures. Think Leap Motion, but embedded in the device, such that elements which appear to be floating in space can be “grabbed”, “twisted”, “pinched” and “tossed”.

Whatever Apple does or doesn’t introduce on September 9th, it seems reasonable to me that it’s but one more step on a long journey towards introducing these interaction models to a mass audience in a way that seems natural and intuitive precisely because it’s being done slowly, deliberately, and with the long game in mind.