Keep It Simple: Thoughts on Designing for Augmented Reality

We know that mobile is here to stay. But as a designer, I ask myself, “What now? What will be the next big thing?” What if designers could take their interface designs and put them in the real world? That would be pretty cool, right?

Well, the ability to have a Mission: Impossible-style interface has arrived through what are called “first-person” user interfaces.

So, what are they? Luke Wroblewski describes first-person interfaces as the following:

“First person user interfaces can be a good fit for applications that allow people to navigate the real world, ‘augment’ their immediate surroundings with relevant information, and interact with objects or people directly around them.”

Predictions suggest that first-person user interfaces will one day be as popular as graphical user interfaces. These interfaces leverage the capabilities of mobile devices, such as GPS, the camera and Internet connectivity. Luke’s graphic below shows how this emerging interface stands in relation to other types of user interfaces; it suggests that mobile devices are in a period of growth, while wearables are an emerging trend.

Source: www.lukew.com

So what should a designer focus on when creating these types of interfaces?

A first-person interface mirrors your perspective of the world. Luke says that these interfaces are typically used for applications that involve GPS or navigation. We take that natural viewpoint and use heads-up display (HUD) elements to emphasize the first-person view.

He also suggests that a designer should use colors and graphics that closely match the real surroundings. This is vital to the user experience, since users are interacting with the world as they experience it. As always when designing for mobile, designers should consider the gestures that are standard for that device. For example, if the standard action is swiping, then use a swipe action.

What are the typical HUD elements found in first-person interfaces? Luke says that data windows, markers, orientation, transparent UIs, and spatial grids are utilized within the HUD.

Let’s look at the Layar app. Basically, it is a free augmented reality application that displays digital information on top of what you see through your camera.


Source: http://5election.com/2010/06/18/augmented-reality-a-short-history-of-mobile-ar/

The designer needs to consider the negative space between the HUD elements and the background. In the Layar app, for example, the underlying imagery varies widely, so the graphical HUD elements should be flat and non-photographic so that they stay legible on top of the real-world imagery. Augmented reality systems superimpose graphics for every perspective and adjust to every movement of the user’s head and eyes. In the upper-right area of the Layar interface, an indicator displays the user’s current location.

Imagery and content also reside within the space to help inform the user about a specific location. For example, the Layar interface shows direction symbols and contact information once an icon is selected. In addition, the device’s awareness of ambient light and dark environments must be considered to keep the interface legible. The Layar app fades the icons that recede in space and renders the icons closest to the user at full opacity. It is important for the designer to keep navigation minimal and use transparency to convey information to the user with ease.

Below is another app, Nearest Places (which has since been folded into Augmented Reality Browser). It uses appropriately sized targets and maintains spacing between the targets very well.

Note the great use of contrast between the brightly colored icons and the dark backgrounds of the markers. Since this app is set in a city, the designer anticipated that the background environment would likely appear textured and muted. This is why the solid, bright, non-textural markers and icons work so well within the application.

You can also see that they focused on navigation first, then interaction design, which is vital to the success of the application. The capabilities of the device, such as positioning, motion, orientation and proximity, were considered. The app shows the user’s position and direction, effectively making the device an extension of the user.


Source: https://itunes.apple.com/us/app/new-york-nearest-subway/id323100520?mt=8

I find that iconography is vital to the success of this type of interface. If designers don’t focus on symbolism and research in semiotics, this type of interface may see only minimal success. Consider the massive amount of research that went into designing highway road signs. I have no doubt that we will need to do the same with first-person user interfaces so that there is great clarity for the user.

The beauty of a first-person interface lies in its ability to directly translate real-life perspectives. After reading about first-person interfaces, I remembered the old acronym KISS: Keep It Simple, Stupid. Simplify, simplify, simplify.

A designer needs to remember that content comes first when designing for mobile, and to shed unnecessary details. Think of the device as an extension of yourself.


Float Symposium Update

It’s now free as part of the entire Techweek program. More information on speaker topics is available at floatlearning.com/symposium.

Upcoming Float Symposium


Iona’s sister company, Float Mobile Learning, will hold its second annual one-day symposium on mobile learning this summer in Chicago.

As you may know, Float developed from The Iona Group in 2010. Iona has been in business since 1984, and one of its core practices throughout that period has been eLearning. Though similar, mobile learning is not just eLearning ported directly to a mobile device. Recognizing this difference, Iona branched out in 2010 to start a new venture called Float. Float, under the guidance of managing director Chad Udell, has worked with Fortune 500 and other industry-leading companies such as Caterpillar, Pioneer Hi-Bred and Wiley Publishing.

This year’s Float Mobile Learning Symposium will take place at the brand-new startup incubator 1871 inside the Merchandise Mart on Monday, June 25, in conjunction with Techweek, an event attended by thousands of people interested in technology. The Symposium will feature experts from outside organizations such as Groupon’s Shay Howe, mobile experience author Steven Hoober, and Aaron Silvers, the chief learning officer of Problem Solutions. Our speakers have expertise in mobile design, development, and strategy work.

The intent of this Symposium is to bring current and future thinking on mobile learning to organizations. Float wants to continue to spark interest in how best to get your organization thinking about mobile learning. The Symposium lasts a full day: half focused on business and strategy, the other half on design and development.

The price for the Float Mobile Learning Symposium 2012 is just $79 through Friday, May 4. After May 4, the price increases to $99. The registration fee covers entry into the event, as well as a continental breakfast and lunch.

Click here to view more information about this year’s Float Mobile Learning Symposium.

Importance of Prototyping

Here is an image from “paper prototype” testing we performed recently for an experiential learning system we are building for the National Sequestration Education Center (NSEC) in Decatur, IL.

The user testing was for the Sequestration Technology Educational Learning Array (STELA) we are developing for the NSEC at Richland Community College. Sequestration? Basically, it is a relatively new technology that captures surplus carbon emissions from coal-fired power plants or ethanol plants and stores this material safely underground. STELA will show visitors how this process works and why it is important.

Building interactive experiences that entertain and engage their audiences is always a challenging job. Quite a bit of careful planning and work goes into strategy, graphic design, content creation, moodboard development, wireframes, graphical user interface design, etc. At The Iona Group, we place a high value on audience feedback and testing. We schedule rigorous testing sessions at several points in the design and development process to make sure that users “get it” and are able to explore the material in a fun and intuitive way. Invariably, our test participants give us valuable direction and surface new issues we have to solve.

Test participants last week ranged from 8 years old to thirtysomething. One thing was universal: participants were driven much more by graphical information than by written instructions. Every testing session we conduct confirms this observation. People want to be excited, jump right into an experience, and be entertained and engaged. If they happen to learn something along the way, that’s OK. We understand this reality and have built many installations that have engaged audiences and achieved their intended learning objectives.

Several of our assumptions about game play were upheld by our testers. Most users were able to move through the experience without issue. It also became apparent that some elements needed to move to different locations to increase overall interaction.

It is far better to discover this now than after we have built graphics, animations and programming that would need to be revised. Paper prototyping is almost always a bumpy ride, but it’s a great discipline that pays off with insights that make the rest of the development process go more smoothly.

Five Ways to Improve Your Graphic Design for Mobile

Over the past few months, I have been researching and practicing mobile design from a graphic designer’s perspective. As a graphic designer, I wanted to learn about the features, aesthetic direction, specifications and constraints of working on a mobile design. Through hands-on creation, online classes and research, I discovered how to create more effective and innovative mobile designs.


1. Aesthetic and Layout

As a designer, I’ve learned that my choice of metaphor dramatically affects the design aesthetics of the interaction; it brings meaning to the design. Interactive elements should be visible, recognizable, reactive (i.e., they elicit feedback), safe and consistent.

A good mobile design does what the user expects it to do and has positive responses to the interactions. I think it is important to consider what the audience desires or likes through an emotional connection with the design. The look and feel should be consistent and designers should obtain any corporate branding materials from the client during the definition phase of the project.

Jen Gordon of Tapptics suggests that when selecting a color palette, a designer should consider the variety of environments in which people use a mobile device. Designing high-contrast elements helps with legibility and readability. She also recommends setting a character-count limit for body copy, which keeps the layout less cluttered during development of the app.

Jen also noted that a four- or five-column layout is best for an app, because the standard iOS tab bar at the bottom can hold no more than five icons. It is best to create a column for every tab element within the design.
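The arithmetic behind that grid is simple. Here is a minimal sketch (the function name and example values are my own, not from the source) of deriving a column width from the tab count on a classic 320-point-wide iPhone screen:

```python
# Sketch: one grid column per tab-bar item; the iOS tab bar holds at
# most five icons, so a five-tab app suggests a five-column grid.
def column_width(screen_pt, tabs, gutter_pt=0):
    """Subtract the gutters between columns, then split the remaining
    width evenly across one column per tab."""
    return (screen_pt - gutter_pt * (tabs - 1)) / tabs

# Five tabs on a 320 pt screen with no gutters -> 64 pt columns.
print(column_width(320, 5))  # 64.0
```

With gutters, the columns simply shrink to absorb them; the point is that the grid and the tab bar stay aligned.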


2. Process

I found that Brian Fling’s mobile design process is best suited to a graphic designer working on mobile. It is also clear and concise. I have listed Fling’s process below:

“IDEA The first thing we need is an idea that inspires us.
NEEDS & GOALS Identify a basic need with our desired user.
CONTEXT The circumstances where information adds value.
STRATEGY How we can add value to the business.
DEVICE PLAN Choose the devices that best serves our audience.
DESIGN Create a user experience based around needs.
PROTOTYPE Test the experience within the context.
DEVELOPMENT Put all the pieces together.
TESTING And test, and test, and test some more.
OPTIMIZATION Reduce all assets to its lowest possible size.
PORTING Adapt for other devices that fit our strategy.”


3. Features and Gestures

When designing, every screen should have a defined purpose and gestural interaction. It is important to plan out the gestures the user will perform during storyboarding/wireframing. A designer should define the navigational flow when creating sketches. All of the sketches should be rough at first.

Next, a second round of sketches should be created within a grid structure, with more detail and written definition. I have found that this second round of sketches can be used in a basic prototype.

As a result, navigation and user-experience issues that would otherwise surface later in the process can be addressed before any art is created. With a prototype made from sketches, usability can be tested before any graphics are created in Adobe Photoshop or Illustrator.


4. Guidelines/Specifications

Because there are so many different platforms and devices, it is important to know what the specific platform and device will be for the end product. A designer needs to know the dimensions, orientation, sensors and inputs of the device, and whether the platform is iOS, Windows or webOS. These choices affect the size of the assets handed to the developer and the overall resolution.

I discovered that when complex designs are displayed on different mobile devices, the limited color depth on one device can cause banding, or unwanted posterization, within a graphic. From a production standpoint, it is good to work in Adobe Photoshop with vector smart objects brought in from Adobe Illustrator. It is best practice to use vector shape layers rather than bitmaps. If bitmaps have to be used, it is important to recreate them as shape layers or smart objects.

Here is a list of the specifications for various iOS devices:

For iPhone (Retina display)
640 px wide
960 px high
72 ppi

For iPhone (non-Retina display)
320 px wide
480 px high
72 ppi

For iPad (portrait)
768 px wide
1024 px high
72 ppi

For iPad (landscape)
1024 px wide
768 px high
72 ppi

Source: http://bjango.com/articles/extrapixels/
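The Retina dimensions above are exactly double the non-Retina point dimensions. That relationship can be sketched as a tiny helper (the function name and example are my own illustration, not from the source):

```python
# Sketch: convert a design size in points to pixels using the device
# scale factor (1x for non-Retina, 2x for Retina displays).
def asset_pixels(width_pt, height_pt, scale):
    """Multiply each point dimension by the scale factor to get pixels."""
    return (width_pt * scale, height_pt * scale)

# The 320 x 480 pt iPhone canvas at 2x yields the 640 x 960 px Retina spec.
print(asset_pixels(320, 480, 2))  # (640, 960)
```

Designing in points and exporting at each scale factor keeps one layout working across both display densities.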

I discovered that it is important to design everything at the smallest size first, then scale up to the largest, to maintain fine details and avoid problems with composition and limited screen real estate. When prepping files for the developer, make elements the size at which they will appear in the final application. For example, a vector smart object from Adobe Illustrator should be at the final size it will be exported at from Adobe Photoshop.

There are some good practices to follow when saving out files for a developer. For example, leaving a pixel buffer around images allows graphics to render better. PNG is the best file type for all graphics provided to the developer: when any other file format is used, the iPhone has to do the same processing that Xcode does, but at run time rather than at build time. In short, the app will run slower if anything other than PNGs are used.
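As a sketch of these export conventions, here is how the padded size and the 1x/Retina PNG filenames might be computed (the function names are my own; the @2x suffix is Apple’s standard naming for Retina image assets):

```python
# Sketch of two asset-export conventions: a 1 px transparent buffer
# around each image, and paired 1x / @2x PNG filenames for iOS.
def padded_size(width_px, height_px, buffer_px=1):
    """Image size after adding a transparent pixel buffer on every edge."""
    return (width_px + 2 * buffer_px, height_px + 2 * buffer_px)

def export_names(base_name):
    """Return the 1x and Retina (@2x) PNG filenames for an asset."""
    return [f"{base_name}.png", f"{base_name}@2x.png"]

# A 30 x 30 px icon with a 1 px buffer becomes 32 x 32 px.
print(padded_size(30, 30))       # (32, 32)
print(export_names("tab_icon"))  # ['tab_icon.png', 'tab_icon@2x.png']
```

The buffer keeps anti-aliased edges from being clipped when the graphic is composited or scaled, and the @2x naming lets iOS pick the right density automatically.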

For iOS apps, designers also need to save out app icons in various sizes for developers. Apple’s iOS Human Interface Guidelines explains them.


5. Usability

As with any design project, the user and target audience need to be a major factor throughout every phase of the project. When designing for mobile, designers need to consider the persona of the user: when, how often and where they will use the device. It is important to remember that, unlike iPhone users, iPad users won’t always have Internet access.

In addition, it is more effective to design a website for mobile first and then design for the desktop. “Users don’t complain about lack of features,” said Red Foundry CEO Jim Heising. “Users complain about features that don’t work.” Designers need to compress content where possible, and avoid providing unnecessary content.


I’ve learned that a designer needs to create and test the app design during sketching and wireframing in Adobe Photoshop. Buttons have only two states: inactive and active. Test designs on the device they will live on at the end of the project; I’ve used FieldTest and found it very efficient and effective. A designer needs to consider the tactile interaction design and use icons and buttons that are recognizable in mobile design. This makes the app more usable and efficient for the user.