
Interactive design for all – 3. Interface

25 September, 2013

3. Interface

Hello again. In this post I will explain how to understand an interactive interface and what tools we have to build a correct user experience. I'll start with the definition of interface according to the RAE: a physical and functional connection between two devices or independent systems. It is the medium of connection between an electronic device and a human: a screen, a keyboard, sensors, … It is at this point that the definition of graphic designer expands beyond visual design alone. Nowadays an interactive designer must understand more than graphics; they must know all the interactive capabilities that current devices provide us.

The interface joining the user and the device should be designed to help its use. If we create an interface that is difficult to understand, we confuse users, who will stop using it and may leave bad reviews. Clarity and focus are key. Calls to action should be clearly marked on all devices, small or large. Following these steps, our users will be guided properly. We can anticipate user decisions and help them by adding inline examples of the input we expect. Some devices today fire events by tracking eye movement, for example to pause/resume a movie or scroll up/down a web page. Knowing about these techniques makes us aware of a wide range of possibilities for providing a rich user experience. If you want to continue reading about this, I recommend reading about augmented reality and its applications. The natural user interface is the way we interact with devices without traditional input devices, using gestures or body movements instead: touchscreens or game controllers like the Wiimote or Kinect.

As interface designers, we cannot control the making of devices, nor the physical buttons on them. What we can control is how our inputs are displayed on screen and how they can be used. We must give importance to the size and position of these inputs (buttons, menus, …), but it is also very important to understand the user and how they use these elements. A drag is not the same as a click, nor as a move. That is why we should know the dictionary of gestures that a given terminal can process. It is also important to properly illustrate the use of these movements; in many cases we cannot display a usage tutorial as console games do. Following the earlier point, a drag is a harder gesture to recognize than a click. How do we explain, then, that shaking the terminal erases the content of the form on screen?
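As an illustration, here is a minimal sketch in plain JavaScript of how a tap can be told apart from a drag, and how a shake might be detected. The #target selector, both thresholds and the clearForm() helper are my own assumptions, not part of any standard recipe.

    // Telling a tap apart from a drag with pointer events.
    const target = document.querySelector('#target'); // placeholder element
    const DRAG_THRESHOLD = 10; // px of travel before we call it a drag
    let startX = 0;
    let startY = 0;

    target.addEventListener('pointerdown', (e) => {
      startX = e.clientX;
      startY = e.clientY;
    });

    target.addEventListener('pointerup', (e) => {
      const travel = Math.hypot(e.clientX - startX, e.clientY - startY);
      if (travel < DRAG_THRESHOLD) {
        console.log('tap');  // barely moved between down and up
      } else {
        console.log('drag'); // the pointer travelled across the screen
      }
    });

    // Shake detection with the devicemotion event (mobile browsers;
    // some ask for the user's permission first). The threshold is arbitrary.
    window.addEventListener('devicemotion', (e) => {
      const a = e.accelerationIncludingGravity;
      if (a && Math.abs(a.x) > 25) {
        clearForm();
      }
    });

    // Hypothetical helper: empty every field of the on-screen form.
    function clearForm() {
      document.querySelectorAll('form input').forEach((el) => { el.value = ''; });
    }

Even then, a shake-to-erase gesture still has to be explained to the user somewhere on screen; the code only solves the recognition half of the problem.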

When creating our interactions on screen, we must be aware of the size to use for these elements. In the case of a mouse, buttons should be large enough to see, point at and click. The same happens with a finger: the size of the boxes and the spacing between them should be respected and kept standard. MIT recommends a button size between 8 and 10 mm. That is a physical size, different from a size in pixels, because each screen has its own pixel density. To handle this we should use the default sizes of interpreters and browsers, scaling dimensions from that basis; em or ex units are useful in web design.
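As a sketch of this idea in CSS, the rule below sizes tap targets with relative units. The 2.75em figure is an assumption: at the browser's default 16px font size it comes to 44px, which lands near the 8 to 10 mm recommendation on common screens, but it is not an exact physical conversion.

    /* Touch-friendly sizing built on the browser's default font size. */
    button,
    .tap-target {
      min-width: 2.75em;   /* about 44px when 1em = 16px */
      min-height: 2.75em;
      margin: 0.5em;       /* spacing so adjacent targets are not fused */
    }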

Think also about the variety of effects we can use, but keep in mind that these may be lost from terminal to terminal. A clear example is the hover effect in web environments: smartphones fire different events than computer screens with a mouse. I recommend avoiding :hover and :focus.
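If hover effects are kept anyway, one hedged option is to scope them to devices that can actually hover, using the hover media feature from CSS; the nav a selector below is a placeholder.

    /* Apply the effect only where a real hovering pointer exists. */
    @media (hover: hover) and (pointer: fine) {
      nav a:hover { text-decoration: underline; }
    }

    /* Touch devices get the same affordance without depending on hover. */
    @media (hover: none) {
      nav a { text-decoration: underline; }
    }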

Forms are an important part of an interface. It is necessary to give the user help with inputs. For this we can use modal boxes with inline instructions. We can also use explanatory text that disappears when the field is focused, to allow filling it in. I recommend simplicity in our design, to avoid overloading the form with information. There are other helpful techniques, such as text completion, auto-correction of fields such as emails, or automatically converting text to uppercase.
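Here is a small sketch of these helpers in plain HTML; the field names and texts are illustrative assumptions.

    <!-- placeholder: explanatory text that disappears once the user types.
         autocomplete: lets the browser fill in values it already knows.
         autocapitalize: controls automatic uppercase on mobile keyboards. -->
    <label for="email">Email</label>
    <input id="email" name="email" type="email"
           placeholder="name@example.com"
           autocomplete="email"
           autocapitalize="none"
           required>

    <label for="name">Name</label>
    <input id="name" name="name" type="text"
           autocomplete="name"
           autocapitalize="words">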

In web environments, we can adapt to the visible screen size of the terminal using the HTML viewport tag and CSS media queries. These tools help us manage differences in pixel density (ppi). Pixel density affects how big or small an item appears on screen: the higher the pixel density, the physically smaller items are. It is also helpful to use these tools to add, alter or remove items according to the conditions of the screen. Web menus are an example of this.
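To make this concrete, here is a minimal sketch; the breakpoint, the .menu selector and the image name are illustrative assumptions. First the viewport tag, placed in the page's head:

    <meta name="viewport" content="width=device-width, initial-scale=1">

Then media queries in the CSS, one reacting to the visible width and one to the pixel density:

    /* Hide the full menu on narrow screens; a hypothetical .menu-button
       would open it instead. */
    @media (max-width: 600px) {
      .menu { display: none; }
      .menu-button { display: block; }
    }

    /* Serve a sharper image on dense screens (some older browsers use
       -webkit-min-device-pixel-ratio instead of dppx). */
    @media (min-resolution: 2dppx) {
      .logo { background-image: url(logo@2x.png); }
    }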

Finally, we should also think about how the device will be experienced. One metre away from the user? On a table? With keyboard and mouse? Is the interaction done via a remote control? A monitor on the wall? Projected onto glass? In many cases it is preferable to design and create a different interface for each kind of device, although this is more expensive.

I would like to recommend reading about adaptive design. It will help us understand the limits and the importance of displaying our content properly.

In the next post I will talk about the main target of our interface design: the user. I will explain how we should see users interacting with our interface, so we can provide greater ease of navigation and a positive experience that makes them come back.

 

image by http://madebyvadim.com/
