The new UX: Voice, AR, and an evolutionary customer experience

Voice has become a primary mode of interaction for many modern devices. Smart speakers like Amazon Echo and Google Home let you interact entirely through voice commands, and most smartphones include a virtual assistant, affording hands-free, voice-enabled control.

The shift to voice-controlled devices presents a new series of UX and design philosophies that require a general understanding of natural language and communication. It is comparable to how mobile touchscreens and touch interfaces transformed UX design. The difference, of course, is that voice interactions are much more hands-off.

Consumers expect to be able to talk to their devices and platforms and receive the appropriate response or interaction. Telling an entertainment system to play music, for example, isn’t just about playing random tunes: It’s about playing the artist, song or genre someone requests. You could argue voice interactions are much more nuanced than other interfaces.

Voice isn’t the only source of transformation in the industry, however. Augmented reality and virtual reality platforms also require a unique form of UX design. Unlike the VR tech of years ago, this time the platform is really making an impact and permeating multiple channels. You might have a VR experience on a smart TV, a connected media system, a smartphone or a computer.

As Generation Z becomes a larger percentage of the consumer market, companies have seen the benefits of mixing technology and entertainment to create an engaging customer experience. So, in the modern landscape, UX designers must know how to develop and work with voice interfaces, as well as AR and VR platforms. My, what a world we live in!

Next-Generation UX Design

With each new generation of interfaces and control schemes, the goal remains the same. We, as humans, are trying to find more effective, reliable and faster ways to interact with technology. By comparison, keyboards, mice and even touchscreens are sluggish channels for direct interaction.


The largest obstacle has always been the communication aspect. It takes a lot of processing power to collect speech and voice interactions, and then translate them for use with inherently visual interfaces. Luckily, we’ve reached the point where the computational power is not just available, but highly accessible, which is why we’re seeing a slew of voice-enabled devices entering the market. Cloud computing and remote technologies also make this possible, as the processing and computations get handled in the cloud, as opposed to locally.

So, rather than looking at this as a new direction, it’s more akin to the next step in the evolution of UX design. In other words, voice interaction was virtually guaranteed to be part of the natural journey to better, more advanced UX interfaces. The same is true of VR, which is merely a more immersive form of traditional visual interactions.

What Do Voice and VR Mean for Designers?

For voice-enabled UX design, words and speech matter more than ever. For VR and AR, visual interactions and digital environments are the focus. Traditional UX design is built around 2D visual elements on largely flat surfaces; VR and voice both transcend that approach.

With voice, you must be able to accurately anticipate what users are going to say, and what those words mean to them. The latter is important because the same word doesn’t always mean the same thing to different people, and different words can be used interchangeably for the same action.

VR is arguably simpler, in this regard, because you’re still focused on a visual experience — though auditory experiences come into play, too.

With voice, unlike standard interfaces, you cannot rely on visuals or image content to articulate processes and labels. Animation won’t work, either, especially when it comes to communicating complex concepts and actions. Designers must instead develop entirely new experiences and responses that deliver easily understandable cues to the audience.


Perhaps the most jarring change for UX designers moving to voice-enabled platforms is that the fundamental visual elements, like a clickable link or image, no longer exist.

VR and AR pose a comparable challenge, except that they still rely on visual elements. With these platforms, the focus shifts to how those elements translate into the user’s physical and environmental surroundings.

User Intent Takes Center Stage

With both platforms — voice and VR — it’s less about what the user is saying or doing specifically, and more about what they intend.

Saying “delete” or “remove,” for example, can mean different things in various scenarios. Maybe you want to temporarily disable or remove something from view, but not delete it completely.
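One way to picture this ambiguity is as a mapping from an utterance plus its context to a concrete intent. The sketch below is purely illustrative; the function, context names and intent labels are all hypothetical, not part of any real voice platform's API.

```python
# Hypothetical sketch: the same spoken command resolves to different
# intents depending on where the user says it.

def resolve_intent(command: str, context: str) -> str:
    """Map an ambiguous voice command to a concrete intent.

    "delete" and "remove" are treated as interchangeable, but the
    resulting intent depends on the current context.
    """
    command = command.lower().strip()
    if command in ("delete", "remove"):
        if context == "photo_viewer":
            return "hide_from_view"      # non-destructive: remove from view only
        if context == "trash_folder":
            return "delete_permanently"  # destructive: user is already in the trash
        return "move_to_trash"           # safe, reversible default elsewhere
    return "unknown"

print(resolve_intent("Remove", "photo_viewer"))   # hide_from_view
print(resolve_intent("delete", "trash_folder"))   # delete_permanently
```

The design choice to illustrate here is that the ambiguous command defaults to the reversible action, so a misheard or misunderstood request never destroys data.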

Compare that to gestures and interactions in VR. Swinging your arm in an arc, for instance, can have varying uses and meanings. You could be swinging a virtual object, or gesturing for someone else to step aside.

With modern UX design, user intent takes center stage and becomes your most important concern. What does the user want to happen? What are they trying to achieve?

As we move into the future of UX, maintaining engagement across the varying stages of an interaction becomes crucial to delivering a positive, convenient experience. More importantly, deciphering and translating those commands on the fly becomes even more instrumental.

In this way, the customer experience of the future is worlds apart from that of generations past. Are you ready for the change?

About the author

— Nathan Sykes

Nathan is a technology and business writer interested in the impacts of new technologies on enterprise and society. Check out his blog to stay up to date with his latest articles.
