What is Zero UI and What Does it Mean for Mobile?


Posted by Bojana Lazarevska on 12 Dec, 2019

Today, we hardly go one minute without interacting with a screen.

The majority of user experiences are determined by a click, tap, or scroll. Our smartphones, in particular, are our main source of connectivity, entertainment, and information.

But what if we lived in a world where we could interact with our devices without touch? That’s a zero UI world. As Google’s CEO, Sundar Pichai, declared, the future of devices is actually the end of devices:

We will move from a mobile-first to an AI-first world.

What Even is Zero UI?

In its simplest form, zero UI means that you can interface with a device or application without a touchscreen.

With the rapid uptake and increased sophistication of IoT-driven devices, touchscreens will eventually become obsolete. Zero UI will allow us to communicate in ways that are more natural for us, such as voice, movements, and even glances.

Smart speakers like Google Home and Amazon Echo, and voice assistants like Apple’s Siri, are already interacting with users without a touchscreen. Companies like Magic Leap are even creating mixed reality (MR) experiences, whereby users’ movements determine how they interact with the device.

The goal of zero UI is for devices to communicate with humans in a “language” that we understand, rather than us bending to learn theirs.


Beyond the Screen

Since zero UI is a move away from physical interactions with our devices, the aim is for users to interact with them in ways that don’t include touch.

Technologies leading the change toward a zero UI world include:


Voice-Based Interfaces

Photo by Sebastian Scholz (Nuki) on Unsplash


“Ok Google, what’s the weather like today?”

Most of us are familiar with voice-based interfaces in one way or another. IBM was the first to introduce a voice-controlled device with its Shoebox back in the early 1960s. But it wasn’t until the introduction of Apple’s Siri that modern-day voice assistants as we know them really started to take off.

Soon, Amazon’s Alexa, Google Assistant, and Samsung’s Bixby followed in its footsteps. The reason behind the popularity of these interfaces is clear: speech is a much more natural, human way of communicating. Because of this, however, consumers’ expectations of how voice should work are high. They expect a certain level of fluency in human nuances and intricacies.

With voice-based assistants, businesses have more opportunities to inject personality into their brand. The more “human” a brand makes its assistant, the more likely consumers are to relate to it and form loyalty. Advancements in machine learning and natural language processing are making this increasingly easy.

At the moment, however, voice control is quite linear. Our interactions with voice-controlled devices are fairly simple. We can ask Amazon’s Alexa a question, but we can’t ask a question, tell it to perform an action, and then ask a follow-up all in the same exchange. For these devices to mimic how humans interact in real life, they will need to become much more multi-dimensional.
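
To make that “linear” limitation concrete, here is a minimal sketch of a single-turn voice command handler, assuming the open-source SpeechRecognition package for Python; the handle_command mapping is invented purely for illustration. Each exchange is one utterance in, one response out, with nothing remembered between turns.

```python
# Minimal single-turn voice command sketch.
# Assumes the open-source SpeechRecognition package
# (pip install SpeechRecognition pyaudio).
# Each run is an isolated exchange -- nothing is carried over to the next
# request, which is exactly the "linear" limitation described above.
import speech_recognition as sr

def handle_command(text: str) -> str:
    """Map a recognised phrase to a canned response (illustrative only)."""
    text = text.lower()
    if "weather" in text:
        return "Looking up today's weather..."
    if "play music" in text:
        return "Playing your playlist..."
    return "Sorry, I didn't understand that."

def main() -> None:
    recogniser = sr.Recognizer()
    with sr.Microphone() as source:
        recogniser.adjust_for_ambient_noise(source)
        print("Listening...")
        audio = recogniser.listen(source)
    try:
        spoken = recogniser.recognize_google(audio)  # one request, one reply
        print(handle_command(spoken))
    except sr.UnknownValueError:
        print("Could not understand the audio.")
    except sr.RequestError as err:
        print(f"Speech service unavailable: {err}")

if __name__ == "__main__":
    main()
```

A genuinely multi-dimensional assistant would wrap this loop in a dialogue manager that carries state, such as the previous question and any pending actions, from one turn to the next.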


Gesture Control

In 2015, Google announced Project Soli, a chip that allows users to gesture above devices to control them.

The aim of Project Soli is to make hands the only user interface you’ll ever need. The Google Pixel 4 has now integrated this tiny radar chip to detect motion. This means users can swipe or wave to complete certain actions, which can come in handy if they’re running on a treadmill, for example.

Google’s Pixel 4, image from Popular Mechanics


The Pixel 4 is Project Soli’s first major commercial implementation. Google refers to the feature as “Motion Sense”, and it supports three types of interaction:

Presence – When the phone is facing up or outward, the chip can sense when the user is nearby, within around a foot or two. If the user is not nearby, the display is turned off.

Reach – If the phone senses you’re reaching for it, it will turn on the screen and activate the face unlock sensors.

Gestures – The chip currently recognises only two gestures: swiping and waving. You can give the phone a quick wave to turn off or snooze an alarm, or swipe left or right to control music, such as skipping a song in the queue. (A rough sketch of how an app might map these interactions to actions follows below.)
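
Google hasn’t published a general-purpose Motion Sense API for developers, so the snippet below is purely a hypothetical sketch of how an app might map the three interaction types above to actions; SenseEvent and dispatch are invented names, not part of any real SDK.

```python
# Hypothetical sketch only -- not the real Soli / Motion Sense API.
# It shows how the three interaction types above could map to actions.
from enum import Enum, auto

class SenseEvent(Enum):
    PRESENCE_LOST = auto()   # user has moved out of range
    REACH = auto()           # hand approaching the phone
    SWIPE_LEFT = auto()      # gesture: swipe
    SWIPE_RIGHT = auto()     # gesture: swipe
    WAVE = auto()            # gesture: wave

def dispatch(event: SenseEvent) -> str:
    """Map a sensed event to the behaviour described in the list above."""
    actions = {
        SenseEvent.PRESENCE_LOST: "turn off the display",
        SenseEvent.REACH: "wake the screen and arm face unlock",
        SenseEvent.SWIPE_LEFT: "skip to the previous track",
        SenseEvent.SWIPE_RIGHT: "skip to the next track",
        SenseEvent.WAVE: "snooze the alarm",
    }
    return actions[event]

if __name__ == "__main__":
    for event in SenseEvent:
        print(f"{event.name:>14} -> {dispatch(event)}")
```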

Motion Sense is still in its infancy, so it will be interesting to see this type of technology progress. Although it doesn’t seem as if it accomplishes much, that’s actually the whole point, according to Brandon Barbello, Pixel Product Manager at Google:


It isn’t that it’s so much better that you’re going to notice it. It’s that it’s so much better, and you’re not going to notice it. [You will just think] that it’s supposed to be this way.

Brandon Barbello, quote from The Verge


The Impact on Mobile

Advancements in voice technology, AI, and motion sensors suggest we are undergoing a significant shift in how we interact with technology. This progress raises a whole set of new questions, such as: how will it impact mobile, and apps in particular?

App-dependent businesses may need to shift their approach. The movement toward a UI-less environment doesn’t necessarily mean that apps are dead, however. It simply means that the way apps are designed and built will need to change to reflect this movement.

Apps will need to evolve into services: intelligent, purpose-built, and informed by circumstances like location, hardware sensors, previous usage, and predictive computation.
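
As a rough illustration of what “informed by circumstances” could look like, the hypothetical sketch below folds location, time of day, and previous usage into a single proactive suggestion. The Context fields and rules are invented for illustration; a real service would likely replace the hand-written rules with a trained model.

```python
# Hypothetical sketch of a context-aware "app as a service" decision.
# All fields and rules are illustrative, not taken from any real product.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    location: str      # e.g. "home", "gym", "office"
    hour: int          # local hour of day, 0-23
    last_action: str   # most recent thing the user did in the app

def suggest(ctx: Context) -> str:
    """Return a proactive suggestion instead of waiting for a tap."""
    if ctx.location == "gym":
        return "Start your workout playlist?"
    if ctx.location == "home" and ctx.hour >= 22:
        return "Dim the lights and set tomorrow's alarm?"
    if ctx.last_action == "ordered_coffee" and 7 <= ctx.hour <= 9:
        return "Reorder your usual coffee?"
    return "No suggestion right now."

if __name__ == "__main__":
    now = datetime.now()
    print(suggest(Context(location="gym", hour=now.hour, last_action="opened_app")))
```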

As well as this, the usage of messaging bots will significantly increase, and businesses should take advantage. We are now spending more time in messaging apps than on social media, which is an important turning point.

Zero UI also means that designers will need to rely more heavily on data and AI. Taking a non-linear approach to design means that vastly different tools and skill sets will go into designing apps. Instead of designing a single, predictable workflow around what a user is probably trying to do in the moment, designers will need to think about what a user might do across any possible workflow.

Nest Thermostat, image from All Home Robotics


In the future, interfaces will become more automatic and predictive. A good example of this is the Nest Thermostat – you set it once, and then it learns to anticipate how you interact with it from there.
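
The “learns to anticipate” step can be surprisingly simple in principle. The sketch below is a hypothetical, heavily simplified heuristic that averages the user’s past manual adjustments for each hour of the day; it is not Nest’s actual algorithm.

```python
# Hypothetical, heavily simplified "learning thermostat" heuristic:
# average past manual adjustments per hour of day, then use that average
# to predict the setpoint for the current hour. Not Nest's algorithm.
from collections import defaultdict
from statistics import mean

class LearningThermostat:
    def __init__(self, default_temp: float = 20.0) -> None:
        self.default_temp = default_temp
        self.history: dict[int, list[float]] = defaultdict(list)

    def record_adjustment(self, hour: int, temp: float) -> None:
        """Remember a manual adjustment the user made at a given hour."""
        self.history[hour].append(temp)

    def predicted_setpoint(self, hour: int) -> float:
        """Anticipate the user's preference from past behaviour."""
        past = self.history.get(hour)
        return mean(past) if past else self.default_temp

if __name__ == "__main__":
    nest = LearningThermostat()
    nest.record_adjustment(hour=7, temp=21.5)
    nest.record_adjustment(hour=7, temp=22.0)
    print(nest.predicted_setpoint(7))   # 21.75 -- learned preference
    print(nest.predicted_setpoint(14))  # 20.0  -- falls back to default
```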


Where to From Here?

Although we’re not in a zero UI world yet, businesses need to be forward-thinking about creating seamless experiences for users. With the push toward deeper technological integration, it’s important that businesses reflect on their current mobile strategy and how this shift may impact them.

You can do this by conducting a mobile health check and taking a holistic view of your current mobile solutions. At Future Platforms, we have designed a seven-step mobile health check specifically for businesses, helping them prepare for the next decade of mobile, not just the now.

We can look at your existing offerings and make informed decisions tailored to your unique business needs. Get started with a free consultation today – find out more here.

Get in touch with Future Platforms
