The faceless interfaces

Posted 9 years ago

Siri, Cortana, Google Home, SlackBot, the Star Trek computer and, to a lesser extent, K.I.T.T.: these are all interfaces without an actual ‘face’. The Internet of Things connects personal assistants such as Echo, Jibo or Zenbo with your home. I tend to call them assistants rather than bots. I believe there is a nuance: bots are more suited to performing the same repetitive task over and over again, while assistants are more oriented towards user interaction with changing contexts and tasks.

Everything is connected: machine to machine, human to machine and vice versa.

Traditionally, we used to call ourselves frontend developers, to distinguish ourselves from the (equally traditionally named) backend developers. We take pride in showing beautiful interfaces, for we are responsible for the most visible part of a digital product: a dashboard, control panel, web shop, blog or any other online medium. The front line.

That front line seems to be losing ground to the digital assistants. And why not? I think small tasks are perfectly suited for assistants. Requests such as “what is my next appointment?” or “make this room warmer, please!” are more intuitive when posed as a question than when performed as a tedious task on, say, a smartphone interface.

Note that the second command needs a bit more intelligence than the first. What room are we talking about? How warm is it now, and what is an acceptable rise in temperature? The challenge also lies in anticipating what information a user needs in order to make a decent call. In the case of the room temperature, it might be handy to report the current temperature first: you might decide it is better to dress up instead of heating up.
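A minimal sketch of what that gap-filling could look like, in TypeScript. Every type, name and default below is hypothetical, purely for illustration:

```typescript
// All types, names and defaults below are hypothetical.
interface ThermostatContext {
  room: string;        // e.g. resolved from which device heard the command
  currentTemp: number; // degrees Celsius
}

interface AdjustTempIntent {
  kind: "adjustTemperature";
  room?: string;  // often missing: "make *this* room warmer"
  delta?: number; // often missing: how much is "warmer"?
}

const DEFAULT_DELTA = 1.5; // assumed acceptable rise in °C

function handleAdjustTemp(intent: AdjustTempIntent, ctx: ThermostatContext): string {
  // Fill the gaps the user left open with context and defaults.
  const room = intent.room ?? ctx.room;
  const target = ctx.currentTemp + (intent.delta ?? DEFAULT_DELTA);

  // Surface the information the user needs to make a decent call,
  // instead of silently acting.
  return `It is ${ctx.currentTemp}°C in the ${room}. ` +
    `Shall I raise that to ${target}°C, or would you rather dress up?`;
}

// "Make this room warmer, please!" — no room, no amount given.
console.log(handleAdjustTemp(
  { kind: "adjustTemperature" },
  { room: "living room", currentTemp: 18 },
));
```

The point is not the code itself, but that the assistant fills in the missing pieces from context and answers with the information the user needs before committing to anything.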

So what about complex systems or tasks? A task as seemingly simple as adjusting a room’s temperature already turns out to be a lot more complex without seeing certain information, and the more complex a task becomes, the bigger the challenges get. Systems would require a certain amount of intelligence and/or machine learning in order to fulfil such demands. It has been predicted that the job of personal assistant will be near obsolete within 20 years, and I think this could indeed be the case.

Designing and developing such assistant interfaces calls for a decent understanding of UX patterns. These techniques are not all new: blind people have been relying on faceless interfaces for decades. Accessibility solutions used to revolve around making regular content accessible, but accessible does not necessarily imply usable. The demand for these assistants from a wide audience might have the benefit of empowering people who have been relying on ‘assistantesque’ techniques for years.
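One of the oldest of those techniques is announcing changes aloud rather than showing them. A minimal sketch, using the standard ARIA live region mechanism that screen readers already understand (the element id is made up):

```typescript
// Announce a state change to screen reader users through an ARIA live
// region: a 'faceless' notification, no visuals required.
function announce(message: string): void {
  let region = document.getElementById("sr-announcer");
  if (!region) {
    region = document.createElement("div");
    region.id = "sr-announcer";
    region.setAttribute("role", "status");
    region.setAttribute("aria-live", "polite"); // read when the user is idle
    // Visually hidden, but still exposed to assistive technology.
    region.style.position = "absolute";
    region.style.width = "1px";
    region.style.height = "1px";
    region.style.overflow = "hidden";
    document.body.appendChild(region);
  }
  region.textContent = message;
}

announce("Temperature raised to 19.5 degrees.");
```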

Shifting towards audio-oriented interfaces is, of course, a disadvantage for people with a speech or hearing disability. Fortunately, we have a perfectly working infrastructure for that group of people: the very familiar web interfaces we see every day.

This seems a great opportunity to make the web equally accessible for all. I feel the term front end developer doesn’t cut it anymore: front end implies something visible, and that might not always be the case. I therefore opt for the term interaction developer. Interaction goes further than developing a visible shell: it means developing the means of interacting with a system, be it a climate control system, a scheduler or a pizza ordering service. I would define ‘interaction’ as ‘the activities a user perceives as needing to be completed in order to perform a certain task in the most efficient way’.

I think it is time we not only focused on the visible part of development, but expanded our horizon to incorporate a broader range of clients: supporting mouse, keyboard, touch and voice in our solutions, as the sketch below illustrates. That is what the web is about: bringing accessible services to all.
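As a closing sketch, here is what routing several of those modalities into one action could look like in the browser. The element id is made up, and Web Speech API support still varies per browser, hence the guarded, prefixed constructor lookup:

```typescript
// One action, several input modalities. On a native <button>, the 'click'
// event already unifies mouse, touch and keyboard (Enter/Space) activation.
function raiseTemperature(): void {
  console.log("Raising temperature…");
}

const button = document.querySelector<HTMLButtonElement>("#warmer");
button?.addEventListener("click", raiseTemperature);

// Voice, where the browser offers the Web Speech API.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.continuous = true;
  recognition.onresult = (event: any) => {
    const latest = event.results[event.results.length - 1];
    if (/warmer/i.test(latest[0].transcript)) raiseTemperature();
  };
  recognition.start();
}
```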
