When you’re a designer of conversational AI, you’re constantly anticipating. What will the human ask of my virtual being? And based on the answer they’re given, what turn will the conversation take? What might not be immediately clear in an answer, and what might the AI need to explain? When (and it is a when, not an if) the human says something ambiguous, what is the best way to deal with that? And how does context, both what can be gleaned from the conversation in real time and what is afforded by the domain, affect all of the above?
I’ve been designing conversational AI for more than 10 years, for all kinds of organisations with all kinds of clients and audiences. I’ve always kept the audience in mind when it comes to the AI’s character design, its domain of knowledge and how it will answer questions. But it’s in designing for people with disability over the last year that I’ve learnt how narrow my audience consideration was. Listening to people with disability talk about what they’re interested in, how they process information and what they need made me realise I hadn’t been anticipating enough. In general. Not just for people with disability, but for every audience who interacts with an AI that I design.
It’s not only about imagining a conversation and anticipating all the ways it could play out in terms of content and what’s possible within the domain. It’s about bringing into that context who the human being interacting with your system is and how they move through the world. Are they better at visual processing or auditory processing? How will they react to a proactive AI that asks them how they are and tries to get to know them? Will it feel intrusive, or will it be welcomed? Is reinforcement and repetition of information useful or annoying? In a step-by-step process, how many steps are optimal? When is it good to take delight in words, language and playfulness, and when is it better to simplify language and concepts in order to reduce cognitive load? And how do you allow the human to make their own choices about these questions?
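To make that last question concrete: one option is to expose these dimensions as explicit, user-controlled settings rather than designer assumptions. The sketch below is purely illustrative (the names, fields and chunking logic are my own, not from any particular platform or from the research discussed next), but it shows the shape of the idea: the human sets the dials, and the AI’s turns are shaped by them.

```python
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    TEXT = "text"      # favour written/visual output
    SPEECH = "speech"  # favour spoken output


@dataclass
class InteractionPreferences:
    """Choices the human makes, rather than the designer assuming them."""
    preferred_modality: Modality = Modality.TEXT
    allow_proactive_checkins: bool = True   # may the AI open with "how are you?"
    repeat_key_information: bool = False    # reinforce important points at the end
    max_steps_per_turn: int = 3             # how much of a process to give at once
    plain_language: bool = False            # simplify wording to reduce cognitive load


def present_steps(steps, prefs):
    """Chunk a step-by-step process according to the user's own settings."""
    chunk = max(1, prefs.max_steps_per_turn)
    turns = []
    for i in range(0, len(steps), chunk):
        turn = " ".join(steps[i:i + chunk])
        # Optionally recap the whole process on the final turn.
        if prefs.repeat_key_information and i + chunk >= len(steps):
            turn += " To recap: " + " ".join(steps)
        turns.append(turn)
    return turns


if __name__ == "__main__":
    # A user who wants one step at a time, with a recap at the end.
    prefs = InteractionPreferences(max_steps_per_turn=1, repeat_key_information=True)
    steps = ["Open the app.", "Select 'Accounts'.", "Choose 'Transfer'."]
    for turn in present_steps(steps, prefs):
        print(turn)
```

Run with one step per turn and a recap at the end, the same three-step process reads very differently than it would for a user who asked for everything at once; the point isn’t this particular code, it’s that the choice sits with the human.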
In 2020, Lister et al. published “Accessible Conversational User Interfaces: Considerations for Design”, a collation of the current accessibility guidance relevant to designing conversational AI for people with disability. They consider a wide range of disabilities, including mental health issues, cognitive disabilities and sensory and physical impairments, and they categorise the guidance according to disability type. From that, they develop a list of key considerations for designing accessible conversational AI. Some of their questions are very similar to mine above. They finish by mapping out a conversational system they are developing and how they plan to answer some of these questions.
The answers will be different for everybody and will be affected by the strengths and limitations of the platform you’re working on. But it comes back once again to anticipating. Anticipating not just the conversational flow, but who the humans interacting with your system might be and what they will want and need.