BROWNHOME: A PERSONAL ASSISTANT DEVICE
researching user-centric conversational interfaces with Google Home
creating A MORE UNDERSTANDING VOICE INTERFACE
Voice-activated smart home devices currently leave a great deal of potential untapped, particularly in facilitating everyday life in an accessible and socio-locally contextual manner. Members of the Brown community share a set of campus-specific needs and wants, but no single gateway exists for addressing them all in one place. To this end, our team (Dan Wang, Ting Xia, Beverly Tai) designed and developed BrownHome, an Action within Google Assistant for facilitating student life.
The product of extensive research into existing APIs and multi-faceted user studies of the natural language commonly used with university resources, BrownHome lets students verbally request dining information, shuttle arrival times, laundry machine availability, and on-campus events through a single integrated gateway.
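To make the "integrated gateway" idea concrete, here is a minimal sketch of how a single fulfillment layer might route recognized intents to each campus service. All intent names, handler functions, and response strings below are hypothetical illustrations, not the team's actual implementation:

```python
# Hypothetical sketch: one gateway dispatching recognized intents
# to the four campus services BrownHome covers.

def dining_handler(params: dict) -> str:
    hall = params.get("dining_hall", "the nearest dining hall")
    return f"Here's today's menu at {hall}."

def shuttle_handler(params: dict) -> str:
    stop = params.get("stop", "your nearest stop")
    return f"Checking the next shuttle arrival at {stop}."

def laundry_handler(params: dict) -> str:
    room = params.get("laundry_room", "your building")
    return f"Checking washer and dryer availability in {room}."

def events_handler(params: dict) -> str:
    return "Here are some upcoming events on campus."

# A single table maps each intent to its campus data source.
INTENT_HANDLERS = {
    "dining.menu": dining_handler,
    "shuttle.arrival": shuttle_handler,
    "laundry.availability": laundry_handler,
    "events.upcoming": events_handler,
}

def fulfill(intent: str, params: dict) -> str:
    """Route one recognized intent to its handler, with a fallback."""
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(params)
```

In a real Action, the intent matching and parameter extraction would be handled by the Assistant platform; the point here is only that one dispatch table gives students a single entry point for all four services.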
I was responsible for designing and conducting our user research interviews in order to better design natural conversations with the device.
PRELIMINARY USER RESEARCH
We conducted an initial survey of 43 Brown students: 39 undergraduates and 4 graduate students.
In our survey, we assessed students' basic conversation patterns and sentence phrasings, posing a variety of questions and prompts about how respondents would interact with such devices.
Alternating between asking respondents how they would phrase questions and giving them prompts/commands helped us differentiate conversational speech patterns from action tasks (e.g., asking, “What's DPS' phone number?” versus typing “DPS phone number” into a search engine).
This allowed us to identify similar speech patterns across our respondents' answers. We noticed that answers tended to be task-based (“Brown University DPS number,” “What's happening on campus this weekend?”) rather than conversational (“Hey Google Home, I have some free time this weekend, and I was wondering if you knew of anything happening.”).
In all, we gleaned several insights from our survey, such as the distinction between linguistic ways of addressing the Google Home (task-based vs. conversational responses). However, in proceeding with our user research interviews, we wanted to minimize the risk of biasing respondents' answers, since those answers might mirror some of the language used in our survey. Moving forward, we included more open-ended questions in our interviews.
We conducted preliminary user research interviews to observe how users interact while communicating with the Google Home. We interviewed a total of nine demographically diverse respondents, each with a varying level of experience with smart home and personal assistant technology.
Our goal in these interviews was to observe any differences between physical and digital interactions with the Google Home.
In order to minimize framing bias, we first asked our interviewees to imagine their own use-cases, or scenarios in which they would need to communicate with a Google Home.
We instructed each participant to communicate directly with the Google Home to execute one of their previously mentioned use-cases. After the user's initial interaction, we asked them to rephrase their question as though the device had misheard or not understood the request.
Respondents' answers shared some similarities: many commented that looking up dining hall menus was often a confusing task (no single portal currently exists for comparing food options across the dining halls), while others noted the device's potential for accessibility and hands-free communication.
CONCEPTUAL USER MODELING
Through our research, it became clear that we needed to optimize the conversational UX flow for all users, because manually looking up information currently feels more natural than interacting with the Google Home. This is challenging because people speak differently than they write. Written communication currently lends itself better to digital information retrieval, while spoken imprecision is not only tolerated but expected among speakers who share a social context.
Our solution to this problem is socio-local contextuality. Beyond simply knowing geographic locations, smart devices need to understand where those locations fit into the context of human society (and vice versa).
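As a toy illustration of socio-local contextuality, the sketch below attaches social context (colloquial student nicknames, the kind of place a location is) to campus locations so a spoken query can be resolved with more than raw geography. The location records and aliases here are illustrative assumptions, not BrownHome's actual data model; "the Ratty" is the common student nickname for Brown's Sharpe Refectory:

```python
# Illustrative only: campus locations annotated with the social
# context students actually use when speaking about them.
CAMPUS_CONTEXT = {
    "sharpe_refectory": {
        "kind": "dining hall",
        "formal_name": "Sharpe Refectory",
        "student_aliases": ["the ratty", "ratty"],
    },
    "college_green": {
        "kind": "outdoor gathering space",
        "formal_name": "The College Green",
        "student_aliases": ["the main green", "main green"],
    },
}

def resolve_place(utterance: str):
    """Match a colloquial spoken phrase to a campus location record."""
    lowered = utterance.lower()
    for record in CAMPUS_CONTEXT.values():
        if any(alias in lowered for alias in record["student_aliases"]):
            return record
    return None
```

The design point is that a purely geographic lookup would never connect "the Ratty" to a dining hall; the social alias layer is what makes the spoken query resolvable.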