User Researcher
Natural Language Processing for Analytics

Using natural language to control filters and groupings
Context and Goals
One of the goals of Einstein Analytics was to democratize data and make it accessible to end users who don't have the technical skills that analysts often do. This ambition is still unrealized, as computational power hasn't yet brought us to a future of easy-to-use analytics. One step in that direction, however, is natural language querying (NLQ), which makes it easier for end users to explore their data.
The goal of the research was to understand end user expectations and gauge the learnability of the feature. It was also an opportunity to give the documentation team feedback on how helpful their copy text was.
In-person observation and testing with internal employees
We originally thought end users would want it in the dashboard
Breaking down each aspect of a query into a formula
Persona
The End User, to make analysis easier for them, and possibly the Analyst, as analysts are also active users of the product.
Method
Initially we only had static screens for concept feedback, so we used that opportunity to gauge expectations rather than test usability. After getting a working prototype, we did remote user testing and in-person observation with customers and internal employees.


Insights
What we found from the concept validation was that users either had very high expectations or very low ones. Participants familiar with natural language processing (NLP) were skeptical that a conversational tool would actually work. Participants who didn't have experience with NLP features had full faith in the product and expected something out of Star Trek.
Despite these differences, both groups wanted the tool to understand their questions (e.g. “How am I doing?”) without explicitly providing context (e.g. who they are, what they do, and what metrics they care about). Participants also didn't know what to ask: they were unfamiliar with the data, and even when they did come up with a question, the syntax was rarely perfect and the query would fail.
This frustrated participants. With no error handling, they assumed the feature was either broken or that their question was incorrect.
The solution we came up with was adding suggestions to the text field drop-down, adding error handling, and providing in-context documentation with more examples for the user. I also explained the limitations of the product beforehand to temper participants' expectations.
Levels of Help
One of the new questions that arose was: how much information do users need to understand how to use the product? A short description? The error handling and suggestions that appear while using it? The full documentation? To answer this, I provided participants hints after a certain number of failed questions, across four different levels of help. In the end we found that the release notes led to the most successful questions, but a rather bland experience.

Testing how much help was needed before a question succeeded
Creating a "Formula"
For the last iteration of testing, I created a formula that breaks down the different parts of a successful question. When I provided this guidance to users while they built their query, task completion improved markedly.
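As a rough illustration of what such a formula might look like, the sketch below assembles a question from its component parts. The slot names (aggregation, measure, grouping, condition) and example values are hypothetical, chosen to show the idea of a fill-in-the-blanks structure; they are not the product's actual query grammar.

```python
# Hypothetical sketch: a successful NLQ question tends to contain an
# aggregation, a measure, and optional grouping and filter parts.
# Slot names and example values are illustrative only.

def build_question(aggregation, measure, grouping=None, condition=None):
    """Assemble a question from its parts, dropping empty slots."""
    parts = [aggregation, measure]
    if grouping:
        parts += ["by", grouping]
    if condition:
        parts += ["where", condition]
    return " ".join(parts)

print(build_question("total", "revenue", grouping="region", condition="year = 2019"))
# "total revenue by region where year = 2019"
```

Handing users a template like this, rather than a blank text box, gives them a concrete pattern to imitate even before they know what data is available.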

It may seem obvious that this would work, but it shows that while the technology is still catching up, we have to be honest with our users. We need to provide guidance and error handling so they know what to do to be successful. User experience heuristics, like transparency, can fill the gaps the technology leaves and help avoid user frustration.