UX Strat - Boulder, CO 2019

by Jon Fukuda


Our team recently had the opportunity to attend the UX STRAT 2019 conference in Boulder, Colorado. This conference brings together design leaders, strategists, user researchers, design-focused data scientists, and experienced design professionals to learn about the latest trends at the intersection of business strategy, user experience, product design, and service design.

The conference was loaded with three days of workshops and presentations from some of the best in the industry, including experts from Google, Spotify, Mozilla, Airbnb, Instagram, Uber, and more.


UX Strat 2019 Day 1: Workshops

The first day of the conference focused on dedicated UX strategy workshops. In the morning, there were three concurrent sessions:

  • Google/PayPal: Building a Product Strategy with the LEGO® SERIOUS PLAY® Methodology
  • 7-Eleven: Design Thinking for Strategic Alignment
  • Informatica: AI Design for Enterprise Products | Hands-on experience for UX designers

I chose the AI session to learn how new trends in machine learning, AI, and natural language interfaces are changing the dynamics of human-computer interaction – and what strategies need to come into play when designing for this new paradigm.  The major areas of strategic consideration in AI design are:

Informatica's Ruth Tamari: AI Design Principles

TRUST: Leading AI design strategists have come to understand that humans are more likely to forgive each other than to forgive machines.  Trust is therefore the critical factor in breaking down the boundaries between humans and AI-driven interfaces. Key ideas shared in the workshop are that trust is dynamic and must be managed, but also that mistrust is dynamic and must be managed just as actively.  If machines are making just-in-time recommendations to humans, they need to strive for algorithmic transparency – exposing how they arrive at their recommendations, their degree of confidence, their sources of information, and their decision matrices. People trust other people when they have shared experiences.  One way AI interfaces can generate a sense of shared experience is to follow users’ actions and to train and learn alongside the user.
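
To make this concrete, here is a minimal TypeScript sketch of what algorithmic transparency could look like in an interface. The `TransparentRecommendation` shape and its field names are my own illustration of the idea, not something presented in the workshop.

```typescript
// Hypothetical shape for a "transparent" recommendation payload: the
// system surfaces not just the suggestion, but how it arrived at it.
interface TransparentRecommendation {
  suggestion: string; // the just-in-time recommendation itself
  confidence: number; // model confidence, 0..1
  sources: string[];  // information sources behind the suggestion
  rationale: string;  // plain-language summary of the decision logic
}

// Render the recommendation alongside its supporting evidence so users
// can calibrate how much to trust it.
function renderRecommendation(rec: TransparentRecommendation): string {
  const pct = Math.round(rec.confidence * 100);
  return `${rec.suggestion} (confidence: ${pct}%; based on: ` +
    `${rec.sources.join(", ")}; why: ${rec.rationale})`;
}

const rec: TransparentRecommendation = {
  suggestion: "Schedule a follow-up visit in two weeks",
  confidence: 0.82,
  sources: ["visit history", "care guidelines"],
  rationale: "similar cases improved with an early follow-up",
};
console.log(renderRecommendation(rec));
```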

CLARITY: Humans seek explanations to satisfy certain purposes or goals. Clear explanations enrich users’ mental models, which in turn enhances performance as well as trust in AI.  Here is where we learned about xAI, or explainable artificial intelligence. How does it work? What does it achieve? What will it do next? What will it do if “X” is different? These are all questions that play into defining and clearly explaining the mechanisms and intent of the AI application.  Using local and focused explanations in clear and concise statements will help.
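
As one way to picture local, focused explanations, here is a hypothetical TypeScript sketch that answers the four xAI questions for a single toy decision. The loan scenario, the `LocalExplanation` shape, and the threshold are assumptions made for the example, not workshop material.

```typescript
// Hypothetical sketch: a local explanation that answers the four xAI
// questions for one specific prediction. Field names are illustrative.
interface LocalExplanation {
  howItWorks: string;       // How does it work?
  whatItAchieves: string;   // What does it achieve?
  whatItWillDoNext: string; // What will it do next?
  counterfactual: string;   // What will it do if "X" is different?
}

function explainLoanDecision(income: number, threshold = 50_000): LocalExplanation {
  const approved = income >= threshold;
  return {
    howItWorks: "Compares reported income against an approval threshold.",
    whatItAchieves: approved
      ? "Pre-approves the application."
      : "Flags the application for manual review.",
    whatItWillDoNext: approved
      ? "Forwards the file to underwriting."
      : "Requests supporting documents.",
    counterfactual: approved
      ? `Would flag for review if income fell below $${threshold.toLocaleString()}.`
      : `Would pre-approve if income reached $${threshold.toLocaleString()}.`,
  };
}

console.log(explainLoanDecision(42_000));
```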

SIMPLICITY: Humans are constantly bombarded with big, bold, noisy, attention-grabbing data.  Effective AI systems meet users’ needs while co-existing and corresponding with an ecosystem of products competing for attention.  A dynamic of “signal blindness” can occur if the interface is always alerting: when users are over-alerted, they tend to mute the signal.  As Leonardo da Vinci put it, “simplicity is the ultimate form of sophistication.”
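
One common guard against signal blindness is to throttle repeated, low-priority alerts so that only meaningful signals get through. The sketch below is a minimal TypeScript illustration of that idea; the `AlertThrottle` class and its quiet-window parameter are hypothetical, not from the session.

```typescript
// Hypothetical sketch: mute repeats of the same low-priority alert inside
// a quiet window so the interface is not "always alerting".
type Priority = "low" | "high";

class AlertThrottle {
  private lastShown = new Map<string, number>();

  shouldShow(key: string, priority: Priority,
             now = Date.now(), quietMs = 60_000): boolean {
    if (priority === "high") return true; // urgent signals always get through
    const last = this.lastShown.get(key) ?? 0;
    if (now - last < quietMs) return false; // still inside the quiet window
    this.lastShown.set(key, now);
    return true;
  }
}

const alerts = new AlertThrottle();
console.log(alerts.shouldShow("sync-delayed", "low"));  // true: first occurrence
console.log(alerts.shouldShow("sync-delayed", "low"));  // false: muted repeat
console.log(alerts.shouldShow("data-loss", "high"));    // true: high priority
```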

CONTROL: Humans are typically most comfortable when in control. “Black box” systems steer users away from their comfort zone and into unplanned interactions, confusing pathways, and unpredictable outcomes.  Examples of providing user control include undo/redo, allowing editing, turning features on or off, and asking for feedback. Always design the happy path, but equally design for what can happen when the user fails.
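
Undo/redo, the first of those control affordances, is classically implemented with two stacks. Here is a minimal, hypothetical TypeScript sketch of that pattern:

```typescript
// Classic two-stack undo/redo: past states on one stack, undone states
// on the other. A new edit invalidates the redo branch.
class UndoHistory<T> {
  private undoStack: T[] = [];
  private redoStack: T[] = [];

  constructor(private current: T) {}

  apply(next: T): void {
    this.undoStack.push(this.current);
    this.current = next;
    this.redoStack = []; // editing after an undo discards the redo states
  }

  undo(): T {
    const prev = this.undoStack.pop();
    if (prev !== undefined) {
      this.redoStack.push(this.current);
      this.current = prev;
    }
    return this.current;
  }

  redo(): T {
    const next = this.redoStack.pop();
    if (next !== undefined) {
      this.undoStack.push(this.current);
      this.current = next;
    }
    return this.current;
  }
}

const doc = new UndoHistory("draft 1");
doc.apply("draft 2");
console.log(doc.undo()); // "draft 1"
console.log(doc.redo()); // "draft 2"
```

Making the failure path (undo) as cheap as the happy path is what lets users explore without fear of unpredictable outcomes.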

HUMANIZE: Humans generally develop good rapport on the basis of common ground.  To enhance user engagement, design interactions that are as close to human behavior as possible.  In conversational design, this means putting the main data point at the end of a sentence. Efficiency is no longer enough; look for ways to add enjoyment and delight to the user experience.  Humans operate in the behavioral dynamics of micro-interactions, so it’s the details that make the experience more human, particularly when it comes to system feedback.
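
As a small illustration of that end-focus guideline, here is a hypothetical TypeScript formatter that places the main data point at the end of the spoken sentence; the weather scenario is an assumption made for the example.

```typescript
// End-focus in conversational design: lead with context, close with the
// key figure the user actually asked for.
function endFocusResponse(city: string, tempF: number): string {
  return `Right now in ${city}, the temperature is ${tempF}°F.`;
}

// A front-loaded alternative ("68°F is the current temperature in
// Boulder.") delivers the same data but feels less natural when spoken.
console.log(endFocusResponse("Boulder", 68));
```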

We put all of our lessons in AI strategy to the test, breaking into small teams to generate AI models that solve real-world problems.  One team proposed using AI to help with tax preparation and filing. Another team proposed wearables, sensors, and machine learning to improve pet-to-human communication.  My team worked on an AI application to facilitate better patient-doctor communication before, during, and after consultations, evaluations, diagnosis, treatment, outpatient instruction, and health management.  There were several other ideas, all equally fascinating and novel applications of the principles of trust, clarity, simplicity, control, and humanization.

UX Strat 2019 afternoon workshops included:

  • Google: Rapid Research Lab Framework: UX Insights and Testing With Consistency and Speed
  • Capital One: Laying a Foundation for Effective Design Teams
  • VMware: Conversational UX Design for Artificial Intelligence

My focus on growing Limina’s design team drove me to the effective design team session.  A leading point in this session was that performance does not come from more money and more talent – it comes from a cultural foundation that enables teams to do their best work.  Some key expressions of design culture are:

  • Allow teams to take informed risks
  • Allow teams to make honest mistakes and learn from them
  • Encourage creativity and innovation

Referencing the NY Times article “What Google Learned From Its Quest to Build the Perfect Team,” the session highlighted that making space for vulnerability, mixed with a level of sincerity and enabled by:

  • psychological safety
  • good leadership
  • group emotional intelligence

… will lead to higher trust, participation, cooperation, collaboration, better decisions and increased creativity.  

The workshop component then broke the participants into teams based on our default communication styles and operational preferences.

  • North: Drivers – Get it done
  • West: Analytical – Discovering details, validating, testing
  • South: Empaths – Making sure everyone is comfortable
  • East: Strategists – See the big picture, not concerned with the details

This segmentation lesson drove home an increased awareness of operational preferences, allowing for greater empathy.  Each preference lends both strengths and weaknesses. When appropriately distributed on a team, a higher diversity of preferences can collectively achieve greater results.  Awareness of these strengths and weaknesses can lead to the optimization of the team as a collective – highlighting each member’s strengths while offsetting and minimizing the impact of their weaknesses on others.

Listen to Aaron Irizarry and Paul Bryan’s UX Strat Podcast. More UX Strat Podcasts at: https://uxstrat.com/podcasts/

Read Part 2: UX Strat Industry Leader Presentations