How designers can lead the way in the effort to create AI for humans.
Illustration by Andrea Grigsby

“Either you can be part of the change, or the world can be designed without you and then you have to fit into that change.”
— Brian Chesky, interviewed by DeZeen
The rapid rise of Artificial Intelligence (AI) comes with ethical and societal concerns. These issues, ranging from biases in datasets to the potential for mass job displacement, feel increasingly inevitable as AI becomes more integrated into our daily lives. But hope is not lost. Human-centered AI — AI that is fair, transparent, and beneficial — is possible when designed with intention.
As UX professionals, we are uniquely positioned to create AI systems that complement and enhance human abilities.
A quick note on AI literacy
To influence AI product development, designers need a foundational understanding of AI. We don’t need PhDs in machine learning, but we must be informed practitioners who know how it works and what it can do.
Learning about AI is like learning to invest. At first it seems incredibly daunting — wtf is an ‘ETF’? Yet, as the abbreviations and processes become familiar, it becomes demystified. That said, there’s still a difference between setting up recurring investments in a Roth IRA and being a stockbroker at JP Morgan (or so I’m told). Just as you don’t need to be a stockbroker to invest wisely, you don’t need to be a computer scientist to create AI-powered tools effectively.
AI, broadly, refers to a computer’s ability to mimic human thought, while machine learning (ML), a subset of AI, allows systems to learn from data.
Unlike traditional programming, where every function is explicitly coded, ML models learn patterns from training datasets. Once trained, these models generate outputs based on inputs, functioning as a “black box” where the process is quite opaque.
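To make that contrast concrete, here is a deliberately toy sketch in Python. In the first function the rule is written by hand; in the second, the “rule” is derived from labeled examples and then applied to new inputs. The data, thresholds, and function names are all invented for illustration — real models are vastly more complex, which is exactly why their inner workings feel opaque.

```python
# Toy contrast: explicit programming vs. learning from data.
# All data and logic below are made up for illustration.

# Traditional programming: a human writes the rule directly.
def is_fraud_explicit(amount):
    return amount > 1000  # a human chose this threshold

# Machine learning (radically simplified): the "rule" is learned
# from labeled examples, then applied to new inputs.
def train_threshold(examples):
    """Learn the smallest amount labeled fraudulent in the training set."""
    fraud_amounts = [amt for amt, label in examples if label == "fraud"]
    return min(fraud_amounts)

training_data = [(120, "ok"), (950, "ok"), (1800, "fraud"), (2400, "fraud")]
learned_threshold = train_threshold(training_data)

def is_fraud_learned(amount):
    return amount >= learned_threshold

print(is_fraud_learned(2000))  # True — but *why* is opaque to the end user
```

Even in this toy version, the learned rule is only as good as its training data — a preview of why dataset bias matters.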
Illustration by Andrea Grigsby of the black box nature of AI
Human interaction occurs at these input and output stages. Take three different uses of AI for example:
Fraud Detection (ML): Input = transaction data, Output = fraud determination.
Health Screening (ML, Computer Vision): Input = image, Output = analysis or measurement of health.
Conversational AI (ML, Generative AI, Natural Language Processing): Input = text prompt, Output = generated response.
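One way to internalize the pattern: whatever the underlying model, each AI feature reduces to an input → output contract that designers shape at both ends. The stubs below sketch that framing — the function names, fields, and stand-in logic are invented for illustration, and a real system would call a trained model where the comments indicate.

```python
# Hypothetical stubs: each AI feature as an input → output contract.
# All names and logic are illustrative stand-ins for a model call.

def detect_fraud(transaction: dict) -> bool:
    # Input = transaction data, Output = fraud determination
    return transaction.get("amount", 0) > 1000  # stand-in for a trained model

def screen_health(image_pixels: list) -> dict:
    # Input = image, Output = analysis or measurement of health
    return {"finding": "none", "pixels_analyzed": len(image_pixels)}

def generate_reply(prompt: str) -> str:
    # Input = text prompt, Output = generated response
    return f"[generated response to: {prompt}]"
```

Thinking in these contracts is useful because the input and output ends are precisely where design work happens.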
There are many online resources that can help designers grasp these fundamentals, adding a vital tool to our problem-solving toolkit.
Design-led AI
In ideal circumstances, designers are involved at the ground floor of AI development, starting at the discovery phase. This is where we evaluate whether AI is even necessary for our product to meet its intended goals.
This is our first point of impact: evaluating AI with the same scrutiny as any other tool, focusing on user needs rather than the flashiness of the technology.
Here are some helpful questions to keep AI solutions grounded in user needs:
Is AI the right solution for the identified problem?
How might the model be integrated into existing user workflows?
Can existing workflows provide the necessary data for a model in a timely manner?
How will model outputs need to be presented for them to be an effective solution?
This is why a basic understanding of how AI works matters: it helps us make informed judgments about whether AI is truly a suitable solution.
Ultimately, engaging early ensures AI solutions adapt to users — not the other way around.
Basic AI literacy also allows designers to collaborate effectively with cross-functional team members like ML engineers. Early discussions of risks and ethical considerations also create an opportunity to build evaluations into the product development cycle.
Some of you might be thinking “that’s nice, but way too idealistic — design gets brought in after other stakeholders have already decided to use AI”. What then?
Design-guarded AI
When AI solutions are predetermined, designers can still influence outcomes by focusing on inputs and outputs, the two areas where users interact directly with AI.
Illustration by Andrea Grigsby
Input design
Inputs depend on the type of AI. Regardless of format, the input collection method must be intuitive and user-friendly. Applying established design principles, such as affordances, can make user actions clear and effortless.
Good input design makes the input actions incredibly straightforward to understand. Let’s revisit the three examples:
Fraud prevention inputs: Automate data collection where possible to streamline workflows and minimize user effort.
Health screener inputs: Provide clear instructions for capturing the right images and offer real-time feedback when adjustments are needed.
Illustration by Andrea Grigsby of a workflow using a camera feed to capture the input for an ML product.
GenAI inputs: Provide a specific set of possible commands to help users understand what can be done with the product. As product designer Maggie Appleton put it, “we should make tiny, sharp, specific tools with models.” This helps reduce the paralysis of the blank-canvas text input field by equipping users with options to choose from.
Screenshot comparison of the AI function in Notion: old version (left), current version (right, as of Nov. 2024)
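The “tiny, sharp, specific tools” idea can be sketched as a fixed command palette that wraps the blank prompt field. The command names and instruction strings below are invented for illustration; a real product would tune them through the kind of user research described next.

```python
# Sketch of a constrained GenAI input: a predefined command palette
# instead of a blank text field. Commands are hypothetical examples.

COMMANDS = {
    "summarize": "Summarize the selected text in two sentences.",
    "fix_grammar": "Correct grammar without changing the meaning.",
    "brainstorm": "Suggest five alternative headlines for this text.",
}

def build_prompt(command: str, user_text: str) -> str:
    """Combine a predefined command with the user's selected text."""
    if command not in COMMANDS:
        raise ValueError(f"Unknown command: {command!r}")
    return f"{COMMANDS[command]}\n\n{user_text}"
```

The design choice here is to trade open-ended flexibility for discoverability: users pick from actions they can see rather than guessing what the model can do.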
User research provides insights on workflows, pain points, and expectations. By leveraging these learnings, we can design streamlined experiences, automating any repetitive tasks to save users time and energy. Moreover, frequent testing in tight cycles of iteration uncovers friction points early, allowing for continuous refinement and the creation of truly user-friendly human-AI interactions.
Output design
Similar to designing inputs, we do not have to reinvent the wheel when it comes to the design methodologies we employ in designing outputs. Human-computer interaction considerations are translatable to human-AI interactions.
The most important consideration in designing AI outputs is highlighting the fallibility of AI to reduce automation bias. As of right now, AI is far from the arbiter of truth it can so easily seem to be.
Illustration by Andrea Grigsby of the fallibility of AI.
The key question is how to communicate potential AI flaws to users. The exact design will depend on the context, but here are some avenues to consider for the three examples:
Fraud detection outputs: Focus on delivering critical notifications only, avoiding overwhelming users with unnecessary alerts. This prevents alert fatigue, i.e. when people become desensitized after being exposed to an overwhelming number of alerts.
Health screener outputs: Display confidence scores or uncertainty metrics. For example, “80% confident” draws attention to the model’s potential limitations, encouraging users to apply their expertise.
GenAI outputs: Include references or citations to allow users to verify information. Avoid presenting outputs as definitive.
Positioning AI as a tool, not an authority, empowers users to make informed decisions.
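A confidence score on its own is just a number; part of the design work is translating it into hedged, user-facing copy so the output reads as a suggestion rather than a verdict. The thresholds, qualifiers, and wording below are illustrative assumptions, not a recommendation for any specific product.

```python
# Minimal sketch: turning a raw model confidence into hedged copy.
# Thresholds and phrasing are invented for illustration.

def present_screening_result(finding: str, confidence: float) -> str:
    pct = round(confidence * 100)
    if confidence >= 0.9:
        qualifier = "likely"
    elif confidence >= 0.6:
        qualifier = "possibly"
    else:
        qualifier = "uncertain — please review"
    return f"{finding} ({qualifier}, {pct}% confident)"

print(present_screening_result("Elevated heart rate", 0.8))
# Elevated heart rate (possibly, 80% confident)
```

In a real health context these bands and phrasings would need validation with clinicians and users; the point is only that uncertainty deserves deliberate copywriting, not a raw decimal.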
Finally, think systemically: our interactions with AI tools don’t end at the point of the output. What next actions do users need to take? How might we craft solutions that facilitate a supported workflow from start to finish?
Another quick note on ethical considerations (that really should be a longer discussion)
No discussion of human-centered AI is complete without addressing ethics. Designers must champion transparency, fairness, and inclusivity.
Questions of bias, data privacy, and unintended consequences should be raised early and revisited often throughout the product lifecycle. Building time into the development process for evaluation isn’t a luxury — it’s a necessity.
These will be tough conversations, as they often run directly against potential company profits, but that makes them all the more necessary.
Kat Zhou’s <Design Ethically> framework is a great starting point for evaluating the intention of products, forecasting potential outcomes and monitoring any issues that may arise.
Kat Zhou’s <Design Ethically> framework — a redesigned design thinking process that includes ethical considerations. New steps: Evaluate, Forecast, Ship, Monitor.
Final thoughts
As Brian Chesky mentioned in the same interview, “The best chance is for the most creative people, the humanistic people [to be] in charge, participate in what appears to be an inevitable revolution.”
UX leaders should aim to be decision-makers in when and how AI is employed. If that’s not possible today, start by crafting human-centered AI designs and developing a long-term product vision. Build influence so that, in the future, you can pull up a seat at the proverbial table and guide decisions from the start.
AI won’t solve our problems on its own. We must actively shape it to meet real human needs in a fair and responsible way. Let’s lead the charge.
Hope is not lost for human-centered AI was originally published in UX Collective on Medium.