Mar 4, 2025

AI transparency in UX: Designing clear AI interactions


Users need more than a sparkle icon and chat-bot to designate embedded AI.

As AI is integrated more and more throughout web and app experiences, it’s critical to distinguish where AI has been implemented from where it has not.

Initially, most products introduced AI as a chat-bot where users initiated and facilitated their interaction with AI. Now, products are merging AI into dashboards, tasks, and search functions. Users are no longer initiating their experience with AI–it’s pre-existing.

Since users no longer control when they trigger AI, they need to be made aware when they’re shown AI features or content so they can judge its validity and quality. Not only that, the European Union AI Act (applicable in 2026) will require that users be informed when they communicate or interact with an AI system.

This is where design systems come in–implementing specialized visual treatment to consistently separate AI content and features from non-AI content and features.

Google’s Material design system documentation

Unfortunately, only a few open-source design systems have explicit AI components and patterns today. I’m hoping more will be incorporated soon, but so far, only GitLab’s Pajamas, IBM’s Carbon, and Twilio’s Paste acknowledge AI in their guidelines.

Note: I use Design Systems for Figma to benchmark AI components and patterns. I also did not include design systems that only include documentation for AI chat-bots or conversation design since it’s a more standard interaction pattern; this includes Amazon’s Cloudscape and Salesforce’s Lightning.

Let’s compare and contrast these design system AI components and patterns and see where they can be optimized for better usability.


1. GitLab’s Pajamas

Pajamas currently doesn’t include explicit components or patterns, but it does include interesting documentation about AI-human interactions. The documentation first recommends determining whether AI will actually benefit the user by identifying when it’s ethical and beneficial to automate (i.e., high-risk vs. low-risk tasks).

Next, it recommends being transparent about where AI is used. Pajamas does this with its “GitLab Duo” indicator, which marks AI features and communicates their capabilities and limitations.

GitLab Duo is used to indicate where the user can interact with AI in the interface

Since the “GitLab Duo” indicator marks AI features and interactions (not AI-generated content), Pajamas also recommends flagging AI-generated content with “<Verb> by AI” (e.g., “Summarized by AI”), along with a message encouraging users to verify the content.

GitLab is also working on a framework to put these guidelines into practice; it’s currently in progress, but the general work can be viewed in GitLab’s AI UX Patterns. Their goal is to release an AI pattern library with documentation, which is just what we need (pleaseee!).

GitLab’s vision for their AI UX patterns is split into 4 dimensions to help select the right AI pattern: Mode, Approach, Interactivity, and Task.

Mode: The emphasis of the AI-human interaction (focused, supportive, or integrated)
Approach: What the AI is improving (automate or augment tasks)
Interactivity: How the AI engages with users (proactive or reactive)
Task: What the AI system can help the user with (classification, generation, or prediction)
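As a rough sketch, the four dimensions could be encoded as a small type model to profile a given AI pattern (the type names and example below are my own, not GitLab’s):

```typescript
// Hypothetical encoding of GitLab's four AI-pattern dimensions.
type Mode = "focused" | "supportive" | "integrated";
type Approach = "automate" | "augment";
type Interactivity = "proactive" | "reactive";
type Task = "classification" | "generation" | "prediction";

interface AIPatternProfile {
  mode: Mode;
  approach: Approach;
  interactivity: Interactivity;
  task: Task;
}

// Example: a chart surfacing AI-predicted data points would be an
// integrated, augmenting, proactive prediction pattern.
const predictedDataChart: AIPatternProfile = {
  mode: "integrated",
  approach: "augment",
  interactivity: "proactive",
  task: "prediction",
};

// Turns a profile into a one-line description for documentation.
function describeProfile(p: AIPatternProfile): string {
  return `${p.mode} ${p.task} pattern that ${p.approach}s tasks ${p.interactivity}ly`;
}
```

Profiling a pattern along all four dimensions before designing it forces the team to decide up front how visible and how autonomous the AI should be.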

For example, their early explorations for AI patterns include low-fidelity mockups of how AI can be integrated in an interface with charts or inline explanations. The patterns clearly mark the usage of AI to help build user understanding and trust with the AI system.

Lo-fi integrated chart with markers indicating AI, such as predicted data (via GitLab’s Vision for AI UX)
Lo-fi integrated explainer to fill out a form with AI (via GitLab’s Vision for AI UX)

Verdict

Currently, GitLab’s documentation is conceptual and generalized, describing how they want the AI UX experience to work in the future. But it offers a solid framework that most design systems could adopt, no matter the industry or product.

I’m hopeful they release more in-depth information about their AI UX patterns soon. I think it could be a great open-source asset to other design systems developing their AI documentation.

2. IBM’s Carbon

Out of the open-source design systems, Carbon has the most robust documentation for AI usage. It includes an AI-dedicated section, “Carbon for AI,” which encompasses components, patterns, and guidelines to help users recognize AI-generated content and understand how AI is used in the product.

Carbon for AI builds on top of the existing Carbon components–adding a blue glow and gradient to highlight instances of AI. So far, there are 12 components with AI variants, such as a modal, data table, and text input.

Carbon for AI’s component list with specific AI variants

Though the AI variants of the components are given a distinct visual treatment, in context, it’s difficult to distinguish which component is currently active (because they all look active).

In the form below, AI was used to auto-fill most of the input fields, so these fields use the AI variants. The AI variants receive a blue gradient and border even in their default state, making it hard to visually identify which component is active.

The blue gradient and border used on AI-components makes it hard to tell which component is active

Users can override inputs made by AI, which swaps the AI variant for the default variant of the component. A “revert to AI input” action then replaces the AI label in the input field, allowing users to switch between manual and automated form responses.

Carbon’s “revert to AI input” function appears when users override AI-input
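The override/revert behavior boils down to a small piece of state per field. Here is a minimal sketch of that logic, assuming each field tracks its AI-suggested value separately from any user override (this models the behavior generically; it is not Carbon’s actual API):

```typescript
// Hypothetical model of the "revert to AI input" behavior: each field
// keeps the AI-suggested value and, once edited, the user's override.
interface AIField {
  aiValue: string;    // value originally filled in by AI
  userValue?: string; // present only after the user overrides the field
}

// Value shown in the input: the user's override wins over the AI suggestion.
function displayValue(field: AIField): string {
  return field.userValue ?? field.aiValue;
}

// True when the field should render the default variant with a
// "revert to AI input" action instead of the AI label.
function isOverridden(field: AIField): boolean {
  return field.userValue !== undefined;
}

// User edits the field: swap the AI variant for the default variant.
function override(field: AIField, value: string): AIField {
  return { ...field, userValue: value };
}

// "Revert to AI input": drop the override and restore the AI variant.
function revertToAI(field: AIField): AIField {
  return { aiValue: field.aiValue };
}
```

Keeping the AI value around after an override is what makes the revert action possible, so users never lose the automated suggestion by editing it.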

In addition to the AI variants, Carbon includes an explicit AI label that can display a popover explaining the details of AI usage in the particular scenario (Carbon calls this pattern “AI explainability”). When the user selects the AI label, the popover appears beneath the button.

Carbon’s AI label includes an explainer popover for the user to get more details on the usage of AI

Verdict

It’s exciting to see design system documentation on AI patterns and components that’s as well-developed as Carbon’s. Not only does Carbon document the general usage of AI, it actually provides components and patterns to use.

But since the AI variants make it difficult to distinguish which component is active when used in context, I think there are usability and accessibility issues. The AI variants draw too much attention with their color usage, and they also resemble Carbon’s focus state (which could impact low-vision users who rely on the focus state).

Carbon’s AI-variant vs. focus state of the text field

3. Twilio’s Paste

Lastly, Paste offers an “Artificial Intelligence” section under their “Experiences” section. Paste includes general documentation on using AI in user experiences, as well as a few components to use.

When designing AI features, Paste recommends allowing users to compare AI outcomes to their current experiences, as well as handle potential errors and risks. To mitigate these errors, Paste advocates for giving the user the ability to review and undo outputs, control data sources, and give feedback to the AI system.

Paste also suggests asking yourself, “How would I design this feature if it did the same thing but didn’t use AI?” when designing a new AI feature. Users don’t use products just so they can interact with AI–they’re trying to complete tasks and achieve goals as efficiently as possible.

Paste includes an AI UI kit with 5 components: artificial intelligence icon, badge, button, progress bar, and skeleton loader. It also includes components specific to their AI chat experience, such as the AI chat log.

What’s most helpful in Paste’s documentation is the examples they provide. This includes signposting, generative features, and the chat.

For signposting, Paste suggests using the decorative badge with the artificial intelligence icon to indicate a feature is using AI, such as AI recommendations or predictions. The signposting is non-interactive, but resembles a button, so it looks clickable.

Paste’s signposting example using a badge and AI icon
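One way to avoid that false affordance is to keep the semantics of a decorative signpost distinct from those of a real AI action. A minimal sketch of the distinction, assuming a generic helper rather than Paste’s actual API:

```typescript
// Hypothetical helper contrasting a decorative AI signpost with an
// interactive AI control. A non-interactive badge should not receive
// button semantics or keyboard focus, so assistive tech and keyboard
// users don't treat it as clickable.
interface ElementSemantics {
  tag: "span" | "button";
  tabIndex?: number;
  label: string;
}

function aiSignpost(interactive: boolean, label: string): ElementSemantics {
  if (interactive) {
    // A generative feature trigger is a real control: a focusable button.
    return { tag: "button", tabIndex: 0, label };
  }
  // Pure signposting is decorative: plain inline text, not in the tab order.
  return { tag: "span", label };
}
```

The visual design should reinforce the same split: the decorative badge shouldn’t borrow the button’s shape if it can’t be pressed.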

The generative feature gives users prompts to help them use the AI feature, such as “Summarize the data” or “Recommend the next step.” When you select the generative feature, a popover appears below with instructions for the user and the AI model being used.

Paste’s generative feature includes a button with a popover to instruct the user interacting with AI

Lastly, the chat is pretty typical of AI chat-bots known today, and includes references to their conversational principles to develop the AI’s personality.

Paste’s AI chat-bot with an empty state and prompts below the text field

Paste also has a loading pattern coming soon, but we’ll have to wait and see. This pattern will give users a way to control and anticipate the AI output, including stopping generation and adapting the loading state to how long the output will take.
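Assuming the pattern works roughly as described, the duration-adaptive part could look something like this sketch, which picks a loading treatment from the AI UI kit based on the expected wait (the thresholds and names are illustrative, not Paste’s):

```typescript
// Hypothetical adaptive AI loading pattern: choose the loading treatment
// based on the expected duration of the AI output, and always offer a
// stop control so users can cancel generation.
type LoadingTreatment = "skeleton" | "progress-bar" | "progress-bar-with-eta";

interface LoadingState {
  treatment: LoadingTreatment;
  stoppable: boolean; // AI output should always be cancellable
}

function loadingStateFor(expectedMs: number): LoadingState {
  if (expectedMs < 2000) {
    // Quick output: a lightweight skeleton placeholder is enough.
    return { treatment: "skeleton", stoppable: true };
  }
  if (expectedMs < 10000) {
    // Medium wait: show determinate progress.
    return { treatment: "progress-bar", stoppable: true };
  }
  // Long wait: add a time estimate alongside the progress bar.
  return { treatment: "progress-bar-with-eta", stoppable: true };
}
```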

Verdict

I’m happy to see a mixture of some documentation with real examples we can look at. Though one of the examples is a chat-bot, the other components in the AI UI kit demonstrate how to be transparent when showing AI-usage in an interface.

Paste is looking for feedback on their AI UI kit; they have an open GitHub discussion where you can submit requests.

It’s surprising how few design systems have publicly released documentation on components and patterns to address AI-driven content and features. For instance, both Google and Microsoft are leaders in the AI industry, but their open-source Material and Fluent design systems don’t include AI patterns.

Since these AI leaders are integrating AI into widely used products (like Gemini and Copilot), they’re establishing the mental models that users will bring to other products. Even Adobe’s Spectrum, which has integrated AI into many of Adobe’s products (such as Adobe Firefly), only has a short blurb acknowledging machine learning and AI when it comes to content and writing about people.

Maybe their AI patterns are still in development? Maybe they’re waiting to get it right?

Either way, it’s valuable and crucial to identify AI features and generated content to users, so they can better understand what’s being shown to them, as well as trust the product. I’m looking forward to more design system patterns that go beyond the sparkle icon and the chat-bot.

Stay tuned!

AI transparency in UX: Designing clear AI interactions was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
