Jun 24, 2024

Envisioning the next generation of AI-enhanced user experiences


Generative AI can help us create smarter interfaces that are more adaptive and helpful

Image generated with Adobe Firefly

Artificial intelligence (AI) has become integral to how we deliver digital experiences. From website recommendations to customer service chatbots, AI systems have used machine learning (ML) to anticipate user needs and improve engagement, setting new standards for digital interactions. The digital landscape is evolving at an unprecedented pace, and user expectations are rising with it.

For several years, our digital team at Red Hat has refined design patterns for data-driven experiences that leverage AI and ML systems for automated personalization. These experiences provide website visitors with tailored content and offers based on their online behavior and user profile data. If a user shows interest (or the system predicts interest) in a specific Red Hat product or use case, that product or use case is highlighted. This helps users find what they need and makes them feel recognized and valued.

Examples of design patterns focused on automated personalization and user profile-based experiences — via redhat.com

Before the rise of generative AI marked by ChatGPT’s launch in late 2022, our team discussions about AI focused on ML models that predict outcomes based on data and maintain a specific context. For example, models are trained on data from past user interactions to predict which predefined content should be served to new users at the right time, supporting precision marketing and customer engagement campaigns. Similarly, algorithmic chatbots have excelled at specific tasks within their programmed domains, but they follow predefined rules and patterns, operating on fixed decision trees or scripted responses.
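To make the contrast concrete, here is a minimal sketch (in TypeScript) of the kind of scripted, decision-tree chatbot described above. The topics, keywords, and replies are hypothetical.

```typescript
// A minimal rule-based chatbot: every reply comes from a fixed decision tree,
// not a generative model. All topics, keywords, and replies are hypothetical.
type DialogNode = {
  reply: string;
  followUps?: Record<string, DialogNode>; // keyed by keyword to match in user input
};

const tree: DialogNode = {
  reply: "Hi! Are you asking about billing or technical support?",
  followUps: {
    billing: { reply: "I can help with invoices. What is your account number?" },
    support: {
      reply: "Is the issue with installation or performance?",
      followUps: {
        installation: { reply: "Here is the installation guide: <link>." },
        performance: { reply: "Let's run a diagnostic and open a ticket." },
      },
    },
  },
};

// Advance through the tree by matching the user's input against predefined keywords.
function respond(current: DialogNode, input: string): DialogNode {
  const match = Object.keys(current.followUps ?? {}).find((keyword) =>
    input.toLowerCase().includes(keyword)
  );
  // Anything outside the scripted paths falls back to a canned re-prompt.
  return match
    ? current.followUps![match]
    : { reply: "Sorry, I didn't catch that. " + current.reply };
}
```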

How generative AI is changing the conversation

Generative AI is transforming how users interact with digital platforms in remarkable ways, from enhancing personalization with automated content creation to conversational AI systems that perform increasingly complex tasks not bound to a narrow context or single modality.

Generative AI is designed to create new data: it learns to generate new objects that resemble the data it was trained on. This shift to probabilistic systems introduces a new set of considerations for the UX design of AI-based services. Anish Acharya, a partner at Andreessen Horowitz, encourages us to organize the architecture of the experiences we build around their tolerance to probabilistic outputs, and even to identify areas that are uniquely enabled by, or benefit from, outputs that are varied and non-deterministic.

“A new computing architecture demands a new product architecture, and the probabilistic nature of LLMs and GenAI pipelines as a platform creates unique opportunities — and demands — that products and use cases be carefully considered in their tolerance to a dispersion of outputs. Product pickers and designers will do well to consider this carefully in designing experiences that leverage these new capabilities.”
Anish Acharya, Andreessen Horowitz

Using Generative AI to Unlock Probabilistic Products | Andreessen Horowitz

With probabilistic systems, designers must account for the variability and unpredictability of AI-generated content. User interfaces need to adapt to and handle a wide range of outputs seamlessly, stay aligned with brand standards and guidelines, and provide the transparency and controls that both users and brands need from AI services that create a dispersion of outputs.
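As one illustration of designing for that dispersion, the sketch below wraps a generative call in validation, a bounded retry, and a deterministic fallback, and flags each result so the UI can show an “AI-generated” disclaimer. The generateSummary callback and the brand rules are assumptions for the example, not a real API.

```typescript
// Guardrail layer around a probabilistic generator: validate, retry, fall back.
// `generateSummary` stands in for any LLM call; the brand rules are hypothetical.
type Generated = { text: string; aiGenerated: boolean };

const BANNED_PHRASES = ["guaranteed", "100% secure"]; // hypothetical brand guideline

function meetsBrandGuidelines(text: string): boolean {
  const lower = text.toLowerCase();
  return text.length <= 280 && !BANNED_PHRASES.some((phrase) => lower.includes(phrase));
}

async function safeGenerate(
  generateSummary: () => Promise<string>,
  fallback: string,
  maxAttempts = 3
): Promise<Generated> {
  // Outputs vary from run to run, so validate each attempt and retry a few times.
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const text = await generateSummary();
    if (meetsBrandGuidelines(text)) {
      // Mark the result so the interface can surface an "AI-generated" disclaimer.
      return { text, aiGenerated: true };
    }
  }
  // If nothing passes, serve pre-approved, deterministic content instead.
  return { text: fallback, aiGenerated: false };
}
```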

Examples of disclaimers in early, non-deterministic GenAI services — via Figma (top left), Microsoft Copilot (top right), and MongoDB (bottom)

As highlighted by MIT News, generative AI’s ability to create new data rather than just making predictions opens up innovative possibilities across various fields, from creative industries to complex problem-solving in technology and science. Large language models (LLMs), trained on extensive datasets, offer unprecedented capabilities for developing human-friendly interfaces and revolutionizing numerous disciplines.

“The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines.”
Devavrat Shah, MIT

Explained: Generative AI

The next generation of user interfaces

The future of UX lies in creating adaptive, intelligent interfaces that provide a tailored experience beyond the personalized content delivered by algorithmic recommendation engines.

A glimpse into this future can be seen in Google Gemini’s Bespoke UI experiment. This initiative is pioneering a conversational, generative experience in which users interact with custom interfaces in a dynamic, context-aware manner. It signifies a shift towards more intuitive and responsive interfaces that adapt to user inputs and surface functionality as needed.

Video: Google Gemini’s Bespoke UI demo

Gemini’s Bespoke UI demo shows an advanced AI service that understands user needs and delivers personalized, immediate responses, including contextually generated code for layouts and interactive elements. This type of on-demand interface is the next generation of digital experiences, where AI assistants will provide users with seamless multimedia responses and focused controls that feel natural and cohesive.
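A rough sketch of how such an on-demand interface loop might be wired: the model returns a declarative spec of interface components, and the client renders only the element types it recognizes. The component types and the askModelForUiSpec callback are assumptions for illustration, not Gemini’s actual API.

```typescript
// The model replies with a declarative spec; the client renders only known types.
// Component shapes and `askModelForUiSpec` are hypothetical, not a real API.
type UiComponent =
  | { type: "text"; value: string }
  | { type: "video"; src: string }
  | { type: "button"; label: string; action: string };

type UiSpec = { components: UiComponent[] };

const ALLOWED_TYPES = new Set(["text", "video", "button"]);

function renderSpec(spec: UiSpec): string {
  return spec.components
    // In practice the model's JSON is validated first; this filter is a last guard.
    .filter((component) => ALLOWED_TYPES.has(component.type))
    .map((component) => {
      switch (component.type) {
        case "text":
          return `<p>${component.value}</p>`;
        case "video":
          return `<video src="${component.src}" controls></video>`;
        case "button":
          return `<button data-action="${component.action}">${component.label}</button>`;
      }
    })
    .join("\n");
}

async function handleUserMessage(
  askModelForUiSpec: (message: string) => Promise<UiSpec>,
  message: string
): Promise<string> {
  // The model decides which components best answer this turn of the conversation.
  const spec = await askModelForUiSpec(message);
  return renderSpec(spec);
}
```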

Brian Fletcher, Global Chief Technology Officer at Huge, describes this shift as “Real-time UI” in his SXSW 2024 presentation, where advancements in GenAI and natural language processing (NLP) will bring true adaptivity of presentation, functionality, and content to user interfaces. Fletcher shares a vision of generative AI allowing us to move away from linear, carefully orchestrated user journeys and embrace open-ended experiences that are more human, intimate, and helpful. With the ability to support every user path and even create new ones in real time, generative AI will transform the foundation of user experience. However, it’s essential to experiment with this technology to fully harness its potential.

“Generative AI has far more to offer than generic text, image, and code generation. It has the potential to completely reinvent digital experiences. For it to pay off on its promise, however, designers and developers will have to undertake a fundamental mindset shift. They will need to experiment and break down our current mental models in order to invent the future.”
Brian Fletcher, Huge

Real-time UIs: The Future of Human-Computer Interactions

Unlike traditional static interfaces or algorithmic recommendation systems, imagine the possibilities with real-time UI:

In-Conversation Flow

Real-time UI allows for a fluid interaction flow in which only the necessary interface elements are brought into focus based on the current context of the user’s interaction. This reduces cognitive load and enhances user efficiency.

Example: In a customer support application, when a user starts typing a query, the interface can dynamically adjust to display an easy-to-scan layout of relevant FAQs, live chat options, and related documentation, without requiring the user to scroll through chat text or navigate elsewhere.
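A minimal sketch of that behavior, with a keyword match standing in for a real intent model; the topics and panel names are hypothetical.

```typescript
// As the user types, decide which support panels to bring into focus.
// Topics and panel names are hypothetical; a real system would use an intent model.
type Panel = "faq" | "liveChat" | "docs";

const TOPIC_PANELS: Record<string, Panel[]> = {
  billing: ["faq", "liveChat"],
  install: ["docs", "faq"],
  outage: ["liveChat"],
};

function panelsForDraft(draft: string): Panel[] {
  const text = draft.toLowerCase();
  const matched = Object.entries(TOPIC_PANELS)
    .filter(([topic]) => text.includes(topic))
    .flatMap(([, panels]) => panels);
  // Deduplicate; fall back to the FAQ panel until the query is specific enough.
  return matched.length > 0 ? [...new Set(matched)] : ["faq"];
}

// panelsForDraft("my install keeps failing") -> ["docs", "faq"]
```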

Contextual Adaptation

Interfaces can adapt to the user’s environment, device, and even emotional state, providing a more personalized and relevant experience.

Example: A chat-based learning guide that can change its interface based on whether the user is accessing it from a desktop in their office or a mobile device while traveling, offering the most pertinent features and multimedia content for each scenario.
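A small sketch of how that adaptation could be expressed as configuration; the context fields and feature choices are hypothetical.

```typescript
// Choose an interface configuration for the learning guide based on context.
// The context fields and feature choices are hypothetical.
type GuideContext = {
  device: "desktop" | "mobile";
  onTheMove: boolean;
  bandwidth: "high" | "low";
};

type GuideConfig = {
  layout: "split-pane" | "single-column";
  media: "video" | "audio" | "text";
  showTranscript: boolean;
};

function configureGuide(ctx: GuideContext): GuideConfig {
  if (ctx.device === "mobile" && ctx.onTheMove) {
    // Traveling on a phone: compact layout, audio (or text on a weak connection).
    return {
      layout: "single-column",
      media: ctx.bandwidth === "low" ? "text" : "audio",
      showTranscript: true,
    };
  }
  // Desktop in the office: richer media with side-by-side reference material.
  return { layout: "split-pane", media: "video", showTranscript: false };
}
```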

Proactive Assistance

Real-time UI can anticipate user needs and proactively offer assistance by displaying relevant functional components, reducing the effort required to complete tasks.

Example: An AI assistant that can detect when a user is frequently accessing specific reports and suggest creating a custom dashboard for quick access.
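One way that detection could work, sketched with a hypothetical usage-event shape and threshold.

```typescript
// Count how often each report was opened recently and suggest the frequent ones
// for a custom dashboard. The event shape and thresholds are hypothetical.
type ReportEvent = { reportId: string; openedAt: Date };

function suggestDashboardReports(
  events: ReportEvent[],
  withinDays = 7,
  minOpens = 5
): string[] {
  const cutoff = Date.now() - withinDays * 24 * 60 * 60 * 1000;
  const counts = new Map<string, number>();
  for (const event of events) {
    if (event.openedAt.getTime() >= cutoff) {
      counts.set(event.reportId, (counts.get(event.reportId) ?? 0) + 1);
    }
  }
  // Any report opened at least `minOpens` times in the window triggers a suggestion.
  return [...counts.entries()]
    .filter(([, opens]) => opens >= minOpens)
    .map(([reportId]) => reportId);
}
```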

Embracing GenAI in Design teams

Real-time UI represents the kind of paradigm shift generative AI will bring to UX Design. But a staged approach to leveraging generative AI in Design teams will be needed, starting with improved efficiency in how we build the digital experiences we already deliver today. Initially, generative AI-powered tools can streamline design workflows and enhance team productivity. These first use cases will lay a strong foundation for generative AI standards, governance, and team proficiency.

Design teams can evolve by developing AI use cases and establishing feedback loops, each targeting different facets of AI integration and skill development. Here are two starting points:

1. Enhancing Design Team Efficiency

Objective: Improve the efficiency and productivity of the Design team through AI-powered tools and processes.

Key Actions:

Adopt AI Tools: Integrate AI-powered design tools that automate repetitive tasks, such as graphics generation, user research or UX hypothesis analysis, and quality testing.
Expand Enablement: Integrate generative AI capabilities into tools and workflows that empower roles outside of the Design team to contribute to experience design and experience management in new ways.
Training and Upskilling: Provide training sessions on using AI tools effectively. Encourage team members to explore AI-enhanced design techniques and workflows.

Skills and Approach:

Skills: Familiarity with AI tools, model types and training, data analysis, and a basic understanding of machine learning concepts.
Approach: Emphasize efficiency, productivity gains, and speed to market. Encourage experimentation with AI-enhanced design processes to streamline workflows.

2. Exploring End-User Services that Incorporate GenAI

Objective: Investigate and develop end-user services that leverage GenAI to create more personalized and adaptive experiences.

Key Actions:

AI Service Prototyping: Design and prototype new services that utilize GenAI to enhance user interaction, personalization, and adaptability.
User-Centric Design: Conduct user testing and gather feedback to refine these services, ensuring they meet user needs and preferences.
Governance and Observability: Establish robust governance frameworks and observability systems to monitor generative AI performance, ensuring alignment to intended use, brand standards, and compliance.
AI Experience Integration: Integrate GenAI capabilities into existing digital user flows to improve their responsiveness and relevance.

Skills and Approach:

Skills: Proficiency in AI service design, experience with AI integration, and strong user testing methodologies. Additionally, knowledge of governance and observability practices for AI systems.
Approach: Emphasize user-centricity and innovation. Focus on creating services that adapt to user needs in real time, enhancing overall user satisfaction and engagement. Ensure AI systems are transparent, ethical, and accountable by implementing comprehensive governance and observability frameworks.

AI model design and training

What’s truly exciting is the transformative digital experiences we can invent as we develop confidence in AI systems’ ability to directly build adaptive interfaces for end users. This will require generative models that are trusted to anticipate user needs, provide proactive assistance, and create a seamless, intuitive interaction flow. This transition will not only enhance current capabilities but also pave the way for a future that delivers unparalleled user satisfaction.

Knowing that AI models will be central to how digital experiences will be delivered in the future, I firmly believe that design teams must be involved in the model training process. Our insights into user behavior, preferences, and pain points are crucial for creating AI models that truly understand and meet user needs. This involvement ensures that the models are not only technically proficient but also aligned with human-centric design principles.

Custom model training will need to become a new loop in the design lifecycle. Just as we iterate on design prototypes based on user feedback, we must work with data teams and AI tools as Model Designers — continually refining AI models to better understand and predict user behavior and design principles. This iterative process will allow us to create more responsive and intuitive interfaces, ultimately leading to a more satisfying user experience. By integrating custom model training into our design team workflow, we can ensure that AI-driven solutions are innovative, consistent with holistic experience and brand design strategies, and user-centric.

AI-enabled design lifecycle — diagram by Rob Chappell

Moreover, the design systems we produce are already a form of computational design, serving as a separate design cycle that standardizes and streamlines the experience delivery process. They provide a structured framework of reusable components, guidelines, and patterns that ensure consistency and efficiency across different projects and platforms. Similarly, AI model training and management will be added as another enablement cycle to the experience delivery process.

AI models might be combined with our design systems, enhancing their capability to enforce consistency. This integrated approach can ensure that every aspect of the user experience is optimized, from the foundational design elements to the intelligent, responsive interfaces powered by AI.

The importance of AI literacy

John Maeda, VP Head of Computational Design and AI at Microsoft, highlights in his Design Against AI 2024 report the necessity for designers to understand the fundamentals of computational design and prepare for AI-induced shifts in design careers. Maeda also discusses the rapid transformation of work due to AI, noting the “sprint not a marathon” mentality that professionals must have to continually adapt in order to stay relevant in this evolving landscape. Additionally, he encourages critical evaluation of AI — especially concerning fairness and inclusivity — as well as Design’s essential role in demonstrating responsible AI practices to customers of branded AI services.

“AI is pretty hard to understand. You can be deep in it and discover it changed, like yesterday. Anyone who calls himself an AI expert, be a little skeptical because it’s hard to be an expert in something that changes everyday.”
John Maeda, Microsoft

Maeda’s Design in Tech reports provide a thought-provoking perspective on how design impacts technology and vice versa. His openness to share insights, trends, and patterns — informed by a multi-decade career working in AI and Design — is inspiring and lifts the design community toward progress.

The pace of technological advancement in AI will require us to revisit and adapt our design principles constantly. By strategically evolving our approach through stages of AI use cases and increased AI literacy, we can lead the way in creating adaptive, intelligent interfaces that meet the sophisticated needs of our users. Generative AI presents a significant opportunity. It’s a lot to take in, but it’s also an exciting chance to shape the future of user experience.

This article was originally published in UX Collective on Medium.
