
The expanded scope and blurring boundaries of AI-powered design

Written by John Moriarty

An exploration of what the rapid integration of generative AI means for how we design and develop software.

In March 2023, I wrote an article exploring the initial impact of ChatGPT on the design industry. At that time, we were just beginning to grasp the potential of generative AI technologies. Now, reflecting on the changes we’ve seen, it’s clear that the landscape for designing and developing software has shifted seismically. I’m struck by how dramatically our conversations have evolved in such a short span of time. Tools like ChatGPT and AI-powered product features have become embedded in everyday workflows, making products smarter and enabling us to achieve more.

We’ve witnessed an unprecedented flurry of AI announcements, integrations, and pivots from tech giants and startups alike. Microsoft has bet big on OpenAI; Google scrambled to launch Bard and stumbled again with Gemini; and Meta launched its own proprietary and open-source models. Apple also launched its own take, rebranding AI along the way, with some notable differences.

Examples of recent challenges companies have faced when launching AI features.

Amid this AI gold rush, many of us find ourselves grappling with questions that have only grown more complex over time: How will these technologies reshape our roles? What new skills do we need to develop? And perhaps most pressingly, where is all of this actually going?

In this follow-up article, I revisit some of my earlier assessments, explore the current state of AI in design and software development, examine some telling parallels with other technological revolutions, and offer an updated perspective on how we, as designers working in product development, can navigate this rapidly evolving landscape.

More profound than electricity or fire?

Soon after ChatGPT launched, there were some very bold claims about the impact LLMs would have. Sundar Pichai (Google’s CEO) claimed it would "be more profound than electricity or fire", while others have, more recently, poured cold water on the idea. Goldman Sachs has questioned the economic viability of generative AI, pointing out that there is "little to show for" the roughly $1 trillion of spend so far. So are we any clearer about which it is? There are some parallels we can study to attempt to understand where we are right now.

One of the most important things to humanity…more profound than electricity or fire.
 — Sundar Pichai

The history of predicting when full self-driving (FSD) cars would be ready provides an interesting reference point for understanding where we are with generative AI today. Despite significant advancements, achieving full autonomy remains challenging, mirroring the current state of generative AI — promising but still facing substantial hurdles.

By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware.
Elon Musk, 2019

It’s not that self-driving cars didn’t materialise; you can go to San Francisco today and see Waymo FSD taxis on the roads. However, optimizing for one environment and generalising for an entire region or country are two very different problems. Achieving 90% FSD is somewhat clear and attainable, but closing the final gap is where the real complexity and hard work lie. Beyond the technical challenges of ensuring FSD works reliably on various types of roads and accommodates the myriad edge cases of human behavior, there are significant regulatory hurdles that compound the complexity and hinder progress.

Waymo self-driving cars are disabled by protesters with traffic cones that confused their sensors. Image is a screengrab from TikTok / Safe Street Rebel

So what does this have to do with generative AI? Over the last 18 months, much like the recent wave of self-driving car innovation, companies have poured huge budgets and resources into generative AI. Many teams were tasked with creating POCs (proofs of concept) to illustrate what integrating LLMs and other generative AI models might mean for their products, services, or industries. People in boardrooms got excited by demos, and some companies rushed results to market, only to fall flat. We are now witnessing what Aidan Gomez calls the "POC death cycle", where companies struggle to transition from experimentation to deploying models in production.

There have been several high-profile examples that highlight this challenge. Most recently, we saw Figma backpedalling after launching a new (since pulled) AI feature called ‘Make Designs’, a prompt-driven interface for creating new screen designs, which generated a weather app with uncanny similarities to Apple’s own. Google has also continued to face trouble with its overhaul of search, which advised users to put glue on pizza or eat rocks. These incidents demonstrate that, much like self-driving technology, moving from experimentation to production is very difficult, even for companies with significant resources and top talent.

Comparison of Apple’s weather app alongside a generic app created by Figma’s new feature, which has since been pulled. Image courtesy of Andy Allen.

Post-peak generative AI

So where do we go from here? Well, according to Gartner’s August 2023 assessment, we are now at, or already past, the peak of inflated expectations for generative AI, where people are excited and optimistic about future applications and see few of the downsides. On the far side of this peak, however, lies the ‘trough of disillusionment’. This phase often sees a decline in enthusiasm and a more realistic understanding of the technology’s limitations. It will perhaps be accelerated by the false starts mentioned earlier. However, it’s also a crucial period where more practical and sustainable uses of generative AI begin to emerge.

2023 Hype Cycle for Emerging Technologies, Gartner.

Some companies are already taking a different approach to how they adopt AI technologies. Apple has very intentionally avoided using the term, with their recent keynote focusing instead on ‘Apple Intelligence’. Their offerings were also telling: instead of peppering features across the experience, they took a very measured approach, focusing only on areas where they believe it adds significant value to the user experience, and dialing it back where there is greater risk, such as with image generation. This careful integration highlights a strategic shift towards enhancing functionality without overhyping the technology. They are also integrating ChatGPT across the platform for certain queries, meaning that if things go wrong, the error sits with that third party rather than with Apple.

Apple’s Genmoji and Image Playground features create images on demand but limit them to intentionally cartoonish outputs, managing expectations and the potential for misuse in the process. Image credit Apple.

This measured approach is indicative of the next chapter, where AI transitions from being the central feature to becoming more seamlessly embedded within experiences. Instead of AI being a headline-grabbing feature, it will be a powerful background component that enhances user interactions and productivity. This shift towards subtle, integrated AI reflects a maturing understanding of the technology’s strengths and limitations, ultimately leading to more reliable and user-friendly applications.

Considering the impact generative AI will have on how we design and develop software, it’s clear that this technology is here to stay in some form. Teams and individuals who aren’t yet experimenting with how these technologies will disrupt their products, services, or workflows should start doing so immediately. The shift is actually already underway if you know where to look.

Supernormal AI

In product design, the concept of “supernormal” refers to creating products that feel immediately familiar and comfortable to users, despite being new or innovative. This approach emphasizes subtlety and refinement rather than overt novelty. The goal is to design objects that blend seamlessly into everyday life, offering a sense of reliability and timelessness. This concept was championed by designers like Naoto Fukasawa and Jasper Morrison in the early 2000s.

Supernormal exhibition, Axis Gallery Tokyo 2006. Image copyright Naoto Fukasawa Design.

This idea of “supernormal” design provides a useful metaphor for how AI experiences might evolve. As AI features become more commonplace and users grow familiar with how they work, the need to highlight them will decrease. For example, the sparkle icon has become a defining marker of AI functionality within products. Initially, signaling AI functionality to users was necessary, but these features will evolve to become more intuitively useful and naturally integrated into users’ daily routines, enhancing the experience without drawing attention to themselves. Jordan Singer, who leads AI design at Figma, discussed this integration:

The sparkle icon has become ubiquitous for AI, we had many debates early on. I said we shouldn’t use the sparkle at all because AI should feel really deeply integrated. But I think we want to ease our way into people learning about our new capabilities, making sure it is recognizable.
Jordan Singer

An example button (credit Edoardo Mercat) with the now ubiquitous sparkle icon that indicates AI functionality.

It’s worth remembering that there was a time when Apple had to explain how swipe to unlock and pinch to zoom worked. Now, these features are so commonplace that it seems strange they ever needed explanation. Similarly, as we learn these new AI affordances, the obvious indicators may start to disappear as AI features become more deeply embedded into product experiences.


With their latest AI features, Apple is already providing a glimpse at what this future may look like — where AI is a deeply integrated background capability that enables enhanced versions of existing experiences.

Apple has defined the table stakes for what an AI-powered device should be able to do. Some of the new Apple Intelligence features don’t even feel like AI, they just feel like smarter tools.
Sarah Perez, TechCrunch

Designing the (AI) system

AI is fundamentally changing how we build software, augmenting and evolving our capabilities. This is already happening in many areas, as we can see with the success of GitHub’s Copilot, for example, which has quickly become integrated into developer workflows. However, AI adoption in product design has been slower, partly due to the complexity of visual, spatial canvases compared to code, which is better suited to text-based LLMs. There are indications of how design will change, however.

One example already referenced is Figma’s ‘Make Designs’ feature. Despite the problems it faced at launch, it provides a useful glimpse at how product design could evolve. It uses an off-the-shelf LLM in combination with advanced system prompts that include custom design systems to generate a first-draft design.

We feed metadata from these hand-crafted components and examples into the context window of the model along with the prompt the user enters describing their design goals. The model then effectively assembles a subset of these components, inspired by the examples, into fully parameterized designs.
Noah Levin
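To make this concrete, here is a minimal sketch of what feeding design system metadata into an LLM’s context window might look like. The component names, props, and output schema below are invented for illustration and are not Figma’s actual implementation; any chat-completion style model could sit behind it.

```ts
// Hypothetical sketch: feeding design system metadata into an LLM's context window.
// Component names, props, and the output schema are invented for illustration.

interface ComponentMeta {
  name: string;
  description: string;
  props: Record<string, string>; // prop name -> allowed values or type
}

const designSystem: ComponentMeta[] = [
  { name: "Button", description: "Primary call to action", props: { variant: "primary | secondary", label: "string" } },
  { name: "Card", description: "Container for grouped content", props: { elevation: "0 | 1 | 2", padding: "sm | md | lg" } },
  { name: "TextField", description: "Single-line text input", props: { label: "string", placeholder: "string" } },
];

// The system prompt constrains the model to assemble only known components,
// mirroring the "metadata plus examples in the context window" approach described above.
const systemPrompt = [
  "You are a UI assembly assistant.",
  "Only use components from this design system, with valid prop values:",
  JSON.stringify(designSystem, null, 2),
  "Respond with JSON: { screen: string, children: [{ component: string, props: object }] }",
].join("\n\n");

// The user's description of their design goal becomes the user message.
const messages = [
  { role: "system", content: systemPrompt },
  { role: "user", content: "A settings screen with a name field and a save button" },
];

// These messages could be sent to any chat-completion style endpoint;
// the response would then be parsed and mapped back onto real components.
console.log(JSON.stringify(messages, null, 2));
```

The key idea is that the design system, not the model, defines the vocabulary of the output: the model can only arrange components the system already knows about.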

While the demo focused on generating screens for a new app from scratch, this is a less common problem for design teams, particularly those in-house or anyone working with their own design systems. In fact, the real value lies in enabling designers to integrate this feature with their own design systems to generate first drafts aligned with their brand or existing applications.

For product designers this could have a profound impact. Instead of spending our time building wireframes, we may increasingly focus our effort on building the underlying systems, ensuring that the foundations for generative AI systems are robust and adhere to codified best practices and accessibility standards.


In this scenario, the work of designers shifts to ensuring that the systems are aligned with the product principles, values, and strategic goals of the organization. By creating a solid design framework, designers will enable AI tools to generate consistent and high-quality outputs that reflect the brand’s identity and user experience standards, for use by a wider range of roles across organisations.

There could be a similar impact for other design roles. UX researchers will also see their roles augmented by AI tools and methodologies. While their core mission of understanding user needs and behaviors will remain, AI will enhance their ability to gather, analyze, and interpret data. By integrating AI into their workflows, small teams will be able to cover more ground, at greater depth, than was possible before.

UX writers may also shift their focus towards the underlying systems — in their case, developing style guides and vocabularies rather than crafting specific UI text. These guides will serve as the foundation for AI tools that generate user interface content, ensuring consistency in tone, terminology, and style across all AI-generated outputs. This approach is already available with tools like Frontitude, which allow UX writers to maintain control over the brand voice while leveraging AI to handle large volumes of repetitive content generation tasks.

Frontitude, an AI writing assistant for design teams.
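As a rough illustration of that shift, here is a minimal sketch of a codified style guide driving AI-generated UI copy. The guide structure, field names, and rules are hypothetical and are not Frontitude’s API or data model; they simply show how a writer’s codified voice could constrain a general-purpose model.

```ts
// Hypothetical sketch: a codified style guide driving AI-generated UI copy.
// The guide structure and fields are invented; this is not Frontitude's API or data model.

interface VoiceGuide {
  tone: string[];
  terminology: Record<string, string>; // preferred term -> terms to avoid
  rules: string[];
}

const guide: VoiceGuide = {
  tone: ["clear", "friendly", "concise"],
  terminology: { "sign in": "log in, login", "workspace": "project, board" },
  rules: [
    "Use sentence case for buttons and headings",
    "Avoid exclamation marks",
    "Keep error messages under 90 characters",
  ],
};

// The guide is compiled into a system prompt so every generated string
// follows the same tone, terminology, and rules.
const systemPrompt = [
  "You write UI copy for a software product.",
  `Tone: ${guide.tone.join(", ")}.`,
  "Preferred terminology (use the key, never the listed alternatives): " +
    JSON.stringify(guide.terminology),
  ...guide.rules,
].join("\n");

// A request for specific strings; the writer reviews and edits the output
// rather than drafting every variant by hand.
const request = "Write the empty-state heading and body copy for the workspace list.";

console.log(systemPrompt + "\n---\n" + request);
```

In this setup the writer’s leverage comes from maintaining the guide; the generated strings are drafts to review rather than final copy.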

In each of these examples, the role of design becomes more strategic, enabling designers to have a greater impact. It does also beg the obvious question: will we need as many designers as we do today? Design roles, as we currently define them, will probably decrease over time. Design has always been in a state of flux, however, and my optimistic view is that our roles will evolve to meet new requirements. The boundaries and definitions that we apply to what we do today won’t be the same tomorrow.

Expanding scope and blurring roles

AI significantly broadens the influence of product development roles, including design, UX research, product management, and engineering. It enables us to do more within our existing roles and take on tasks previously outside our scope. This expansion blurs existing boundaries, introducing both challenges and opportunities.

For example, product designers might start writing product requirements documents (PRDs) and strategy documents, traditionally the domain of product managers (PMs). Tools like ChatPRD assist in creating well-structured PRDs quickly, allowing designers to contribute to product strategy and improve team collaboration.

Similarly, AI-powered tools like Uizard and Canva enable non-designers to create UX flows and UI prototypes. This helps PMs draft initial designs, facilitating early alignment and deeper understanding, and ultimately accelerating the design process. If anyone feels threatened by such tools, it’s probably a good indication that they need to diversify their skillset. If one thing is clear, it’s that generative AI is only going to continue to displace such tasks, with increasing levels of disruption as models get smarter and more capable.

Magic Design from Canva, enabling non-designers to quickly and easily create compelling designs.

In any case, much of the work needed to ship high-quality software lies in the soft skills: setting a vision, bringing people on a journey, influencing and collaborating across disciplines to get our ideas into production. Much of this doesn’t change significantly with AI; in fact, these skills will become more important than ever. As Lenny Rachitsky notes, people excel at “people stuff,” such as aligning stakeholders and creating amazing experiences.

What are people best at? People stuff! Aligning opinionated stakeholders, unblocking blockers, pushing teams to work harder, creating amazing experiences, getting buy-in on big ideas, understanding and acting on nuance, etc. …these soft skills are where AI won’t take over for a long while, and thus they are the skills you should be cultivating more than ever.
 — Lenny Rachitsky

There are also new tools emerging that show the potential for the role of software engineers to be disrupted too. Devin is, according to its creator’s website, a “tireless, skilled teammate, equally ready to build alongside you or independently complete tasks for you to review”. While it’s pitched at enabling engineers to focus on more interesting problems, it also opens the door for people with a less technical background to write software.


Once again, this doesn’t necessarily mean that the role of engineers will be displaced, but rather that the boundaries between roles will become more and more blurred over time. If a designer can mock up a front-end page to illustrate the design intent (powered by a company’s design system, for example), it can only be a good thing in my opinion. It doesn’t mean that designers will be responsible for putting that work into production; as we have seen before, crossing this chasm is not straightforward.

These examples point to an emerging frontier within design that has gained a lot of attention recently: that of the design engineer. People like Jordan Singer (mentioned above), Rasmus Andersson and Julius Tarng point to what future blended roles might look like. They are generalists who span the worlds of design and software. For those who occupy this space, AI tools can turbocharge what they can accomplish. I believe that this points to how product design in particular may evolve.

As we navigate this new landscape, it’s important to remember that while our roles may evolve, the core principles of design (understanding people and solving their problems) remain unchanged. AI will let us do more than ever before, and our adaptability, curiosity, and optimism will be key to thriving as the field evolves. I’m reminded of this quote and reframing of AI by James Buckhouse, which sums up the challenge and opportunity well for me.

AI will not mean the death of artists, intellectuals, or anyone else. Instead, it will mean our rebirth, but only if we make it so. Here’s how: we must stop thinking of AI as Artificial Intelligence, and instead think of it as Augmented Imagination.
James Buckhouse

John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners build, govern, and operate predictive and generative AI models. Before this, he worked at Accenture, HMH and Design Partners.

The expanded scope and blurring boundaries of AI-powered design was originally published in UX Collective on Medium.
