
Fire, ready, aim


The frantic race to shoehorn half-baked AI features into enterprise products is a cynical subversion of foundational UX principles. Your customers may never forgive you.

The other day, I noticed that the logo for a business communications tool a client makes available to me has changed. Dialpad, which bundles a variety of VoIP, video, and text messaging tools, had previously boasted a simple, clean logo that pointed to what it actually does: a pair of speech bubbles. Now, a revamped logo has dispensed with that quaint contrivance for what somebody there must believe is an upgrade: instead of speech bubbles, we just get the letters A and I in what looks a lot like the NASA font on a gradient neon-purple background. The Dialpad logo is now just AI. Full stop. Everything they want you to know and remember about Dialpad as a brand is captured in that little square badge. Forget their communications solutions. Forget their brand strategy and legacy. Forget, certainly, what their loyal customers have come to Dialpad for. Nothing matters, apparently, except that the company has hitched its wagon to the fizziest business buzzword of the moment, even as the fizz begins rapidly to fizzle.

The recent evolution of the Dialpad logo

If we can separate the current hyperbole around AI from the hands-on reality, set aside legitimate concerns over what may be ultimately unsustainable energy requirements, assume that the profound challenges large language models have with bias and with accurately representing fact-based reality are eventually solved, and assume that the myriad destructive and malicious uses to which they're being put (to say nothing of how they're being trained) are somehow mitigated, then AI may eventually prove a net benefit. It's hard to say just yet.

But what is certain is that many of the ways enterprises have rushed to bolt AI onto their products and services, never mind their brand images, reveal precious little consideration of any actual UX strategy and trample foundational principles of user-centered product design and, often, well-established heuristics of usability. It's also a good bet that the haste to react to still-unproven market hype and to satisfy impatient investors and restless creative directors will degrade the overall user experience in the short term, making the thoughtful adoption of AI by the general population in the long term less likely, not more.

There's no doubt many casual users of these AI tools have found compelling uses for them, largely through experimentation. That's good and expected. It's through this individual, iterative process of exploration and play that we'll eventually land on the optimal division of labor in human-computer interaction: what we can confidently delegate to AI, and what we as humans must do ourselves.

In the meantime, however, the two-year-long rollout (if it can be called such, generously) of generative AI’s most popular tools to date, large language models, presents a textbook study in how not to think about technology in relation to problem solving and, especially, human beings. What we’ve witnessed, from otherwise design-forward companies who surely know better, is an embarrassing self-own in which technology has been allowed to lead the innovation process instead of being a tool in service of that effort. A global survey from Mind the Product recently found that only 15% of product leaders in North America and Europe report their users are embracing new AI features. Their conclusion? Business and product teams must work harder to drive the adoption of AI features. This is feature-driven, cart-before-the-horse design at its worst.

Their haste to react to still-unproven market hype and satisfy impatient investors and restless creative directors is likely to make the thoughtful adoption of AI by the general population less likely, not more.

Design, fundamentally, is the craft of human-centered problem solving. Every decent designer knows that we cannot begin with a solution and devise a problem for it to solve (though one might reasonably respond that this is precisely what advertising is for). Creating a useful, usable, and delightful solution must first start with understanding exactly what problem we are solving. Only then can we create a fit-to-purpose solution.

This is as true of businesses as it is of products. The overwhelming majority of new businesses, big and small, fail not because they don't have an interesting product, or a compelling technology, or a strong marketing strategy, or responsive support, or ample funding, but because their product does not solve a clear problem for someone. Too many otherwise intelligent businesspeople start with the product, or the "tech," and hope that marketing and scale will do the rest — an almost certain path to irrelevance. We cannot lead with the solution; as Steve Jobs famously observed: "You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try and sell it."

Throwing the tech du jour at a problem — or worse, and still more common, going all in on a new technology (blockchain, crypto, autonomous vehicles, brain-computer interfaces, the metaverse and XR, etc.) without a clear understanding of what problem it solves — is a recipe for throwing away a great deal of time and money, frustrating your customers, undermining your organization's mission and brand, blindly driving your business off a cliff, or all of the above.

As Steve Jobs famously observed: “You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.”

We've been here before. The first dot-com internet bubble was a perfect storm of just such errors. Entrepreneurs of every stripe developed shiny new business ideas based on the nascent World Wide Web and hotfooted it to their nearest venture capitalist. The low interest rates of 1998 ensured plentiful cheap money, and investors fell over themselves in their eagerness to fund almost any new business with a URL and a .com after its name, in many cases without a second look at the business model or the value proposition at its center. The inevitable result: the dot-com bubble burst, U.S. stocks lost some $5 trillion in market capitalization, and by October 2002 the NASDAQ-100 had dropped 78% from its March 2000 peak.

Or consider the 2013 launch of Google Glass, the "smart" headwear absolutely nobody needed. Engineers at Google X, smitten with the technological possibilities and designing mainly with themselves in mind, were blindsided (pun fully intended) by the negative public response to their toy: in public, it isolated wearers inside a private bubble of heads-up data; it allowed users to surreptitiously photograph and record anyone they looked at; it raised the specter of face recognition being used to identify even strangers; it immersed wearers in a constant stream of distracting visual notifications and data. Wearers were almost instantly derided as "glassholes," and myriad public establishments around San Francisco sprouted signs prohibiting the use of Google Glass on the premises. In the product's several-year development process, nobody at Google seems ever to have paused to ask, "What human problem are we solving?"

Google Glass in 2013: an engineering- and technology-led product design in search of a customer problem.

You'd imagine that Big Tech might have learned a lesson or two from these boondoggles. But no. See: Meta's metaverse fixation and the Apple Vision Pro. See also: driverless cars (how much new, innovative, affordable, and accessible public transportation might the gazillions sunk into this pointless pipe dream have bought us?).

An important caveat here is that it’s easy to rationalize every new product and service as solving a business problem (We need more users! We need more money! We need more growth!); but the business problem is always secondary to the people problem that the business exists to solve. (If a successful business does not exist to solve a people problem, I’d argue it’s probably not a business per se but something more akin to a social parasite; see hedge funds or Trump Steaks).

That’s not to say that businesses must operate as charitable social services, but that solving problems for our customers is how we best serve the business. If you’re a global social media platform, pumping billions into a name change, a new brand identity, and a pivot away from your core product into “the metaverse” simply because you fantasize about people interacting as disembodied legless digital avatars inside an immersive total-surveillance ad-delivery system does not solve a material problem for your customers, no matter how much rebranding promotional confetti you spray at them. On the other hand, directing those same billions into effective content moderation, eliminating rampant misinformation, mitigating political polarization, implementing child age verification systems, and reducing the prevalence of scams and human trafficking would, in fact, address real and vexing problems your customers are facing daily.

Fantasizing about people interacting as disembodied legless digital avatars inside an immersive total-surveillance ad-delivery system does not solve a material problem for your customers.

When businesses fail to identify a clear human problem around which they can craft a compelling value proposition, they inevitably fall back on the all-purpose catch-all of improved efficiency (doing the same with less) — or its close cousin, productivity (doing more with the same). But efficiency and productivity, however compelling they might be as marketing hooks, are not, per se, problems amenable to solutions, mainly because they have no fixed measure of success; every advance toward the goalposts simply results in the goalposts being moved further away.

Consider: A hundred years ago, philosophers and economists (there were no "futurists" then) warned that our obsession with ever-improving efficiency would lead to a future of overabundant leisure. John Maynard Keynes, citing the torrent of technological innovation the early 20th century had unleashed, foresaw 15-hour workweeks and a world in which citizens' biggest problem was figuring out what to do with all their leisure time. And yet somehow, despite the innumerable advances in efficiency over lo these hundred years, Americans regularly report feeling overwhelmed and overworked. When was the last time anyone you know complained about having too little to do? All those time-saving tools and productivity hacks seem to have accomplished exactly nothing — in fact, less than nothing; college-educated Americans report feeling more overwhelmed, more stressed, and busier than ever. Our worry over efficiency, nurtured and commodified by Big Tech and fashioned by capitalist imperatives into a bottomless appetite for consumer "fixes" and a thriving cult of productivity, is no more a real problem than a logo that doesn't sufficiently broadcast your abasement before the tech idol of the moment.

(And anyway, the goal of the design process is rarely "efficiency" in and of itself. One end result of a good design may very well be a more efficient process — but only because we prioritized making it easier for human beings to use the process, to collaborate within it, and to reach their desired outcomes. We achieve our objectives by designing for human beings and what they're trying to do, not by aiming for efficiency per se.)

After decades of products and services that promised to make us more efficient, where is the utopian future of boundless leisure it was all supposed to usher in?

All of this is not to say that we should let new technologies languish because their application is not immediately apparent. The speed with which shockingly capable large language models were released upon the world, and the pace at which they've improved since, has created the illusion that current AI tools are more akin to a discovery than an invention, some fundamental property of the universe that we have no choice but to put to immediate and profitable use. Rocketing to 100 million users in two months made OpenAI's toy the fastest-growing consumer software application in history, and the fact that it did so without a clear use case, other than being what might best be called a probabilistic content extruder, suggests that, as so many believed of the World Wide Web in 1998, it's certainly exciting, and so it must be good for something.

The World Wide Web eventually figured out its purpose: to hijack our dopamine reward pathways in order to divide, track, and surveil us in the service of getting us to buy more shit. (Kidding, not kidding.) And it's fair to say the early internet, too, started out as a novelty in search of applications, even as its origins with DARPA as a nuclear-strike-proof, decentralized communications system, and Tim Berners-Lee's 1990 creation of the hypertext-linked World Wide Web, gave us some strong hints about where it might prove most useful.

So the initial flurry of experimentation and exploration that followed the public release of OpenAI's first ChatGPT model in 2022 was gratifying to see and participate in. Each of us had a sandbox in which we could safely (for the most part) explore these tools, their capabilities, and the creative uses to which they could be put. Very quickly, and to the great surprise of exactly no one, a subset of users discovered a host of malicious and nefarious applications to which the tools might be pointed. Nevertheless, the magnitude of the public interest, the generalizability of applications, and the much-hyped near-future potential for still more capable models was like upending a dump truck of bloody chum in the water for Big Tech's shareholders and any business seeking a sliver of advantage in the never-ending war for attention.

The resulting frenzy has been a spectacle of hubris, irrationality, and greed, a desperate greased-pig catching competition in which all participants have found themselves degraded, covered in filth and the remnants of broken faith with their customers, and without a pig.

The AI frenzy calls to mind a corporate greased-pig catching competition in which all participants have found themselves degraded, covered in filth and the remnants of broken faith with their customers.

A casual glance at technology headlines at almost any time over the past several months reveals a portrait of customer experience straight from the mind of Hieronymus Bosch: Google's twisted AI Overviews scrape the nether regions of Reddit to recommend glue as a pizza topping; McDonald's AI-empowered drive-throughs screw up orders to the tune of hundreds of Chicken McNuggets and earn customer reviews like "dystopian"; Meta was so hot to get in bed with AI that its much-hyped, AI-fueled scientific writing tool, Galactica, had to be hastily euthanized three days after its launch because of the torrents of authoritative-sounding scientific bullshit it was spewing — what the burned PhDs who'd tried it called "little more than statistical nonsense at scale."

A casual glance at technology headlines at almost any time over the past several months reveals a portrait of customer experience straight from the mind of Hieronymus Bosch.

Microsoft's disastrous rollout of its AI-powered Recall feature, which was meant to envelop our PCs in a privacy-shredding total surveillance system because something something AI, was followed by a global meltdown of Windows machines whose origins lay in a botched update from cybersecurity firm CrowdStrike. The outage — which affected fewer than 1% of Windows machines yet still managed to disrupt airlines, banks, retailers, emergency services, and healthcare providers around the world — shone a very bright light on exactly how fragile our global networked infrastructure is even before it's handed over to AI systems that tell people to eat rocks and plagiarize without the slightest compunction.

The pressure to prioritize shareholder and market expectations over customer needs is so great that “AI washing” is now a thing: businesses are falsely advertising to consumers that a product or business includes AI when it actually doesn’t. As if, in the face of all this immiseration, customers are clamoring for more AI.

None of this, let's be clear, has the first thing to do with solving well-understood problems for customers. Mark Hurst, writing at his blog Creative Good, puts it perfectly:

“The AIs weren’t developed for us. They were never meant for us. They’re instead meant for the owners of the corporations: promising to cut costs, or employee count, or speed up operations, or otherwise juice the quarterly metrics so that “number go up” just a bit more — with no regard for how it affects the customer experience, the worker experience, the career prospects of creators and writers and musicians who have been raising the alarm about these technologies for years.”

IBM CEO Thomas Watson Jr. once observed that "Good design is good business," a statement that's been marshaled in support of design-led business cultures since 1973. It's lost none of its relevance since then; if anything, it's only grown more apt with each passing year, as the experience economy turns technologies and products into mere commodities. And yet big-business culture continues, mystifyingly, to move further and further away from a focus on the customer in favor of "growth" at all costs, the customer be damned — what Ed Zitron calls the Rot Economy, and what Cory Doctorow has memorably described as Enshittification.

Author Cory Doctorow

Because good design is in fact good business — and good design is fundamentally a human-centered process. But any tool that cannot reliably produce accurate results and has no actual understanding of human behavior is not remotely human-centered. Yes, AI can effectively mimic human behavior. But it is still pure mimicry. Just because an African grey parrot can effectively impersonate the sound of a woman laughing does not mean it understands the first thing about either women or humor. Similarly, current AI tools have no inherent interest in or understanding of the functional, social, and emotional needs of human beings, except in the most abstract and superficial ways. And whatever an AI may have surreptitiously scraped from an internet database or social platform has little or no relevance to your specific users, to their specific needs in their specific context. Throwing AI-enabled features at your product or service that you know may not be around in a year or, god help us, slathering your brand with the flashy iconography of AI because you're hoping to ride the bubble to an acquisition does nothing to help your customers solve their problems. In fact, it only creates more confusion for them, more frustration, a greater number of options to wade through, and a bigger and more impermeable barrier between them and you.

Any tool that cannot reliably produce accurate results and has no actual understanding of human behavior is not remotely human-centered.

In the generation since the first dot-com bubble burst, business strategy has undergone a profound transformation. The explosive development of the design field known as User Experience, with its roots in human factors and user interface design, has led businesses everywhere to adopt a wholly new approach to competing in the marketplace, one where success no longer lies in crushing the competition but in out-competing one's rivals in the mission to better serve customers. Today, UX is a cornerstone of product strategy, a critical component of customer satisfaction, the backbone of most innovation pipelines, and a field with a voice at the executive table. From Walmart and Fannie Mae to Citigroup and State Farm, the Fortune 500 has come to understand that its fortunes rest on continuously meeting customers' needs in useful, usable, and delightful ways.

And yet. It seems you can take the man out of the Paleolithic, but you can’t take the Paleolithic out of the man (not that it’s all men, but…). Something about a new technology makes even the most evolved business leaders forget everything they know and revert to their most primitive, reactionary, knuckle-dragging selves. New tool good, exciting, take new tool everywhere, hit things with new tool, show everyone how strong new tool make. Bam, bam, bam!

Meanwhile, the customers whose loyalty businesses have worked so diligently for years to win have watched in dismay as the products and services they've come to rely on have been, practically overnight, colonized by an infestation of poorly considered, hastily implemented AI-fueled "features" that, like a breakout of acne, make almost any interaction a nightmarish exercise in humiliation and futility.

None of this is to say there are no practicable implementations of AI in current products and services, to say nothing of the possibilities for heretofore unimagined new ways of better serving our customers. But there is one way to do this right, and it's the same human-centered process that has enabled organizations with design-led cultures to increase their revenues and shareholder returns at nearly twice the rate of their industry counterparts: start with your customers' needs, pains, challenges, and frustrations, and work iteratively from there to the right solution and the best technology, not the other way around.

There is one way to do this right, and it’s the same human-centered process that has enabled organizations with design-led cultures to increase their revenues and shareholder returns at nearly twice the rate of their industry counterparts.

Keep experimenting with AI, by all means. Point your R&D efforts at these tools, and encourage your own employees to play around with them to discover what works and what doesn't. Maybe there are even "efficiencies" to be gained, for whatever that's worth. But stop making your customers the guinea pigs in this misbegotten experiment, and stop hoping you can work out what these tools are good for by outsourcing that thankless and time-consuming task to the only people whose goodwill and trust are likely to see you through the economic shock that will follow the inevitable implosion of an AI bubble that even Goldman Sachs says is dangerously overhyped. This time, the fallout is all but certain to take down not just a swath of forgettable and ill-considered startups but the valuations of all the Big Tech names dominating AI — names that comprise a significant portion of the U.S. stock market's value and are mainstays of pension funds and retirement portfolios the world over.

Business leaders love to talk about the "moat" they have in place to fend off competition; many are about to find out what happens when the threat comes from inside the fortress. Mess with your customers' experience at your peril. If you want to know which is more critical to your business, your moat or your UX, keep shoving superfluous, half-baked AI "solutions" down your users' throats and find out.

Fire, ready, aim was originally published in UX Collective on Medium.
