Why making things easy can backfire.
In our rush to automate daily tasks with AI, we risk reliving product design mistakes from long ago.
We can learn a lot from Betty Crocker and IKEA.
Product design history lessons
Betty Crocker’s stumble and recovery
When General Mills introduced its “instant” cake mixes in the 1940s, they seemed perfect: Add water to a Betty Crocker mix, bake, and enjoy a delicious cake.
But sales disappointed, and market research revealed a surprising truth: target audiences overwhelmingly found the process “too easy.” Betty Crocker’s customers didn’t invest enough effort to feel proud. They didn’t feel they were properly caring for their families, and the product made them feel undervalued.
The solution? The recipe was modified to require people to add an egg.
This tiny, targeted addition of work made all the difference. Ads highlighted the egg step, sales soared, and Betty Crocker cake mix became famous.
IKEA, Legos and origami
A set of 2011 Harvard Business School studies tested how consumers valued products they assembled themselves versus products assembled by experts. The effect held for both utilitarian products (IKEA boxes) and fun products (Legos and origami animals): participants who constructed products valued them more highly than preconstructed versions of the same products.
Modern designers and businesses often focus on minimizing effort and saving customers time. And that often works.
But in many contexts and moments, subtracting effort backfires. Cognitive scientists call this the effort paradox.
The AI parallel
Today’s AI product designers face a similar challenge. In our enthusiasm to automate everything, we risk leaving users feeling disconnected and unrewarded. Just as 1940s homemakers wanted to feel invested in their baking, today’s users often want to feel — and be — meaningfully involved in AI-assisted work.
That’s good news, because strong collaboration between humans and AI assistants often yields better outcomes than AI models that go it alone. (We’ll dive into how you can leverage that in an upcoming Mindful AI Design article.)
But outcomes aside, removing too much customer effort from AI product interactions can rob customers of psychological ownership and satisfaction.
Real-world AI examples
AI writing assistants
In a 2024 University of Waterloo study, participants wrote short or long text prompts, which were fed into an AI service that generated stories.
People who created more detailed and extensive prompts reported notably stronger feelings of psychological ownership over the final stories, compared to those who provided shorter, simpler prompts. (To a point. The beneficial effects plateaued as the input neared the output story length — around 150 words.)
Notably, the perceived quality of the resulting stories didn’t vary based on added effort.
Reimagining the car owner’s manual
This principle played into my work on Smart Manual, an AI-powered conversational car manual and repair assistant. Early concept testing revealed a consistent theme: While respondents appreciated faster answers to car problems, they were uncomfortable relying solely on the AI’s advice and instructions. Two users explicitly requested the ability to cross-reference the original car owner’s manual to verify the AI’s troubleshooting reasoning.
Early sketches of the Smart Manual concept. In interviews, drivers expressed reservations and distrust about this version of the AI concept. They wanted more direct views into actual owner’s manuals.
We updated the Smart Manual interface to surface relevant diagrams and excerpts from the original owner’s manual where appropriate, and to always link clearly and directly from AI summaries to the source material.

That encourages drivers to verify the agent’s information and dig deeper. It also makes it easier to double-check the AI’s accuracy, and it helps drivers feel involved in maintaining, repairing, and learning about their cars.
This approach requires a bit more user effort: a few more taps, more scrolling, more thinking, and more deciding. But it builds trust and human involvement, and we hope it will boost the tool’s credibility. By framing the AI as an aid to making informed decisions rather than as a magical cure-all, we aim to amplify drivers’ capabilities.
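To make the pattern concrete, here’s a minimal TypeScript sketch of one way to model it. The names and shapes are hypothetical illustrations, not the actual Smart Manual code; the point is simply that every AI answer carries required citations back to verifiable manual passages.

```typescript
// A minimal sketch of the citation-linking pattern. All names here are
// hypothetical illustrations, not the actual Smart Manual implementation.

interface ManualExcerpt {
  sectionId: string; // identifier for a section of the owner's manual
  title: string;     // heading of that section
  pageUrl: string;   // deep link to the original manual content
  snippet: string;   // verbatim excerpt shown alongside the AI summary
}

interface AssistantAnswer {
  summary: string;          // the AI-generated advice
  sources: ManualExcerpt[]; // manual passages the summary draws on
}

// Refuse to display answers that arrive without grounding, so the UI
// never shows an unverifiable summary.
function renderAnswer(answer: AssistantAnswer): string {
  if (answer.sources.length === 0) {
    throw new Error("Answer has no manual citations; refusing to render.");
  }
  const citations = answer.sources
    .map((s, i) => `[${i + 1}] ${s.title}: ${s.pageUrl}`)
    .join("\n");
  return `${answer.summary}\n\nFrom your owner's manual:\n${citations}`;
}
```

Treating sources as a required field, rather than optional decoration, bakes the verification step into the design instead of leaving it as an afterthought.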
Finding the Sweet Spot
As these examples show, setting the stage for meaningful human engagement in AI design requires the right balance between over-automation (excluding people from work where they bring, and derive, value) and human drudgery (bogging people down with work they don’t enjoy and that AI is well suited to deliver).
It’s about identifying those places where a bit of extra effort can boost users’ sense of accomplishment, control, and investment — and foster a sense of ownership and mastery.
That sweet spot will vary for different users and contexts. Some will call for more hands-on involvement, while others will need more automation.
Guidelines for AI Product Designers
1. Find the optimal human touchpoints. Distinguish between genuine friction and moments of meaningful effort — where human involvement boosts value and enjoyment. Consider:
• What parts of this process give people a sense of accomplishment?
• Where does human judgment add genuine value?
• When is the effort itself the point, because people want to grapple with a problem or topic?
• How can we augment, rather than replace, human capabilities?
2. Preserve human agency. Give people clear control over key decisions.
3. Show the work. Make AI processes transparent enough to keep people informed and involved, and design clear means to verify AI outputs. (This requires a nuanced balance. Too much explanation can bog down users — and AI tools too.)
4. Help people learn, and help them refine explorations based on their learnings. That often beats flat, one-shot answers.
5. Consider customization. For more advanced users and contexts, controls for people to adjust levels of automation can be appropriate.
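For guideline 5, here’s a rough sketch, again in TypeScript with hypothetical names, of how an adjustable automation level might determine which steps the AI performs on its own, asks about first, or hands back to the user:

```typescript
// Hypothetical sketch of user-adjustable automation levels.

type AutomationLevel = "manual" | "assisted" | "autonomous";

interface TaskStep {
  name: string;
  run: () => void;       // the automated action
  confirmPrompt: string; // what to ask when the user stays in the loop
}

// Decide, per step, whether the AI steps aside, asks first, or acts alone.
function executeStep(
  step: TaskStep,
  level: AutomationLevel,
  confirm: (message: string) => boolean
): void {
  switch (level) {
    case "manual":
      console.log(`Over to you: ${step.name}.`); // user performs the step
      break;
    case "assisted":
      if (confirm(step.confirmPrompt)) step.run(); // human approves, AI executes
      break;
    case "autonomous":
      step.run(); // AI executes without interruption
      break;
  }
}
```

Even a coarse three-level dial like this lets hands-on users keep the satisfying parts of the work, while others delegate more.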
Remember: effort isn’t our enemy.
WALL-E’s world of over-automation. It was a beautiful movie. Let’s not make it a reality.
Looking Ahead
There’s a lot more to dig into here: research on human effort, on perceptions of effort, and on how both can shape the design of AI products.
Studies already suggest that:
• Human perceptions of effort aren’t fixed and can be shaped dramatically by learning and experience.
• Different people judge levels of effort differently.
• Rewarding mental labor now can boost people’s willingness to expend effort in the future.
As AI capabilities expand, the temptation to over-automate will grow stronger. That’s not an AI thing; it’s a human thing. We repeat that story with every new wave of technology. (Remember The Tragic Life of Clippy?)
It’s our job as mindful designers to steer past that tendency.
It’s time to set aside the false binary of “manual vs. automated” to ask more nuanced questions.
This will be key to designing AI products that respect human agency while amplifying human potential.
What do you think?
Have you felt unsatisfied with an AI tool that made something “too easy”?
Have you encountered examples of the effort paradox in customers’ reactions to an AI product or service you’re working on?
Have you had any success finding the sweet spot between over-automation and too much hassle? Or finding those optimal human touchpoints, the moments when injecting a bit of human effort can boost enjoyment or engagement?
Please tell us about it in the comments.
“Satisfaction lies in the effort, not in the attainment…” ― Mahatma Gandhi
Part of the Mindful AI Design series. Also see:
Do mosquitoes bite leeches? Keys to calibrating trust in AI product design
Black Mirror: “Override”. Dystopian storytelling for humane AI design
Related
The Effort Paradox: Effort Is Both Costly and Valued — PMC
The “IKEA Effect”: When Labor Leads to Love — Michael I. Norton, Daniel Mochon, Dan Ariely
Rewarding cognitive effort increases the intrinsic value of mental labor | PNAS
15 Times to use AI, and 5 Not to — by Ethan Mollick