
Miro vs. FigJam: how their AI assistants stack up

Putting AI to work.

Miro vs. FigJam. Credit: Brutally honest

Over the past year, the battle between Miro and FigJam has gotten a new layer of intrigue: AI. Both platforms have introduced AI assistants designed to make workshopping faster, easier, and more creative.

But as someone who spends half their week running workshops and brainstorming sessions, I had one burning question: are these AI features actually useful, or are they just shiny distractions?

So, I did what any good product manager would do — I spent way too much time testing them. Here’s the breakdown of what I learned.

The setup: what these AI assistants claim to do

Miro AI: Miro’s assistant promises to help with summarizing sticky notes, generating ideas, and even automatically organizing content. It’s like the digital equivalent of that one colleague who loves turning chaos into order.

FigJam AI: FigJam leans into being your brainstorming buddy. Think ideation prompts, group clustering suggestions, and contextual tips to help you keep momentum during your session.

On paper, they sound like they’ll save time and make workshops feel seamless. But how do they work in real scenarios?

The tests: putting AI to work

I ran three tests with each platform to compare their AI assistants in action.

1. Generating ideas:

I used the prompt: “Suggest ideas for a new team-building activity.”

Miro delivered 15 sticky notes with team-building activity suggestions. While the breadth was decent (think art workshop, charity volunteer day, and escape rooms), it felt like the kind of list you could get from a quick Google search.

Miro gave me 15 sticky notes with what felt like random ideas

Curious to test the depth, I gave it a follow-up prompt:

“Deep dive into the escape room challenge.”

The response? Miro provided a generic list of things to consider, like “theme,” “venue options,” and “team size.” While somewhat useful, it didn’t add much value beyond surface-level suggestions.

When nudged for more detail on a specific idea, Miro gave me another 15 sticky notes with themes for exploring the “escape room challenge”

FigJam’s take

FigJam took a slightly different route. Instead of listing a variety of activities, it structured a potential workshop, including an ice-breaker, collaborative challenges, and a reflection session. While this approach felt more actionable, it leaned heavily on process rather than creative ideation.

FigJam gave a workshop structure with three sections: an icebreaker, a challenge, and a feedback section

To match Miro’s test, I gave FigJam the same follow-up prompt:

“Focus on the escape room challenge.”

FigJam responded with a mind map that broke down the activity into key elements:

Theme ideas (e.g., haunted house, pirate adventure)

Puzzles (e.g., logic games, physical challenges)

Team dynamics (e.g., roles, communication tips)

Time constraints (e.g., countdown, pressure)

Room elements (e.g., props, lighting)

This additional layer of detail made FigJam’s response feel more tailored and creative, with practical suggestions that extended beyond just the initial idea.

FigJam, when prompted for more detail on the escape room challenge, broke things down into a tree

Key takeaways

Miro excels in generating a variety of ideas but struggles to go deeper without additional input.

FigJam shines in creating a structured framework and offers more detailed follow-ups, particularly for planning activities like the escape room challenge.

In this round, FigJam’s ability to add meaningful depth and organize the information gave it the edge — especially for teams that want to move quickly from ideas to execution.

2. Organizing content:

I tested both Miro and FigJam by dropping 20 sticky notes into each board and asking their AI to organize them into clusters.

Miro easily clustered the words into groups

Miro handled this effortlessly, grouping the words into meaningful categories with ease. The user experience was straightforward — simple to navigate — and the results were spot on.

Miro clustered the words accurately

FigJam, on the other hand, struggled. While the UX was equally intuitive (you select items and use a dropdown to cluster), the output was far less reliable. For instance, it categorized my cat as furniture and my breakfast banana as a color!

FigJam AI did not manage to cluster simple words into meaningful categories
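
Misses like that hint at what may be happening under the hood. Neither vendor documents its clustering method, but features like this commonly pair text embeddings with a standard clustering algorithm, and a weak embedding model produces weird clusters. Here is a minimal illustrative sketch of that general recipe in Python; the libraries, model, and cluster count are all my assumptions, not anything Miro or FigJam has confirmed.

```python
# Illustrative sketch only: a common embedding-plus-clustering recipe.
# This is NOT Miro's or FigJam's actual implementation (undocumented).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

notes = ["cat", "banana", "sofa", "wolf", "apple", "chair",
         "penguin", "mango", "bookshelf", "salmon", "grape", "stool"]

# Turn each sticky note into a vector that captures its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose embedder
vectors = model.encode(notes)

# Group the vectors; the "right" number of clusters is itself a guess.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for cluster_id in range(3):
    members = [n for n, label in zip(notes, kmeans.labels_) if label == cluster_id]
    print(f"Cluster {cluster_id}: {members}")
```

How closely the resulting groups match human intuition depends almost entirely on the embedding model, which could explain why two tools given identical sticky notes behave so differently.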

As a bonus, I decided to push Miro further by testing it with a more challenging list of words. Once again, it performed impressively, grouping the words into accurate and meaningful clusters without much trouble.

Miro’s output when given a harder list of words was pretty accurate.

For comparison, I fed the same list of words to ChatGPT to see how it would cluster them. Here’s the result:

“Here’s a possible grouping for the challenging list:

Natural Elements: Quartz, Ash, Seaweed, Lichen, Maple, Dandelion, Thorn

Animals: Wolf, Penguin, Salmon, Scorpion, Bat

Tools: Shovel, Wheelbarrow, Pulley, Wrench, Grinder

Guidance/Reflection: Compass, Mirror, Lantern

These groupings reflect different overarching themes like nature, animals, tools, and abstract or symbolic items. However, some words like “Ash” or “Lantern” could fit multiple groups depending on context, making it less straightforward.”
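
If you want to rerun this comparison yourself, the test is easy to script against a general-purpose model. Here is a minimal sketch using the OpenAI Python client, with the word list taken from the output above; the model name is an assumption, so swap in whatever you have access to.

```python
# Sketch: scripting the clustering test against a general LLM.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

words = ["Quartz", "Ash", "Seaweed", "Lichen", "Maple", "Dandelion", "Thorn",
         "Wolf", "Penguin", "Salmon", "Scorpion", "Bat",
         "Shovel", "Wheelbarrow", "Pulley", "Wrench", "Grinder",
         "Compass", "Mirror", "Lantern"]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model works
    messages=[{
        "role": "user",
        "content": "Cluster these words into named groups and flag any "
                   "ambiguous placements: " + ", ".join(words),
    }],
)
print(response.choices[0].message.content)
```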

Overall, Miro clearly outshines FigJam in clustering accuracy and reliability.

3. Summarizing key takeaways:

Finally, I created a deliberately chaotic mock brainstorming session to simulate the typical messiness of workshop outputs — fragmented ideas, incomplete thoughts, and scattered priorities. The session was split into two sections: one for ideation (10 sticky notes) and another for next steps (10 sticky notes). I then asked each AI assistant to summarize the workshop.

Miro did a pretty good job at transforming the chaos of the workshop into a clear document

Miro excelled at making sense of the chaos, transforming the output into a clear and structured document. It effectively grouped the ideation section into relevant sub-themes and organized the next steps, making the summary practical and easy to use.

FigJam, on the other hand, struggled to separate ideation from next steps, which made the summary harder to follow. While the addition of emojis was a fun touch, the lack of clear differentiation made the output less helpful.

The output from FigJam was not very helpful, as it didn’t differentiate ideation from next steps

Once again, Miro came out ahead, providing a well-organized and actionable summary that effectively categorized the workshop’s key points.
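
For anyone who wants to approximate this test outside a whiteboard tool, here is a rough sketch of the idea. The sticky-note text is invented for illustration and the model name is an assumption; the point is that labeling the two sections explicitly in the prompt hands the model exactly the differentiation FigJam’s summary lacked.

```python
# Sketch: summarizing a messy two-section workshop with a general LLM.
# The note text below is invented for illustration; model name is assumed.
from openai import OpenAI

ideation = ["gamify onboarding?", "weekly demo ritual", "async retro doc"]
next_steps = ["draft onboarding quiz", "book a demo slot", "share retro template"]

prompt = (
    "Summarize this workshop, keeping the two sections separate.\n\n"
    "IDEATION:\n" + "\n".join(f"- {n}" for n in ideation) + "\n\n"
    "NEXT STEPS:\n" + "\n".join(f"- {n}" for n in next_steps) + "\n\n"
    "Group the ideation notes into sub-themes and turn the next steps "
    "into a list of action items."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumed
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```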

4. Approaches to AI: pragmatic vs. playful

The contrast between Miro and FigJam’s AI assistants ultimately reflects their different problem-solving philosophies:

Miro: Pragmatic and utilitarian. Miro’s AI is built for teams that value order and efficiency. It’s a dependable assistant for summarizing, organizing, and handling chaotic inputs with precision. Perfect for corporate workshops, project managers, or structured strategy sessions where clarity is non-negotiable.

FigJam: Playful and creative. FigJam’s AI brings a more dynamic, brainstorming-friendly vibe to the table. Its strength lies in generating ideas, structuring frameworks, and offering fun, collaborative moments — ideal for design teams, startups, or anyone looking to inject personality into their workshops.

5. The verdict: which one should you use?

It all comes down to the type of workshops you run and the outcomes you need.

Choose Miro AI if…
You need structured outputs and actionable insights. It’s your go-to for managing messy ideas, summarizing key takeaways, and delivering clarity in a fast-paced environment.

Choose FigJam AI if…
You prioritize creativity and enjoy a more interactive experience. FigJam’s AI thrives in ideation sessions and adds a touch of playfulness to your workflow, even if it occasionally stumbles with structure.

Final thoughts: AI in whiteboards — game-changer or just hype?

Both Miro and FigJam are integrating AI in ways that speed up mundane tasks and enhance collaboration, but they’re far from perfect. A notable limitation I found is the inability to refine outputs or give specific feedback to the AI assistant. Without the flexibility to go back and forth, their potential is sometimes undercut by rigidity.
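
To make that limitation concrete: with a raw chat API, refinement is just appending your feedback to the conversation history and asking again, a loop neither assistant exposes today. A minimal sketch (model name assumed):

```python
# Sketch: the back-and-forth refinement loop the whiteboard assistants lack.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Suggest ideas for a new team-building activity."}]

first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Feed back specific criticism and regenerate with full context.
history.append({"role": "user",
                "content": "Too generic. Keep only ideas that suit a remote team of eight."})
refined = client.chat.completions.create(model="gpt-4o", messages=history)
print(refined.choices[0].message.content)
```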

Still, these tools push the boundaries of how we collaborate, and it’s exciting to imagine how they’ll evolve to better anticipate user needs.

For now, I find myself using both — Miro when I need precision and FigJam when I want creativity. What about you? Have you tried their AI assistants yet? Which one do you think works best for your team?

If you want to go further, here are two pieces I can’t recommend enough:

How UX can be a key differentiator for AI

Design as Thought: AI and the Future of Design

Miro vs. FigJam: how their AI assistants stack up was originally published in UX Collective on Medium.
