Jun 26, 2024

AI anxiety and how to design for it: Resources and best practices


Nearly one in three adults feels AI anxiety. By understanding it through fear acquisition theory and designing user-centered, transparent AI systems, we can reduce these concerns and foster trust and confidence in AI technologies.

Source: https://www.rappler.com/tachyon/2023/06/20230621-Anxiety-opportunity-AI.jpg?resize=1280%2C720&zoom=1

Artificial Intelligence has been a field of research since the mid-20th century. However, recent models like ChatGPT and Midjourney have pushed the boundaries of AI capabilities, leading to increased attention and accelerated development in the field.

This rapid progression has led to AI anxiety among nearly one-third of adults, with 8% describing feelings of fear or dread. This emotional response stems from various concerns, including the future of employment and potential long-term societal issues. The debate surrounding AI is polarized — some champion its potential to revolutionize areas like scientific research, while others remain apprehensive. In times of such uncertainty, it is only natural to feel anxious.

In this article, I will examine AI anxiety through the lens of fear acquisition theory, breaking it down into four components.

By empathizing with users’ anxiety and understanding its underlying elements, AI product creators can design trustworthy solutions that address these concerns. This user-centered approach will help reduce anxiety, encouraging greater acceptance and confidence in AI-driven advancements.

Computer anxiety and AI anxiety

Computer anxiety, studied since the 1980s, relates to unease about using computers and has decreased over time. AI anxiety, on the other hand, shares similarities but is distinct due to its unique characteristics. Both cause learning anxiety, but their differences make AI anxiety a separate concern in today’s world.

What are the differences?

- AI can make autonomous decisions, so it can operate without human control, which is different from computers.
- AI can take many forms, both virtual and physical, such as human figures or digital avatars. In contrast, early computers lacked this diversity in appearance.
- AI, such as chatbots and anthropomorphized helpers, can offer personalized services. Computers lack these personalized service features.

Due to these differences, specific concerns unique to AI have emerged. Here are a few of them:

- There are concerns that AI may produce artificial consciousness, a condition in which AI exists independently and may not be controlled by humans. By contrast, consciousness was never discussed in previous computer anxiety studies.
- Concerns about the opacity of AI operations and decision-making processes bring unpredictable risks. Research on computer anxiety doesn't address the impact of machines making decisions independently.
- AI makes decisions based on calculations and pros-and-cons analysis, which can lead to concerns about discrimination and bias; computer anxiety doesn't involve discrimination.

AI anxiety and fear acquisition theory

Rachman’s (1977) fear acquisition theory and Menzies’s (1995) non-associative fear acquisition theory are used by psychologists and researchers to analyze anxiety in general. These theories have been foundational in the field of psychology and are employed by professionals and scholars studying various anxiety disorders. Together, they form an integrated model for analyzing the acquisition of AI anxiety through four pathways:

- Conditioning: fear arises from direct, traumatic experiences. For example, if someone is bitten by a snake, they may develop a fear of snakes.
- Vicarious exposure: fear develops by observing others' traumatic experiences. For instance, watching someone get bitten by a snake can lead to a fear of snakes.
- Information transmission: fear is acquired by being informed about potential dangers. Being told that snakes are dangerous and can bite may cause someone to fear snakes, even without personal or observed experience.
- Innate fears: fears that are inherent and not based on personal experiences, such as a natural fear of heights or darkness.

Jian Li and Jin-Song Huang took their research further by conducting a survey with 494 participants. Their goal was to analyze how AI anxiety manifested in these individuals and categorize it according to the acquisition pathways from fear acquisition theory. Here is the final categorization:

Chart by the author

Let’s examine each category and find examples of how AI product designers can make user interaction with the product more comfortable or adjust the design process to meet the needs of users dealing with AI anxiety.

Path 1: Conditioning

Privacy violation anxiety

Privacy violation anxiety occurs when users experience direct violations of privacy by AI. AI systems often require large amounts of personal and sensitive data, leading to concerns about constant surveillance and loss of personal privacy. The risk of data breaches and cyberattacks further intensifies fears about the security of personal information. Additionally, individuals often feel a loss of control over their own data, as they typically have little say in how it is used, shared, or stored once collected by AI systems. This perceived lack of control, coupled with the fear that AI might misuse personal data for intrusive or harmful purposes, such as targeted advertising or identity theft, can significantly contribute to anxiety.

One possible way to reduce privacy violation anxiety is to comply with existing privacy regulations and clearly explain to users what data is used, how, and why. For example, in the screenshot below, Workable describes the types of personal data it collects in plain, easy-to-understand language. This transparency helps build trust by ensuring users are informed about how their data is handled, thereby reducing anxiety related to privacy violations.

Source: https://mobbin.com/screens/8404a62c-0ceb-4dca-a586-ed08e5d18b65
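To make this kind of disclosure concrete in a product, one minimal sketch is to model each data category together with its purpose and retention period, then render it as plain language. The `DataUse` shape, wording, and retention periods below are illustrative assumptions, not Workable's actual implementation:

```typescript
// Illustrative model of a plain-language data-usage disclosure.
// The shape, wording, and retention periods are assumptions for this
// sketch, not any vendor's actual data model.
interface DataUse {
  category: string;   // what is collected, in plain words
  purpose: string;    // why it is needed
  retention: string;  // how long it is kept
}

const disclosures: DataUse[] = [
  {
    category: "Name and contact details",
    purpose: "to create and manage your account",
    retention: "while your account is active",
  },
  {
    category: "Resume and work history",
    purpose: "to match you with relevant roles",
    retention: "for 24 months after your last activity",
  },
];

// Render the disclosure as user-facing plain language rather than legal text.
function renderDisclosure(uses: DataUse[]): string {
  return uses
    .map(u => `We use your ${u.category.toLowerCase()} ${u.purpose}. We keep it ${u.retention}.`)
    .join("\n");
}

console.log(renderDisclosure(disclosures));
```

Keeping the disclosure as structured data also makes it auditable: a new data category either gets an entry here, with a stated purpose and retention period, or it never appears in the user-facing text.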

Of course, compliance with privacy regulations is crucial, as it ensures that organizations follow legal standards designed to protect user data. Some key privacy regulations include:

- General Data Protection Regulation (GDPR): Enforced in the European Union, GDPR mandates strict data protection and privacy laws, giving users more control over their personal data.
- California Consumer Privacy Act (CCPA): Provides California residents with the right to know what personal data is being collected and how it is used, as well as the right to access, delete, and opt out of the sale of their data.
- Health Insurance Portability and Accountability Act (HIPAA): In the United States, HIPAA sets standards for protecting sensitive patient data, ensuring that healthcare providers safeguard medical information.
- Personal Data Protection Act (PDPA): In Singapore, the PDPA governs the collection, use, and disclosure of personal data, ensuring that organizations manage personal data responsibly.
- Children’s Online Privacy Protection Act (COPPA): This U.S. regulation protects the privacy of children under 13 by requiring parental consent to collect and use children’s personal information.

Here are some resources to learn more about privacy in the AI context:

- EU AI Act: first regulation on artificial intelligence
- KPMG report: Privacy in the new world of AI. How to build trust in AI through privacy.
- The impact of the General Data Protection Regulation (GDPR) on artificial intelligence, a study by Professor Giovanni Sartor for the European Parliament

Bias behavior anxiety

Bias behavior anxiety occurs when users experience discrimination by AI. This may happen because AI can treat different groups unfairly based on biased data or profit-driven designs. Examples include gender and race discrimination in facial recognition and financial status assessments. Such biases make people anxious and uncomfortable, leading to negative emotions toward AI.

To prevent biased behavior in AI systems, designers can adopt inclusive design practices:

- Conduct Extensive User Research and Testing: Engage with diverse groups to understand different perspectives and potential biases.
- Have a Human-in-the-Loop: Incorporate human oversight in critical decision-making processes to catch and correct biased AI behavior.
- Feedback Mechanisms: Create channels for users to report experiences of bias or discrimination, and ensure their concerns are addressed (see the sketch after the Perplexity example below).

For example, Perplexity lets users give feedback on generated answers, including reporting any harmful or offensive content. This helps mitigate biased behavior by enabling continuous monitoring and improvement of the AI system, since user feedback surfaces issues that might not have been identified during initial testing.

Source: https://mobbin.com/screens/34733667-c280-4b58-8225-074d5eeef3e1
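As a minimal sketch of such a feedback channel, the flag on each answer can post a structured report to a review queue. The `/api/feedback` endpoint, the `FeedbackReason` values, and the payload shape are assumptions for illustration, not Perplexity's actual API:

```typescript
// Hypothetical bias/harm report sent when a user flags an AI answer.
// The endpoint and payload shape are assumptions for this sketch.
type FeedbackReason = "inaccurate" | "biased" | "harmful" | "offensive";

interface FeedbackReport {
  answerId: string;        // which generated answer is being flagged
  reason: FeedbackReason;  // structured reason, so reports can be triaged
  comment?: string;        // optional free-text detail from the user
  createdAt: string;       // ISO timestamp for auditing
}

async function reportAnswer(
  answerId: string,
  reason: FeedbackReason,
  comment?: string,
): Promise<void> {
  const report: FeedbackReport = {
    answerId,
    reason,
    comment,
    createdAt: new Date().toISOString(),
  };
  // Route the report to a review queue where a human can inspect it,
  // the human-in-the-loop step from the list above.
  await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```

Structuring the reason as a fixed set of values rather than free text alone also makes the problem measurable: a spike in "biased" reports on a given topic is a signal the team can act on.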

Path 2: Vicarious exposure

Job replacement anxiety

Job replacement anxiety is the fear of being replaced by AI, driven by observing others’ experiences or worrying about AI taking over various jobs.

Recent research shows that employees' mental health is already affected: the American Psychological Association's 2023 Work in America Survey found that nearly 2 in 5 workers (38%) worry that AI might make some or all of their job duties obsolete in the future. (Source)

The advancement of self-driving cars suggests that human drivers might soon be replaced by automated systems. AI in healthcare could displace many practitioners, and improvements in natural language processing could make human translators redundant. This anxiety falls under vicarious exposure because, although AI has not yet replaced human labor on a large scale, the fear stems from speculations about the future. Since most people haven’t experienced being replaced by AI, this type of anxiety is classified as observed anxiety, based on witnessing potential future changes rather than direct experience.

Addressing this type of anxiety involves adopting a user-centric approach. This includes:

- Involve employees in designing and implementing AI systems to ensure these tools are user-friendly and meet their needs.
- Focus on the human user to ensure comfortable and engaging interaction with the system.
- Provide mechanisms for continuous employee feedback on AI systems, allowing iterative design improvements based on that feedback.
- Highlight the human skills essential for successful human-AI collaboration to mitigate concerns about job replacement.

For example, Framer emphasizes human creativity with the title of its hero section, “The Internet is your canvas.” The phrase attracts attention through the principle of personalization and the serial position effect, reinforced by the line break that, at any screen resolution, places a key word at the start of the second line.

Source: https://mobbin.com/screens/67ae6a72-325b-4fe7-8897-65d8b1b941e4

Similarly, Runway accentuates human creativity, underscoring the essential role of humans in AI collaboration. These approaches reinforce the value of human creativity and skill in the era of AI, reducing anxiety about job replacement.

Source: https://mobbin.com/screens/c28d5610-1a17-4ee0-af32-0cfa4788cbb7

Learning anxiety

Learning anxiety is caused by observing others' experiences with AI or by the challenge of learning it oneself. It stems from a lack of self-confidence in mastering AI, which is often perceived as difficult, and that perceived difficulty can itself generate significant anxiety. Just as learning to use computers can cause anxiety, so can learning AI. Because AI is complex and algorithmic, many people find it hard to learn, especially now that AI programs have surpassed human experts in certain fields, which further diminishes confidence. Seeing AI as a threat while feeling unable to learn it creates anxiety. Since most people experience this through observation rather than direct experience, it is a form of vicarious exposure.

Addressing this anxiety involves creating high-quality and accessible AI learning resources, including step-by-step tutorials that gradually introduce AI concepts, starting with the basics before moving to more advanced topics. For example, Clay provides hands-on advice in the context of the task the user is performing. This approach reduces cognitive load compared to, for example, an onboarding checklist. Additionally, Clay offers links to videos that demonstrate examples in practice, providing visual demonstrations and addressing different channels of information intake, such as auditory and visual learning styles, thereby making the learning process more comprehensive and less intimidating.

Source: https://mobbin.com/screens/34195b59-190f-460f-9f7f-32f4a1eb24a1
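A minimal sketch of this kind of contextual, step-by-step guidance: hints are keyed to the task the user is currently performing and revealed one at a time, from basic to advanced. The registry, task names, and video URL below are hypothetical, not Clay's actual implementation:

```typescript
// Hypothetical contextual-help registry: hints are tied to the task the
// user is performing and introduced progressively, basic to advanced.
interface Hint {
  step: number;       // order within the tutorial sequence
  text: string;       // short, plain-language instruction
  videoUrl?: string;  // optional video for visual/auditory learners
}

const hintsByTask: Record<string, Hint[]> = {
  "import-data": [
    { step: 1, text: "Start by connecting a data source." },
    {
      step: 2,
      text: "Map each column to a field.",
      videoUrl: "https://example.com/mapping-demo", // placeholder URL
    },
  ],
};

// Show only the next unseen hint for the current task, so users are not
// overwhelmed by a full onboarding checklist up front.
function nextHint(task: string, completedSteps: number[]): Hint | undefined {
  return (hintsByTask[task] ?? [])
    .filter(h => !completedSteps.includes(h.step))
    .sort((a, b) => a.step - b.step)[0];
}

console.log(nextHint("import-data", [1])); // -> the step-2 hint with its video link
```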

Path 3: Transmission of information and instruction

Existential risk anxiety

Existential risk means the chance that all intelligent life on Earth could be wiped out. When it comes to AI, existential risk anxiety is the fear that AI might destroy human life or drastically limit our future potential. It might sound exaggerated, but look at the numbers: in the anxiety study conducted by Joud Mohammed Alkhalifah and colleagues, 96% of participants experienced the fear of death related to the rapid advancements in AI.

Another concern is that if AI tries to achieve goals set by humans, it might make mistakes and harm us instead, which also adds to the anxiety. Right now, the threat of AI wiping out humans isn’t real, but media stories with buzzwords make people worry that it could happen in the future. This type of anxiety is based on what people hear and read about AI rather than direct experience.

Source: https://mobbin.com/screens/bb6d5226-e40c-4a16-bcfc-22b7bf44f690

Ethics violation anxiety

Ethics violation anxiety arises from fears that AI might behave unethically when interacting with humans. As AI becomes more integrated into our lives, ethical concerns grow. For instance, how should self-driving cars balance the safety of passengers and pedestrians? Is it ethical for people to form romantic relationships with AI? Is it unethical for AI to cheat humans, similar to dishonesty among humans?

These questions make people anxious about AI’s ethical implications.

How can we reduce this anxiety? Designers can ensure AI systems comply with existing regulations and standards related to ethics and human rights. Since ethics can be a somewhat vague concept, it is more practical and objective to rely on established regulations:

Key regulations:

- EU: General Data Protection Regulation (GDPR) [summary] and the AI Act
- USA: California Consumer Privacy Act (CCPA) [summary] and the Algorithmic Accountability Act (proposed) [summary]
- Singapore: Personal Data Protection Act (PDPA)

Key standards:

- IEEE 7000: provides guidelines on various ethical aspects, including transparency, accountability, and human rights considerations.
- ISO/IEC 27001: specifies requirements for an information security management system (ISMS).
- ISO/IEC JTC 1/SC 42: focuses on AI standards, covering topics such as AI concepts, terminology, and trustworthiness.
- NIST AI Risk Management Framework: helps organizations design AI systems that are reliable and robust and that minimize risks to individuals and society.
- OECD AI Principles: offers guidelines for the responsible development and use of AI.

Path 4: Innate fears

Artificial consciousness anxiety

Artificial consciousness anxiety is the inherent worry that artificial consciousness might undermine the uniqueness of human intelligence. Many experts are skeptical about the feasibility of creating truly conscious AI, arguing that consciousness is deeply tied to biological processes and that current AI lacks the necessary complexity and integration of information required for true self-awareness. Despite this skepticism, the concept is frequently explored in movies and other media. This uncharted territory creates a cognitive and emotional response of anxiety as individuals face a situation with no past experiences or coping mechanisms to guide them. The resulting uncertainty and fear arise from the difficulty in predicting and managing the potential implications of AI consciousness.

Artificial consciousness anxiety can be closely linked to the concept of the Uncanny Valley. This phenomenon describes people’s discomfort and unease when encountering entities that appear almost human but are not quite perfect in their resemblance. As AI advances and begins to exhibit behaviors and appearances increasingly similar to humans, this near-human likeness can evoke a sense of eeriness and threat.

For example, when asked whether it is a good friend, Pi could have created an uncanny effect by explaining why it is. Instead, it tells the user that it is an AI system and states the value it offers, which encourages critical thinking from the user.

Screenshot by the author

Replika answers the same question with “Maybe,” which may create an uncanny valley effect.

Screenshot by the author

This is a very under-researched area, and the consequences can be worse than the Uncanny Valley, such as treating AI as a sentient being and developing unhealthy attachments. There have been cases highlighting the dangers of such behavior: for example, a Belgian man died by suicide after developing an emotional connection with a chatbot. This phenomenon is related to what some call the “ELIZA effect,” the tendency to attribute human understanding and emotions to AI systems, which can sometimes lead to serious emotional and psychological consequences.

To address artificial consciousness anxiety, designers can take several measures when designing AI systems:

- Clear AI Identity: Make it clear that the AI is not a human. Regularly remind users that they are interacting with a machine, not a sentient being. This can be achieved through design elements and direct communication within the interface (a minimal sketch follows this list).
- Limit Emotional Depth: Avoid designing AI interactions that mimic deep emotional connections. Instead, focus on practical and functional support without delving into areas that could create emotional dependency.
- Context-Appropriate Responses: Ensure AI responses are context-appropriate and avoid overly intimate or personal language that might encourage users to develop strong emotional attachments.
- Human Moderation: Where possible, include human oversight to monitor interactions, especially when users display signs of emotional distress.
- Collaboration with Experts: Work with psychologists and ethicists to understand the potential impact of AI interactions on mental health and incorporate their insights into the design process.
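As a minimal sketch of the "Clear AI Identity" measure, a chat layer could periodically inject a system-level disclosure into the conversation. The cadence and wording below are illustrative assumptions, not any specific product's behavior:

```typescript
// Hypothetical chat wrapper that reminds users they are talking to an AI.
// The reminder cadence and copy are illustrative assumptions.
interface Message {
  author: "user" | "assistant" | "system";
  text: string;
}

const REMINDER_EVERY_N_TURNS = 10; // assumed cadence, tune per product
const AI_IDENTITY_NOTICE =
  "Reminder: you are chatting with an AI assistant, not a person.";

function withIdentityReminders(history: Message[], incoming: Message): Message[] {
  const next = [...history, incoming];
  const assistantTurns = next.filter(m => m.author === "assistant").length;
  // Inject a system disclosure every N assistant turns so long
  // conversations never drift into seeming person-to-person.
  if (assistantTurns > 0 && assistantTurns % REMINDER_EVERY_N_TURNS === 0) {
    next.push({ author: "system", text: AI_IDENTITY_NOTICE });
  }
  return next;
}
```

Rendering the notice with a distinct visual style (a system banner rather than a chat bubble) further separates it from the assistant's voice.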

Lack of transparency anxiety

Lack of transparency anxiety refers to an innate fear stemming from the opaque nature of AI decision-making mechanisms. This anxiety taps into our deep-seated fear of the unknown, a survival mechanism that triggers uncertainty about AI’s capabilities and intentions. Our brains are wired to anticipate potential threats in unfamiliar situations, and the ambiguity surrounding AI consciousness can evoke a sense of danger. The subconscious mind constantly seeks order and predictability, and the potential for AI to operate beyond human control disrupts this mental equilibrium, heightening our anxiety.

To address this path of acquiring AI anxiety, designers can integrate features that provide intuitive, easy-to-understand explanations of AI decisions directly within the user interface. Using tooltips, pop-ups, and sidebars can offer brief yet clear insights into why and how decisions were made.

Facebook’s “Why Am I Seeing This Ad?” feature is a great example. It lets users click on an ad to see the targeting criteria used, such as demographics and interests, and offers options to adjust ad preferences, giving users more control and transparency. Implementing similar features can demystify AI decisions, reduce anxiety, and empower users by providing clear and accessible information about how AI operates and affects them.

Source: https://www.digitalassembly.agency/blog/social/spy-on-your-competitors-facebook-and-instagram-ads-with-facebook-ad-library/
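A minimal sketch of a similar "why am I seeing this?" surface: attach a structured explanation to each AI-driven decision and render it in plain language with a link to preference controls. The `DecisionExplanation` shape and the example values are assumptions for illustration, not Facebook's actual data model:

```typescript
// Hypothetical explanation payload attached to an AI-driven decision,
// e.g. an ad placement or a recommendation. The shape is an assumption.
interface DecisionExplanation {
  decisionId: string;
  summary: string;                              // one-sentence plain-language reason
  criteria: { label: string; value: string }[]; // factors behind the decision
  controlsUrl: string;                          // where the user adjusts preferences
}

// Render the explanation so a tooltip or sidebar can show it verbatim.
function renderExplanation(e: DecisionExplanation): string {
  const factors = e.criteria.map(c => `- ${c.label}: ${c.value}`).join("\n");
  return `${e.summary}\nBased on:\n${factors}\nAdjust your preferences: ${e.controlsUrl}`;
}

console.log(renderExplanation({
  decisionId: "ad-123",
  summary: "You are seeing this ad because it matches your interests.",
  criteria: [
    { label: "Age range", value: "25-34" },
    { label: "Interest", value: "Design tools" },
  ],
  controlsUrl: "https://example.com/ad-preferences", // placeholder URL
}));
```

The key design choice is pairing every explanation with a control: telling users why a decision was made reduces opacity, but letting them change the inputs is what restores the sense of control the anxiety stems from.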

Final thoughts

Understanding the roots of AI anxiety and implementing user-centered design solutions can mitigate these concerns and foster a more accepting and confident interaction with AI technologies. Ensuring transparency, addressing biases, involving users in the design process, and maintaining ethical standards are pivotal steps in creating AI systems people can trust and feel comfortable using.

I would love to hear your thoughts and ideas on improving AI interaction and reducing related anxieties. What features or approaches do you think could help reduce AI anxiety?

AI anxiety and how to design for it: Resources and best practices was originally published in UX Collective on Medium.
