Jan 31, 2025

AI transparency framework

Written by Josh LaMar

Deciding when and how to disclose our use of AI-based tools

Photo by Volodymyr Hryshchenko on Unsplash

As AI continues to automate more tasks, it reshapes how we work and create. With tools like ChatGPT and Claude, content creation has become faster and easier, but this shift raises two important questions:

- At what point does AI enhancement turn into dependence?
- Should we disclose our use of AI — and if so, does it signal intelligence or laziness? Going further: What does it signal about the veracity of the content and even the ethics of the author?

AI is playing an ever-greater role in everything from writing content and generating images to composing poetry and helping us design and develop new products and services. As it supplements human creativity, we must also ask ourselves: What role do inspiration and originality still play in an AI-assisted world?

With the great power AI brings comes great responsibility. As practitioners, we must embrace transparency about the role AI-based tools play in our work: not as a judgement, but as a commitment to intentional and ethical use. It also allows readers to make informed choices about how to interpret the content.

Philosophy of disclosure: why transparency matters

Why does transparency matter? Should we disclose our use of AI in the first place? If so, when and how should it be disclosed?

Transparency matters because it ensures trust between the author and the reader. It’s often an unspoken ethical contract: I sign my name on this article and you, the reader, believe me when I say that I wrote it (I did). But AI disrupts this assumption.

Definitions of authorship

Throughout history, technological advancements (from the printing press to photography to automation) have challenged what counts as originality and authorship.

- The printing press disrupted the oral tradition and what counts as storytelling, but writers still took credit for books.
- Digital photography replaced film, but professional photographers still claim authorship.

AI-generated content raises a new question: If AI co-creates, how should credit be assigned?

When a human writes entirely on their own, authorship is clear. Best practices dictate that ideas and quotes should be cited and referenced as per the style guide you use. But what happens when you add AI to the mix?

- If AI assists with background research, is it like using a search engine? Should ChatGPT be cited as a source?
- If AI suggests ideas that I develop further, is the final product still “mine”?
- If I copy/paste AI-generated text into my article, am I still the primary author? Or does the AI-based tool at some point take over for doing most of the work? At what percentage does the work become “not mine”?
- What about ownership? Who owns the work — who retains the copyright? Does the AI tool own the work, or does the human who prompted it retain ownership?

Transparency isn’t just an ethical practice; it’s a question of intellectual honesty. In academia, plagiarism can have serious negative consequences. If we use AI to assist in our work, paste its output into a piece, and pass it off as our own, are we committing plagiarism? (See Is Using Artificial Intelligence Plagiarism? for more discussion on this.)

What about ghostwriting? If someone else writes a book but you put your name on it, did you really write it? And are you ethically able to put your name on it? Isn’t that misleading?

Extending this same scenario to AI: if your use of AI significantly shaped the outcome of your work and you don’t disclose it, isn’t that equally misleading? At what point does this become deception?

Misrepresentation and deception

Right now, some AI-generated content is still obvious — I see examples in my spam inbox daily. But as AI develops, what happens when we can no longer tell the difference? Have you ever been fooled by AI? Is it ok to be deceived?

At what point does AI augmentation become deception?

Without transparency, AI-created works could mislead audiences, erode credibility, and diminish the perceived value of human effort. Just as we cite sources in academic research or disclose conflicts of interest, disclosing AI’s role allows audiences to engage with content in a fully informed and ethical way.

Consider the rise of AI influencers — essentially fake personalities that comment on posts, promote brands, and “share” opinions. If AI-generated influencers start endorsing products, don’t you want to know what’s happening behind the scenes? I know I do.

Transparency is not just about ethics — it’s about intellectual integrity. The consequences of failing to disclose AI use are still unfolding; as far as I’ve seen, almost no one is disclosing their use of AI, though I later found this piece about an AI transparency statement by Kester Brewin. Misrepresentation — whether intentional or not — could devalue both AI-generated and human-created content, in addition to eroding trust. If anything, it’s certainly adding a lot of spam to my inbox!

By integrating disclosure into our workflows, we shape the norms around AI use now rather than reacting to them later. The AI Transparency Framework offers a practical approach to make this happen.

The AI Transparency Framework Illustrated

Applying the framework to content creation

I believe human/AI collaboration is a 1+1=3. We create better products faster by taking advantage of AI-based tools responsibly and ethically. The next step is to be more transparent about our use.

To demonstrate how the framework applies, let’s use AI-assisted content creation as an example. The level of AI involvement assigned will depend on the kind of content, the context, the goal, and most importantly, how AI was used (or not) throughout the process.

Before going deeper into the scale itself, a few clarifications about the framework are worth noting.

Speed and efficiency

AI increases speed and efficiency as you move up the scale from Level 1 (no AI) to Level 5 (fully AI-driven). For example, in academia, Level 1 (fully human-driven) may currently be the only acceptable level of AI use, but this expectation will likely evolve, much like the once-rigid rules around using the internet for research.

Doing things “the old way,” the way we did them before the introduction of AI, will take longer. But the result will have an authenticity and a humanness to it that, at least for now, AI has a difficult time achieving. At the same time, as AI progresses rapidly, it will become harder and harder to tell whether something is AI-generated. This is where transparency about the use of AI can help us better identify and acknowledge its use.

Appropriate use

Fully automated tools categorized at Level 5 are still valid — but I argue that the use of AI should be openly disclosed. It’s about being transparent about how AI-based tools are being used, not judging whether it’s an appropriate use.

Of course, there will be much discussion about “appropriate use”; my hope is that this transparency framework will give us a set of tools to have that discussion in a more open and constructive way.

Visual representation

The scale is visually represented with a rainbow gradient, reflecting nuance in AI use. However, I do not encourage interpreting these colors as value judgments.

For instance, red typically signals danger, so I have chosen orange instead of red for Level 5 to help avoid this interpretation. Level 5 is not “bad”; orange simply better emphasizes the caution needed when fully outsourcing intellectual work to AI. And we should exercise caution here.

In contrast, green for Level 3, representing collaboration, highlights how human-AI partnerships can enhance creativity and output when done responsibly and ethically. This doesn’t mean green is “good,” “the best,” or even “optimal”; it just refers to a balanced use of AI to support the process. An equilibrium.

Disclosing levels of AI involvement

Using content creation as the example application of the framework, each level is discussed below along with its benefits and risks.

Level 1: Fully human-driven; No use of AI

AI is not used in any capacity for the creation of this work. The benefits of Level 1 include complete originality and unassisted creativity. However, it may sacrifice efficiency and miss opportunities to gain deeper insights or broader perspectives that AI tools can provide. This could result in slower workflows or less effective solutions (How generative AI can boost highly skilled workers’ productivity).

My hunch is that in the future, this is the realm of pure artists, writers, poets, and academics (I know, even that word sounds like a judgement). Maybe we need some new terms here. Just as the internet sped up access to information, I imagine AI will move much of what currently happens at Level 1 into AI-assisted Level 2 or balanced-use Level 3.

Level 2: Human-centric creation; AI-supported research

Here, AI is used only for minor background research or reference checks, with humans retaining control of content, ideas, and critical thinking. This level allows AI to support human creativity passively, but over-reliance on AI research can lead to oversimplification, missed nuances, or a failure to leverage AI’s organizational strengths.

Level 3: Balanced collaboration between humans and AI

Level 3 represents a partnership between humans and AI that enhances efficiency while preserving originality.

AI supports research, organizes ideas, and clarifies content while humans drive critical thinking, creativity, and intent. Humans remain responsible for ethical oversight and ensuring transparency. Additional benefits include:

- Creativity: AI supports innovation and creativity by organizing complex ideas and offering clarity, but humans retain control over originality and vision.
- Avoiding bias: Balances the power of AI with human judgment to avoid bias, misrepresentation, and ethical pitfalls.

Level 4: AI-driven content; Human oversight

AI takes on a significant role in research, development, and some critical thinking, with humans providing oversight through review and refinement.

This greatly increases efficiency and scalability, reducing the time required to perform certain tasks. However, an over-reliance on AI risks diminishing human creativity and contextual understanding. Limited oversight can also allow unverified biases or inaccuracies to slip through if not caught in a human review.

Currently, I see a lot of marketing automation that falls into this bucket and, in less-effective implementations, into Level 5.

Level 5: Fully AI-driven; Minimal human oversight

AI does almost 100% of the work, from research to ideation, with minimal human input. This maximizes speed and volume, allowing for rapid content generation or decision-making.

However, it raises concerns about originality, accountability, and trust. Outputs may perpetuate bias, inaccuracies, or ethical lapses, and presenting AI work as human-generated risks eroding integrity.

When you see something that’s obviously AI-generated, irrelevant, and tone-deaf, perhaps you’re like me and you completely ignore it. Generating sales emails might allow you to spam millions of people, but the backlash is coming, and authenticity will become more and more valued in the future.
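
To make this kind of disclosure easy to attach to a piece of work, here is a minimal sketch, in Python, of how the five levels might be encoded as structured metadata and rendered as a one-line disclosure statement. The AIInvolvement enum, the level descriptions, and the disclosure() helper are illustrative assumptions, not an existing library or standard.

```python
from enum import IntEnum


class AIInvolvement(IntEnum):
    """Hypothetical encoding of the five AI transparency levels."""
    FULLY_HUMAN = 1                # Level 1: no AI used at any stage
    AI_SUPPORTED_RESEARCH = 2      # Level 2: AI for minor research or reference checks
    BALANCED_COLLABORATION = 3     # Level 3: human-AI partnership
    AI_DRIVEN_HUMAN_OVERSIGHT = 4  # Level 4: AI leads, humans review and refine
    FULLY_AI_DRIVEN = 5            # Level 5: minimal human oversight


DESCRIPTIONS = {
    AIInvolvement.FULLY_HUMAN: "fully human-driven; no use of AI",
    AIInvolvement.AI_SUPPORTED_RESEARCH: "human-centric creation; AI-supported research",
    AIInvolvement.BALANCED_COLLABORATION: "balanced collaboration between humans and AI",
    AIInvolvement.AI_DRIVEN_HUMAN_OVERSIGHT: "AI-driven content; human oversight",
    AIInvolvement.FULLY_AI_DRIVEN: "fully AI-driven; minimal human oversight",
}


def disclosure(level: AIInvolvement, notes: str = "") -> str:
    """Render a one-line disclosure statement for a piece of work."""
    statement = f"AI Transparency Level {int(level)}: {DESCRIPTIONS[level]}."
    return f"{statement} {notes}".strip()


# Example: disclosing a fully automated marketing email (Level 5).
print(disclosure(AIInvolvement.FULLY_AI_DRIVEN,
                 "Generated end-to-end by an LLM; reviewed only for send errors."))
```

Attaching a statement like this to an article, a deck, or a design file keeps the disclosure consistent without judging which level is “right.”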

Unintended implications

It’s important to acknowledge that being more transparent about our use of AI may have some unintended consequences.

Risk of normalizing over-reliance on AI

As AI-based tools evolve, it’s easy to start defaulting to their use. While they may help in many ways, they don’t replace the cognitive work of a human brain.

Just recently, my friend Sam Ladner shared a study showing a strong negative correlation between cognitive offloading and critical thinking.

Cognitive offloading, image from Sam Ladner; Full study

That’s right, over-reliance on AI can reduce our critical thinking!

We must not depend on AI-based tools; we must integrate them thoughtfully. AI should enhance our capabilities, not replace the cognitive effort required to come up with new ideas, develop intellection, and hone our craft.

While I discuss Level 3 as a collaboration and a balanced use of AI, I should also caution against habitually or “by default” using AI to replace our human cognitive thinking (or “cognitive offloading,” as the study calls it), even at the balanced use of Level 3.

If human-AI collaboration is seen as an ideal, we may end up disincentivizing people from pushing their own cognitive and creative boundaries. And then we all miss out on the beauty and brilliance that comes from deep work.

Risk of perception shifts

Transparency is not judgement in and of itself. However, providing additional transparency about the use of AI in a work does not prevent audiences from judging the content differently because of that disclosure.

When we engage with content (or a product), we make assumptions about its origin and how we should interpret it. When we disclose that content was produced with AI in some form, it begins to challenge traditional ideas of creativity and what a creative output is or should be like.

If you know an article was created with the assistance of AI, how would you perceive it? What if that article was fully AI-generated? What if it was fully human-generated? How does your response change in each of these scenarios?

Some potential responses include:

- Resetting expectations — of what the content is and should be doing… or just how we should approach interpreting it in the first place.
- Engaging differently — how might we read or engage with it differently because we know that AI was involved? Are we more or less critical of the article that was human-driven or the one that was AI-driven?
- Devaluing the article based on greater use of AI — for example, treating Level 1 articles as inherently better or more valuable than Level 5 articles.
- Eroding trust — if the article used the assistance of AI, can it still be trusted, or should we be more critical because AI tends to oversimplify and introduce bias?

There’s an interesting parallel with food labeling: if you know that a product was genetically modified, does that change your perception of the product? Do you make a different purchasing decision? Is it better to know that the product is genetically modified (GMO)? Should we label genetically modified food? How should it be labeled?

With food labeling as a parallel, perhaps you’re thinking, “Of course we should label genetically modified food!”

Are you also thinking, “Of course we should label AI-assisted involvement in our products and services!”?

Your response to these two questions is worth reflecting on.

Additional applications of the framework

This framework applies beyond content creation. It can be applied across other disciplines and industries to guide transparent AI use.

In the examples below, I’ll define Levels 1, 3, and 5 separately. In each case, however:

- Level 2 can be defined as “mostly human with some AI.”
- Level 3 is a balanced integration of the two ends of the scale.
- Level 4 can be defined as “mostly AI with some human oversight.”

Product development

- Level 1: Manual prototyping without AI involvement.
- Level 3: AI assists in analyzing user feedback or generating design variants, with humans making final decisions.
- Level 5: The entire design process is AI-automated, with minimal human input.
- Reflection: Are we using AI to complement human creativity, or automating decisions that require empathy and context?

UX Research

- Level 1: Data collection and analysis are entirely manual and human-driven. AI was not used to assist in the process at all.
- Level 3: AI identifies trends, but researchers interpret findings to ensure meaningful insights.
- Level 5: AI conducts research and analysis without human involvement, risking oversimplification of user experiences.
- Reflection: Is AI helping uncover meaningful insights, or oversimplifying complex user experiences?

Decision-making

- Level 1: Decisions rely solely on human expertise.
- Level 3: AI provides insights or simulations, but humans retain decision-making authority.
- Level 5: Decisions are fully automated by AI, potentially compromising empathy and ethical judgment.
- Reflection: Are we using AI insights to enhance decisions, or surrendering judgment to the algorithm?

Hiring & HR decisions

- Should AI-assisted hiring disclose when a résumé was screened by an algorithm? Are AI-assisted screening processes biased?
- By contrast, should candidates disclose if AI helped them write their application? Is a grammar check the same as using an LLM?

Legal & Healthcare fields

- Should doctors and lawyers disclose AI-assisted decisions?
- If an AI-assisted medical diagnosis is given, should patients know exactly what role AI played?

From hiring decisions to legal opinions, AI-driven processes are changing industries. Applying this framework in fields like HR, healthcare, and law can ensure informed decision-making and ethical accountability.

It is my hope that we can build out our thinking about transparent use of AI together over time and that the Transparency Framework for the levels of AI involvement can become part of a new discourse on our intentional use of AI.

Encouraging reflection

Finally, the framework can also be used to reflect on one or many of the following areas:

- General reflection: Does this level of AI use enhance or compromise the quality and integrity of the work?
- Intentionality: Am I being intentional about my use of AI, or am I defaulting to a certain level of use?
- Creativity: Is AI amplifying creativity or stifling originality?
- Learning and growth: How does the use of AI impact my skill development and craftsmanship?
- Inclusivity and bias: Is AI helping to amplify diverse perspectives, or reinforcing harmful generalizations?
- Accountability: Can I clearly articulate which parts of the work were AI-generated versus human-driven?
- Human flourishing: Does the use of AI affect my personal sense of purpose, fulfillment, and meaning?

The goal of transparency

AI-based tools are already shaping our present reality. The question is not whether we use AI, but how we disclose, govern, and apply it responsibly.

Transparency is not about restriction — it is about trust. By adopting AI transparency as a standard, we shape the ethical and philosophical foundation of how AI integrates into our work, our industries, and our daily lives. This is our opportunity to lead, before regulation forces its own outcome.

This approach doesn’t aim to dictate a “correct” level of AI involvement but instead encourages openness about the choices made in the creative and decision-making process.

If your reflection raises concerns, reconsider whether AI is being used responsibly and intentionally… and respond accordingly.

As we evolve with AI, our goal should always be to ensure it works for us, enhances human flourishing, and promotes purpose, creativity, and equity for all.

Full circle with full disclosure

I believe this article represents AI Transparency Level 3. The framework is wholly mine, but I used ChatGPT to refine some wording and perform some background research to think through the implications and ensure I haven’t missed anything important. I also had feedback and help from my colleague Katie Trocin.
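
Expressed with the illustrative disclosure() sketch from earlier (again, a hypothetical helper rather than an existing tool), that statement might look like this:

```python
# Hypothetical: reuses the AIInvolvement enum and disclosure() sketch shown earlier.
print(disclosure(
    AIInvolvement.BALANCED_COLLABORATION,
    "Framework and argument are the author's; ChatGPT refined wording and "
    "assisted with background research; a colleague provided feedback.",
))
# Prints: AI Transparency Level 3: balanced collaboration between humans and AI. Framework and ...
```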

Learn more

This article doesn’t exist in isolation but is the natural outcome of two other recently published articles:

- Human flourishing in the age of AI — if you are looking for reference articles and sources cited, this is the place to start. With an extensive appendix and sources cited throughout, this article is a treasure trove of references and a synthesis of current thinking on human flourishing and AI.
- The human-centered AI manifesto — based on the initial human flourishing article, the manifesto summarizes principles of human-centered AI and commitments to action. If you are so moved, there’s a link to a change.org petition you can sign to show your support and stand up for the responsible design and development of human-centered AI-based products and services.

Again a huge thank you to my colleague Katie Trocin for the lit review, feedback, and discussion that improved this article.

Josh LaMar is the Co-Founder and CEO of Amplinate, an international agency focusing on cross-cultural Research & Design. As the Chief Strategy Officer of JoshLaMar Consult, he helps Entrepreneurs grow their business through ethical competitive advantage.

AI transparency framework was originally published in UX Collective on Medium.
