Disclosing AI Use Can Backfire, Research Shows

Transparency is usually key to building trust, but when it comes to generative artificial intelligence, being honest about using it can actually make people trust you less. According to research from the University of Arizona's Eller College of Management, disclosing AI use leads to a significant drop in trust across a wide range of tasks and evaluator groups.

This trust erosion persists even among people who are familiar with AI and use it frequently. Across studies involving more than 5,000 participants, revealing AI use consistently reduced trust. Specific examples include:

  • A 16% decline in trust when students learned a professor used AI for grading
  • An 18% decline when investors were informed that AI had been used in advertisements
  • A 20% decline when clients discovered their graphic designers had used AI

The researchers suggest the effect stems from the perception that using AI is a shortcut that devalues human effort. Softening the disclosure language or adding assurances of human oversight did not prevent the decline. Notably, getting "caught" using AI without having disclosed it erodes trust even more than voluntary disclosure does.

As AI becomes more widespread, organizations must decide whether to mandate disclosure, leave it optional, or work to normalize AI use. Cultivating a workplace culture in which AI use is seen as normal and legitimate may be the most effective way to blunt the trust penalty.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
