A new report reveals that some companies are secretly manipulating enterprise AI chatbots by embedding hidden instructions within “Summarize with AI” buttons on websites and in apps. While these summary features are intended to give users quick overviews of content, researchers at Microsoft found that the underlying code in certain implementations also injects biased prompts into an AI assistant’s memory, influencing how the chatbot responds to future queries in subtle but persistent ways.
This technique, known as AI recommendation poisoning, works by taking advantage of how many enterprise chatbots and AI assistants remember user preferences in order to personalise interactions. When a user clicks a summary button — often without realising anything beyond a syntactic overview is happening — the hidden prompt can tell the AI to favour a company’s products or viewpoints later. Unlike traditional prompt injection, this contamination persists across future conversations, making the influence harder to detect.
Microsoft’s research identified dozens of real‑world examples of this tactic deployed by companies in sectors like finance, legal services, healthcare, SaaS, and business services over a two‑month period, underscoring how widespread the practice has become. The method can be used not only for product marketing but also, potentially, to push falsehoods, biased advice, or commercial disinformation if left unchecked — raising concerns about trust and authenticity in enterprise AI use.
To defend against such manipulation, organisations and users are advised to treat AI assistant tools with the same security caution as they would suspicious downloads or links, and to audit an AI assistant’s stored preferences for anomalous entries. Enterprise administrators can also proactively scan for suspicious URL patterns in AI‑triggering links to prevent poisoned prompts from influencing AI behaviour.