Researchers have developed a method for using AI chatbots like ChatGPT to carry encrypted messages that evade detection by cybersecurity systems. The technique, called EmbedderLLM, embeds secret messages at specific token positions in AI-generated text, so the result reads as ordinary, human-written content.
The EmbedderLLM system uses an algorithm to map the secret data onto high-probability token positions in the model's output, producing coherent sentences that conceal the encrypted message. The recipient then uses a private key to recover the hidden bits and reconstruct the original message.
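The article does not spell out EmbedderLLM's actual algorithm, but the general idea of hiding bits in token choices can be sketched as follows. The snippet below is a minimal, hypothetical illustration: it replaces the language model with a toy next-token table, uses an assumed pre-shared key, and hides one bit per generated word by letting the bit select among equally plausible continuations ranked by a keyed hash. None of the names (KEY, NEXT, embed, extract) come from the original work.

```python
import hmac
import hashlib

KEY = b"shared-secret-key"  # hypothetical pre-shared key known to both parties

# Toy next-token table standing in for an LLM: for each context word,
# several continuations that would all read as natural text.
NEXT = {
    "the":      ["weather", "meeting"],
    "weather":  ["today", "forecast"],
    "meeting":  ["today", "tomorrow"],
    "today":    ["looks", "seems"],
    "tomorrow": ["looks", "seems"],
    "forecast": ["looks", "seems"],
    "looks":    ["fine", "good"],
    "seems":    ["fine", "good"],
    "fine":     ["overall", "enough"],
    "good":     ["overall", "enough"],
}

def keyed_order(context, candidates):
    """Rank candidate tokens by a keyed hash, so only key holders know the mapping."""
    return sorted(
        candidates,
        key=lambda t: hmac.new(KEY, f"{context}|{t}".encode(), hashlib.sha256).digest(),
    )

def embed(bits, start="the"):
    """Hide one bit per generated token by choosing among plausible continuations."""
    words, current = [start], start
    for bit in bits:
        ranked = keyed_order(current, NEXT[current])
        current = ranked[bit]      # the secret bit selects which candidate is emitted
        words.append(current)
    return " ".join(words)

def extract(text, n_bits):
    """Recover the hidden bits by re-ranking candidates with the same key."""
    words = text.split()
    bits = []
    for i in range(n_bits):
        context, chosen = words[i], words[i + 1]
        ranked = keyed_order(context, NEXT[context])
        bits.append(ranked.index(chosen))
    return bits

secret = [1, 0, 1, 1, 0]
cover_text = embed(secret)
print(cover_text)                      # reads like an ordinary sentence fragment
print(extract(cover_text, len(secret)))  # -> [1, 0, 1, 1, 0]
```

Because every emitted word is a high-probability continuation, the cover text carries no statistical signature for a detector to flag, while anyone holding the key can replay the ranking and read the bits back out.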
This technology has potential applications for secure communication, particularly in situations where freedom of speech is restricted. For instance, journalists and citizens living under oppressive regimes could use this method to communicate secretly, bypassing speech restrictions.
However, the dual-use nature of this technology poses a risk of being exploited by malicious actors, such as criminals or spies, to exfiltrate data or plan illicit activities without detection. As a result, the research community will need to address the security and ethical implications of this technology, ensuring it is used responsibly.
The development of EmbedderLLM highlights the ongoing cat-and-mouse game between those who conceal communications and the cybersecurity systems built to detect them. As AI continues to evolve, new methods for secure communication are likely to emerge, along with new challenges for the systems meant to monitor them.