A new technique called EmbedderLLM uses AI chatbots like ChatGPT to secretly embed encrypted communications in seemingly innocent text, offering a potential lifeline for privacy advocates and activists amid growing government pressures to weaken digital encryption.
Advancements in artificial intelligence (AI), particularly through large language models (LLMs) like ChatGPT, are profoundly influencing modern communication. While these AI systems excel at generating human-like text and facilitating customer service, a recent study has spotlighted an unexpected use: employing AI chatbots to conceal encrypted messages inside ordinary-looking text. The method arrives at a crucial moment, as governments increasingly push to weaken encryption standards, making it vital to explore new avenues for secure communication.
The concept, detailed in a recent posting on arXiv, proposes a novel approach to covert communication by embedding encrypted messages within AI-generated text. As nations globally intensify their scrutiny of encrypted data, with some even advocating for backdoors in popular messaging platforms, the demand for innovative solutions becomes pressing. For instance, recent efforts by the UK government to compel Apple to create bypasses for encryption demonstrate the urgent need for secure messaging alternatives. Likewise, regulations in France aim to grant authorities greater access to encrypted communications, threatening individuals’ rights to private discourse.
In response to these pressures, researchers have developed a method dubbed “EmbedderLLM.” The technique is designed to intertwine encrypted messages seamlessly with text that appears completely innocuous. Just as invisible ink conceals a message until revealed, the method allows for hidden communication that remains undetectable by conventional cybersecurity measures. The crux of the innovation lies in the AI’s ability to choose its words carefully, interspersing characters from a secret message at calculated intervals. Where the AI struggles to find a suitable placement for a character, it can adjust its selections to preserve the natural flow of the text.
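To make the word-selection idea concrete, here is a minimal toy sketch in Python. It is emphatically not the paper’s actual algorithm: the real EmbedderLLM operates over an LLM’s token probabilities, whereas this sketch substitutes a fixed candidate list for the model, and every name in it (candidates_for, embed_message) is hypothetical.

```python
def candidates_for(step):
    """Stand-in for an LLM's ranked next-word candidates at one generation step."""
    base = ["the", "a", "some", "every", "one",
            "time", "day", "story", "letter", "truth"]
    return base[step % 3:] + base[:step % 3]   # rotate for a little variety

def embed_message(secret, steps):
    """Greedily pick a candidate whose first letter matches the next secret
    character; otherwise fall back to the top candidate so the text stays natural."""
    words, i = [], 0
    for step in range(steps):
        cands = candidates_for(step)
        chosen = cands[0]                       # default: the most "natural" word
        if i < len(secret):
            for w in cands:
                if w[0] == secret[i]:           # suitable placement found
                    chosen, i = w, i + 1
                    break
        words.append(chosen)
    return " ".join(words)

print(embed_message("salt", 12))
```

In a real scheme the recipient would know which positions carry payload characters, typically derived from a shared key, and could re-run the same selection rule over the received text to recover the hidden message.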
Mayank Raikwar, a prominent researcher at the University of Oslo, emphasised the importance of this technique for individuals in repressive regimes, stating that it provides “a safer way to communicate critical information without detection.” The adaptability of this method is noteworthy; it can function with any popular chatbot, thus ensuring accessibility across various platforms. Furthermore, the technique shows resilience against emerging technological threats, such as quantum computing, which poses significant challenges to traditional encryption methods. Experts like Yumin Xia, chief technology officer at Galxe, assert the feasibility of utilising LLMs for cryptography, suggesting that this approach could develop into a powerful tool in the ongoing battle for data privacy.
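One plausible reading of the quantum-resilience claim is layering: the embedding step never inspects the bytes it hides, so the encryption layer can be swapped for a post-quantum cipher without touching the steganographic layer. The sketch below illustrates that separation under stated assumptions; it uses Python’s cryptography package with Fernet (AES-based) purely as a placeholder cipher, which the source does not prescribe.

```python
# Sketch of the encrypt-then-embed layering. Fernet is illustrative only;
# because the embedding layer sees opaque bytes, a post-quantum cipher could
# replace it unchanged.  Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # shared out of band between parties
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet at dawn")  # layer 1: conventional encryption
payload = ciphertext.hex()                    # lowercase letters/digits are easy
                                              # for a word-choice embedder to hide

# Layer 2 would hide `payload` inside chatbot-generated text (see the earlier
# sketch). The recipient extracts the payload, then reverses layer 1:
recovered = cipher.decrypt(bytes.fromhex(payload))
assert recovered == b"meet at dawn"
```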
Despite these promising advancements, the researchers are approaching their discovery with caution. Recognising the potential for misuse inherent in any powerful technology, Raikwar stresses the importance of ethical considerations. The framework established by the team may serve numerous beneficial purposes, particularly for journalists and activists operating under restrictive regimes who need secure lines of communication to share vital information. As oppressive governance increases the scrutiny of secure channels, innovations like this could lead to safe pathways for free expression.
Nevertheless, experts believe that mainstream adoption of this technology may take time. Yumin Xia pointed out that although some governments are already imposing stringent limitations on encryption, the widespread implementation of this innovative method will hinge on public demand and societal acceptance. The current research stands as a fascinating exploration into hypothetical applications, yet significant real-world obstacles remain.
Looking ahead, the capacity to embed hidden messages within AI-generated text could revolutionise secure communication. This represents a significant milestone in the evolution of AI, not simply as a convenience tool but as a potent defender of privacy. As authoritarian influences continue to challenge traditional methods of encryption, solutions like EmbedderLLM may prove pivotal in safeguarding freedom of expression and personal security.
In a world where data privacy is increasingly under threat, the quest for innovative, secure communication methods has never been more critical; techniques such as EmbedderLLM bring a promising new dimension to the use of artificial intelligence in our daily lives.
Reference Map:
- Paragraph 1 – [1]
- Paragraph 2 – [1], [2], [3]
- Paragraph 3 – [4], [5]
- Paragraph 4 – [6], [7]
- Paragraph 5 – [1], [2]
Source: Noah Wire Services
- https://www.aol.com/ai-chatbots-hide-secret-messages-160700725.html – Please view link – unable to access data
- https://arxiv.org/abs/2308.06463 – This study introduces CipherChat, a framework that evaluates the vulnerability of Large Language Models (LLMs) to cipher-based inputs, revealing that such inputs can bypass LLMs’ safety alignment techniques. The research also presents SelfCipher, a role-play-based method that outperforms existing human ciphers in evading safety measures, highlighting the need for enhanced safety protocols in LLMs.
- https://arxiv.org/abs/2407.08792 – ProxyGPT is a privacy-enhancing system that enables anonymous queries in AI chatbots by leveraging volunteer proxies to submit user queries on their behalf. The system ensures content integrity through TLS-backed data provenance, end-to-end encryption, and anonymous payment, offering users greater privacy compared to traditional AI chatbots, especially in scenarios where users are hesitant to share their identity with chatbot providers.
- https://arxiv.org/abs/2402.05868 – EmojiCrypt is a mechanism designed to protect user privacy by encrypting inputs to cloud-based large language models (LLMs) using emojis. This approach effectively renders sensitive data indecipherable to both humans and LLMs while maintaining the original intent of the prompt, ensuring that the model’s performance remains unaffected and even improves in certain tasks.
- https://arxiv.org/abs/2504.08871 – This research proposes a novel cryptographic embedding framework that enables covert Public Key or Symmetric Key encrypted communication over public chat channels using human-like text generated by Large Language Models (LLMs). The framework is LLM agnostic, pre- or post-quantum agnostic, and ensures indistinguishability from human-like chat-produced texts, offering a viable alternative where traditional encryption is detectable and restricted.
- https://arstechnica.com/security/2024/03/hackers-can-read-private-ai-assistant-chats-even-though-theyre-encrypted/ – Researchers have discovered that even when AI assistant chats are encrypted, attackers can infer sensitive information by analyzing the size and sequence of tokens transmitted during real-time communication. This side-channel attack exploits the token-length sequence to breach the privacy of conversations, revealing the lengths of every AI response and potentially deducing the prompts themselves.
- https://www.wired.com/story/google-artificial-intelligence-encryption/ – Google Brain developed neural networks that taught themselves to encrypt and decrypt messages without prior knowledge of cryptographic algorithms. In a study, three neural networks—Alice, Bob, and Eve—engaged in a game where Alice and Bob communicated securely, while Eve attempted to decrypt the messages. Over time, Alice and Bob developed increasingly sophisticated encryption methods, demonstrating the potential of AI in creating secure communication systems.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative introduces the ‘EmbedderLLM’ technique for embedding encrypted messages within AI-generated text. The concept aligns with recent research, notably the paper ‘Large Language Models as Carriers of Hidden Messages’ published on June 4, 2024, which is also the earliest known publication date of similar content. The report appears to be original, with no evidence of prior publication or significant recycling. The inclusion of updated material, such as the May 6, 2025 paper ‘The Steganographic Potentials of Language Models’, supports a high freshness score, as does the report’s reliance on a recent arXiv preprint, since preprints typically carry the most current information. No discrepancies in figures, dates, or quotes were identified; the report does not appear to be republished across low-quality sites or clickbait networks; and no earlier versions show different figures, dates, or quotes. The recent updates justify the higher freshness score but should still be flagged.
Quotes check
Score: 9
Notes: The report includes direct quotes from Mayank Raikwar and Yumin Xia. A search reveals that these quotes are unique to this report, with no identical matches found in earlier material. This suggests that the quotes are original or exclusive content. No variations in quote wording were identified.
Source reliability
Score: 7
Notes: The narrative originates from a reputable organisation, AOL, which is known for its journalistic standards. The report references a preprint hosted on arXiv, a well-established repository for academic papers. The individuals mentioned, Mayank Raikwar and Yumin Xia, are associated with reputable institutions: the University of Oslo and Galxe, respectively. However, the report does not provide direct links to their professional profiles or publications, which would enhance verifiability; this lack of direct verification links introduces slight uncertainty regarding source reliability.
Plausibility check
Score: 8
Notes: The report discusses the ‘EmbedderLLM’ technique, which aligns with recent research on embedding hidden messages within AI-generated text. The claims are plausible and supported by references to recent academic papers, although the lack of supporting detail from other reputable outlets is a concern. The language and tone are consistent with the region and topic, the structure is focused and relevant without excessive or off-topic detail, and the tone is appropriately formal, resembling typical corporate or official language.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The report introduces the ‘EmbedderLLM’ technique, aligning with recent research on embedding hidden messages within AI-generated text. While the quotes are original and the narrative is plausible, the lack of supporting detail from other reputable outlets and the absence of direct verification links to the individuals mentioned introduce uncertainties. Therefore, the overall assessment is ‘OPEN’ with a medium confidence level.