In a bid to address copyright concerns among creators, OpenAI is trialling watermarked images for free users of its latest ChatGPT model, ChatGPT 4o, amidst rising scrutiny of AI-generated content.
OpenAI is currently trialling watermarks on images generated by its latest ChatGPT model, referred to as ChatGPT 4o. Evidence of the test was identified in the latest beta version of the ChatGPT Android application, and the change is expected to affect users on the free plan in particular.
Reports indicated that beta version 1.2025.0912509108 includes code referencing “image-gen-watermark-for-free”, suggesting that users on the free plan will soon see their generated images carry a watermark. The change is widely read as a response to the surge in use of ChatGPT 4o’s image-generation features, which have drawn significant attention in recent weeks, particularly for producing images reminiscent of the renowned Studio Ghibli animation style.
As noted by TechRadar, this is not the first time the ethical implications of AI-generated content have been called into question. The viral trend of producing Studio Ghibli-style imagery without watermarks has caused considerable concern among creators over copyright, and introducing watermarks for free users appears to be a proactive measure by OpenAI to address those concerns.
Users on the paid ChatGPT Plus subscription, priced at approximately $20 (around £16) per month, are expected to retain watermark-free image generation and to avoid the daily limits on how many images they can create. The move may attract more users to the subscription as the gap between the free and paid tiers becomes more pronounced.
The specifics of what the watermarks will look like remain undisclosed, but their size and visibility are likely to shape how users perceive and accept them. TechRadar’s reporting suggests that, depending on the nature of the watermark, it could play a crucial role in regulating AI-generated images circulating on social media platforms.
The recent changes to the ChatGPT platform also encompass other enhancements, such as a referral program for students at the Universidad Nacional de Colombia and updates to features for shared posts within the application. As the situation develops, the gaming, creative, and wider digital communities will be monitoring OpenAI’s moves closely in light of increasing scrutiny of AI’s use in artistic fields.
Source: Noah Wire Services
- https://www.bleepingcomputer.com/news/artificial-intelligence/openai-tests-watermarking-for-chatgpt-4o-image-generation-model/ – This article supports the claim that OpenAI is testing watermarks for images generated by its ChatGPT 4o model, particularly for free users. It notes that users can generate realistic visuals, such as Studio Ghibli-style images, but may soon see watermarks on these images if they are not ChatGPT Plus subscribers.
- https://www.indiatoday.in/technology/news/story/openai-may-soon-add-watermarks-to-chatgpt-image-generation-model-but-heres-a-catch-2705126-2025-04-07 – This article corroborates that OpenAI is testing watermarks on images generated by the ChatGPT 4o model, specifically mentioning that this could impact free-tier users. It also highlights the significant adoption of this feature and the potential for users to upgrade to a paid subscription to avoid watermarks.
- https://www.360om.agency/news-insights/openai-rolls-out-gpt-4o-powered-image-generation-in-chatgpt-everything-you-need-to-know – This article provides information on the capabilities of OpenAI’s GPT-4o model, which powers the image generation feature in ChatGPT. It highlights the model’s ability to create photorealistic images and notes the temporary limits imposed on free-tier users due to high demand.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC10311201/ – Although unrelated to OpenAI’s watermarking, this article provides broader context on the increasing importance of digital evidence, which indirectly shows how digital content, including AI-generated images, is becoming more central in various contexts.
- https://www.coloradojudicial.gov/sites/default/files/2024-06/COLJI-Crim%202017%20-%20Final.pdf – This legal document does not directly relate to OpenAI’s watermarking but indicates the role of digital technologies in legal contexts, suggesting how digital modifications like watermarks could impact legal frameworks surrounding intellectual property.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative seems current, focusing on recent developments in ChatGPT’s beta version and its implications. However, without specific dates or recent press releases confirming the watermarks, the freshness cannot be fully verified.
Quotes check
Score: 10
Notes: There are no direct quotes in the narrative, which means there’s no risk of quote misattribution or recycling.
Source reliability
Score: 8
Notes: The narrative originates from TechRadar, a reputable technology news outlet. However, the lack of diverse sources or corroboration from other major news outlets slightly reduces the reliability score.
Plausibility check
Score: 9
Notes: The claims about OpenAI testing watermarks on images are plausible, given the ethical and copyright concerns surrounding AI-generated content. However, without official confirmation from OpenAI, some aspects remain speculative.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The narrative appears well-informed and addresses current issues with AI-generated images. However, the lack of explicit confirmation from OpenAI and limited corroboration from other sources means that, while plausible, the story cannot be fully verified without additional information.