Meta Announces ‘Video Seal’ Open-Source Tool to Watermark AI-Generated Videos

Meta is releasing a new tool that can add an invisible watermark to videos generated using artificial intelligence (AI). Dubbed Video Seal, the new tool joins the company's existing watermarking tools, Audio Seal and Watermark Anything. The company says the tool will be open-sourced; however, it has yet to publish the code. Interestingly, the company claims that the watermarking technique will not affect video quality while remaining resilient against common methods of removing watermarks from videos.

Deepfakes have flooded the Internet ever since the rise of generative AI. Deepfakes are synthetic content, usually generated using AI, that depicts false or misleading people, objects, or scenarios. Such content is often used to spread misinformation about public figures, create fake sexual content, or carry out fraud and scams.

Additionally, as AI systems improve, deepfake content will become harder to recognise, making it even more difficult to distinguish from real content. According to a McAfee survey, 70 percent of people already say they are not confident they can tell the difference between a real voice and an AI-generated one.

As per internal data from Sumsub, deepfake fraud increased by 1,740 percent in North America and by 1,530 percent in the Asia-Pacific region in 2022. The number was then found to have increased tenfold between 2022 and 2023.

As concerns about deepfakes rise, many companies developing AI models have started releasing watermarking tools that can distinguish synthetic content from real content. Earlier this year, Google released SynthID to watermark AI-generated text and videos, and Microsoft has released similar tools. The Coalition for Content Provenance and Authenticity (C2PA) is also working on new standards to identify AI-generated content.

Now, Meta has released its own Video Seal tool to watermark AI videos. The researchers highlight that the tool can watermark every frame of a video with an imperceptible tag that cannot be tampered with. It is said to be resilient against techniques such as blurring, cropping, and compression. Despite the added watermark, the researchers claim that video quality will not be compromised.
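Meta has not yet published Video Seal's code or technical details, so its exact method is unknown. Purely to illustrate the general idea of a keyed, imperceptible watermark applied to every frame, the sketch below (in Python) adds a faint pseudorandom pattern derived from a secret key to each frame and later detects it by correlation. This is not Meta's algorithm, and every name, parameter, and threshold here (SECRET_KEY, STRENGTH, the detection cut-off) is hypothetical; production systems such as Video Seal reportedly use learned models to stay robust against blurring, cropping, and re-encoding.

# Conceptual sketch of per-frame invisible watermarking -- NOT Meta's Video Seal.
# A keyed, low-amplitude pattern is added to every frame so it is invisible to
# viewers but statistically detectable later by anyone who holds the key.

import numpy as np

SECRET_KEY = 42    # hypothetical secret shared by embedder and detector
STRENGTH = 2.0     # embedding amplitude in 8-bit luminance units

def _pattern(shape):
    """Keyed pseudorandom +/-1 pattern, reproducible from the secret key."""
    rng = np.random.default_rng(SECRET_KEY)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(frame):
    """Add a faint watermark pattern to a grayscale frame (uint8, HxW)."""
    marked = frame.astype(np.float64) + STRENGTH * _pattern(frame.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(frame, threshold=0.5):
    """Correlate the frame against the keyed pattern to test for the mark."""
    pixels = frame.astype(np.float64)
    residual = pixels - pixels.mean()
    score = float(np.mean(residual * _pattern(frame.shape)))
    return score > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = [rng.integers(0, 256, (360, 640), dtype=np.uint8) for _ in range(5)]
    watermarked = [embed(f) for f in video]     # mark every frame
    print([detect(f) for f in watermarked])     # -> [True, True, True, True, True]
    print([detect(f) for f in video])           # -> [False, False, False, False, False]

A toy additive scheme like this is easy to break with the filtering and re-compression the article mentions, which is precisely why learned watermarking approaches are being developed.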

Meta has announced that Video Seal will be open-sourced under a permissive licence; however, it has yet to publicly release the tool and its codebase.