In October, Meta unveiled Movie Gen, its latest generative AI model that can create realistic video clips from simple user prompts. Movie Gen is Meta’s third major venture into generative AI, following its Make-A-Scene and Llama Image foundation models. With the new tool, users can describe the scene they want and generate videos up to 16 seconds long, along with audio clips of up to 45 seconds.
Meta founder and CEO Mark Zuckerberg showcased the model’s capabilities with an intriguing video of himself leg-pressing chicken nuggets and working out in a neon-lit gym. “Every day is leg day with Meta’s new MovieGen AI model that can create and edit videos. Coming to Instagram next year,” Zuckerberg said in an Instagram post.
The company has been investing millions into building an AI-powered ecosystem. Speaking on the Joe Rogan Experience podcast, Zuckerberg recently revealed that Meta plans to replace mid-level software engineers with AI by 2025, marking a significant shift in the company’s approach to software development. He also announced that Meta will transition from its third-party fact-checking system to a community notes model, inspired by the approach used on Elon Musk’s X platform.
However, not every AI initiative by Meta has been well received by the public. The company recently removed AI-generated character accounts from Facebook and Instagram following significant backlash. Critics described the accounts as “creepy” and “unnecessary,” with many users disturbed by their lifelike nature and potential to spread misinformation or facilitate harmful interactions. Meta itself attributed the removal to a technical issue that made it difficult for users to block the accounts. The company clarified that these AI characters were part of an experimental phase to test the integration of AI-generated profiles on its platforms. (Meta declined to comment for this story.)
Meta describes Movie Gen as a collection of artificial intelligence models, the largest of which is the 30B parameter text-to-video model. The model can generate lifelike videos with synchronized audio, offering a complete multimedia experience. The development comes at a time when demand for AI-generated dynamic image and video content is skyrocketing. But Meta isn’t the only player in the video generative AI space. OpenAI’s Sora and Google’s Veo, both still in development, each promise their own features and applications for video creation.
For instance, OpenAI’s Sora can generate videos up to a minute long, a significant leap from Movie Gen’s 16-second limit. Likewise, Google’s Veo offers nuanced creative control. The AI can create high-resolution videos with cinematic effects, like time lapses or aerial shots of a landscape. While Sora isn’t available to the public yet, Google’s Veo has already been introduced to select creators.
Democratizing Human Creativity or Diluting Art?
In a research paper published on the company’s site, Meta asserts that Movie Gen outperforms rivals including OpenAI’s Sora, Runway Gen 3, and Chinese AI-video model Kling in terms of overall video quality, video consistency, motion naturalness, and realness.
“Movie Gen outperforms similar models in the industry across tasks (image, audio, video, and 3D animation) when evaluated by humans,” Meta said in a blog post. “Positive net win rates correspond to humans preferring the results of our model against competing industry models.”
While Meta is positioning Movie Gen as a tool to democratize video production, allowing those without traditional skills or resources to express themselves, the development may lead to an oversaturation of low-quality AI-generated content. Over the past few years, there has been growing concern that AI-generated videos might overshadow human creativity, making it harder to preserve the unique artistic visions that come from individuals. Filmmakers, photographers, and artists are particularly worried that the rise of generative AI tools could impact their livelihoods.
“With or without AI—I don’t think the viewer cares,” says Sanket Shah, founder and CEO of AI-powered video creation platform Invideo. “AI is just another tool at our disposal now, which can help creators extract value faster, without having deep resources. Within the next two years, there will be an AI workflow or use of AI in the majority of videos created in the world.” (In a blog post, Meta said that Movie Gen is not a replacement for human creators but a tool to enhance their creativity.)
And while Hollywood writers triumphed in 2023 in their long-fought battle to impose more guardrails on AI’s use in the entertainment industry, many saw that achievement as a stop-gap—not a full resolution.
Meta’s Strategy to Combat AI-Generated Misinformation
According to Forrester’s 2024 State of Generative AI Inside US Agencies report, 83% of U.S.-based agency executives are worried about legal issues, like copyright infringement, when using AI-generated content. Meta has been cautious about the potential misuse of AI-generated content. The company says it has implemented safeguards to address security concerns, including adding visible and invisible watermarks to AI videos generated through Movie Gen. Meta said these measures aim to prevent the spread of misleading or harmful content, a critical issue given the rise of deepfakes and AI-driven misinformation.
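Meta has not published the details of its watermarking pipeline, but the basic idea behind a visible watermark is straightforward. The sketch below is a minimal illustration of that general concept, not Meta’s implementation: it burns a text label into every frame of a clip using OpenCV, and the file names and label text are hypothetical.

# Minimal sketch: burn a visible "AI generated" label into every frame of a clip.
# Illustrative only -- Meta's actual visible and invisible watermarking methods are not public.
import cv2

def add_visible_watermark(src_path: str, dst_path: str, label: str = "AI generated") -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Draw the label in the bottom-left corner of each frame.
        cv2.putText(frame, label, (20, height - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        writer.write(frame)

    cap.release()
    writer.release()

# Hypothetical usage:
# add_visible_watermark("movie_gen_clip.mp4", "movie_gen_clip_watermarked.mp4")

Invisible watermarks work differently, embedding a signal in the pixel or audio data that is imperceptible to viewers but detectable by dedicated tooling; Meta has not disclosed how its version is implemented.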
Other AI providers too are taking steps to reduce the risk of legal exposure from the data that trains diffusion models and the content generated by AI tools. For instance, Microsoft has expanded its indemnity policy for its Copilot products, Getty Images offers “uncapped indemnification” for customers using its generative AI solution, and Google’s Vertex AI Studio provides indemnity for both the training data and the generated outputs.
However, Meta has been somewhat vague about the data used to train Movie Gen, stating only that the model was trained on a mix of “licensed and publicly available datasets.” The company hasn’t fully disclosed its sources or provided further details, fueling speculation that it may have drawn on millions of web-scraped Instagram and Facebook videos.
“AI-generated content has gone beyond what we can comprehend as real or imitation or parody. What’s clear is that there is AI-generated content being used to be intentionally deceptive. At a minimum, we need to have measures and systems in place that provide context on AI-generated videos, images and its data sources,” says Kevin Guo, CEO and co-founder of generative AI content detection platform Hive.
Meta’s approach underscores a broader industry challenge: balancing AI innovation with ethical and legal accountability. As the AI arms race accelerates, the success of future AI initiatives seems to depend on enforcing transparency around training data and safeguards—both critical to earning public trust. Ultimately, tech leaders must ensure these advancements act as a force for good, rather than a source of harm.