AI Creative Tools Go Agentic — Four Platforms Ship Production Agents in March
In March 2026, Adobe, Luma AI, Google, and Moments Lab each shipped AI agents designed to orchestrate full creative productions rather than generate single assets. The launches arrived alongside unresolved copyright questions — the US Supreme Court let stand a ruling that purely AI-generated work cannot be copyrighted, and the UK scrapped its planned AI copyright exception.
Written by Stephanie Goodman
Last updated: May 5, 2026
On March 5, Luma AI launched Luma Agents — an AI system that coordinates text, image, video, and audio generation inside a single workflow. Rather than generating a single asset from a prompt, the system plans creative briefs, selects the right model for each task, generates outputs, evaluates them against the original brief, and iterates. Publicis Groupe and Serviceplan Group are already deploying it across strategy, creative development, and production.
Within three weeks, Adobe, Google, and Moments Lab made parallel moves. Each shipped AI creative tools designed to orchestrate full productions across media types. The shift from single-prompt generators to multi-step production agents arrived fast, and from several directions at once.
Four Agentic Launches, Three Weeks
Luma Agents runs on the company's Unified Intelligence architecture, powered by its Uni-1 model. It coordinates across Ray 3.14, Google Veo 3, ByteDance Seedream, and ElevenLabs voice models, maintaining persistent context from initial brief through final delivery with built-in self-critique and iteration. Available via API, it plugs into existing production pipelines rather than replacing them.
Adobe made two related announcements in the same week. On March 16, the company revealed a strategic partnership with NVIDIA aimed at building agentic creative workflows. The partnership integrates NVIDIA's Agent Toolkit and Nemotron with Adobe's suite — Photoshop, Premiere Pro, Express, Frame.io, and GenStudio. Adobe CEO Shantanu Narayen described the collaboration as an effort to "reinvent creative and marketing workflows with the power of AI." NVIDIA CEO Jensen Huang framed it as the latest chapter in a two-decade partnership "to push the boundaries of design and creativity."
Three days later, Adobe expanded Firefly with over thirty integrated models, including Runway Gen-4.5, Google Veo 3.1, and Kling 2.5 Turbo. The company also opened public beta for custom models that let studios train Firefly on their own visual styles — preserving stroke weight, color palettes, lighting, and character features from proprietary assets. Custom models stay private by default; the studio retains full ownership. The most significant development, though, was the expansion of Project Moonlight, a conversational AI assistant now in private beta. Moonlight accepts natural-language descriptions of creative goals and executes across applications. Its Quick Cut feature converts raw footage into structured first cuts within minutes. Where Firefly generates individual assets, Moonlight coordinates them into production workflows.
Google launched Lyria 3 Pro on March 25, a music generation model that produces tracks up to three minutes long — a meaningful step from the thirty-second clips of its predecessor. The model understands track structure: intros, verses, choruses, bridges. All outputs carry SynthID watermarks, and Google trained the model exclusively on partner and permissible data. AI music production has historically been limited to loops and short fragments; Lyria 3 Pro is the first major-platform model to generate structured, full-length tracks. It's available across Gemini, Google Vids, ProducerAI, and Vertex AI.
Moments Lab, headquartered outside Paris, announced its Discovery Agent ahead of the NAB Show in April. The agent lets editors search entire video libraries through conversational prompts, returning relevant clips, quotes, and specific moments. Native panels for Avid Media Composer and DaVinci Resolve allow editors to search and preview AI-indexed footage directly inside their editing software, eliminating manual EDL workflows. CEO Phil Petitpont called the tool "life-changing" for creative teams, citing rapid adoption. The company also announced an agentic ecosystem built on Agent-to-Agent and Model Context Protocol standards, connecting its platform to broader AI infrastructure.
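At its core, conversational library search of this kind reduces to querying an index of clip-level metadata. The sketch below is a minimal keyword-overlap illustration, not Moments Lab's implementation: the index schema, field names, and scoring are assumptions, and a production system would use embeddings and timecoded transcripts rather than word matching.

```python
# Hypothetical sketch of conversational search over an AI-indexed footage
# library. Schema and scoring are illustrative assumptions only.

LIBRARY = [
    {"clip": "intv_014.mov", "t": "00:02:10", "desc": "ceo discusses quarterly earnings growth"},
    {"clip": "broll_031.mov", "t": "00:00:45", "desc": "drone shot of factory floor at dawn"},
    {"clip": "intv_022.mov", "t": "00:05:32", "desc": "engineer explains battery safety testing"},
]

def search(query, library=LIBRARY, top_k=2):
    """Return the clips whose indexed description best overlaps the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(item["desc"].split())), item) for item in library]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only clips with at least one matching term.
    return [item for score, item in scored[:top_k] if score > 0]

hits = search("battery safety")
```

A native editor panel would wrap a call like this, letting the editor preview `hits` directly in the timeline instead of round-tripping through an EDL.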
Amazon MGM Studios entered the space in early March with a closed beta for AI film and TV production tools. The focus is narrower — improving character consistency across shots and supporting pre-production — rather than orchestrating full workflows. Creative advisor Albert Cheng emphasized the goal is "to support creative teams, not to replace them." Amazon expects initial results by May.
What "Agentic" Means in Production
The word is doing specific work. For three years, AI creative tools operated as single-task generators: prompt in, asset out. Each output required a human producer to coordinate — selecting models, managing handoffs between applications, evaluating quality, deciding what needed another pass.
The March launches represent a different architecture. Luma Agents takes a creative brief, decomposes it into subtasks, selects the appropriate model for each step, generates outputs, reviews them against the brief, and refines. Adobe's Project Moonlight follows a similar pattern within Adobe's ecosystem, accepting conversational descriptions and executing across applications. The human stays in the loop for review and approval, but sequencing and tool selection happen inside the agent.
The difference shows most clearly in a practical scenario. A production team creating a commercial might currently use one tool for concept images, another for video generation, a third for voiceover, and an editor for assembly — four applications, four handoffs, each requiring a human decision about what to pass forward and in what format. An agentic system takes the brief, generates concepts across modalities, sequences the outputs, and presents a draft for review. The producer evaluates a near-complete artifact rather than assembling one from components.
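The loop these systems share can be sketched as a plan, generate, evaluate, refine cycle. This is a minimal illustration under stated assumptions, not any vendor's API: the modality list, the numeric quality score, and the refine step are all stand-ins for model dispatch, automated critique, and prompt revision.

```python
# Hypothetical brief-to-draft orchestration loop. All names and the quality
# heuristic are illustrative assumptions, not a real agent framework.

MODALITIES = ["image", "video", "audio"]

def decompose(brief):
    # Plan: split the brief into per-modality subtasks (fixed here for brevity).
    return [{"modality": m, "goal": brief} for m in MODALITIES]

def generate(task):
    # Generate: stand-in for dispatching to a specialized model per modality.
    return {"task": task, "asset": f"{task['modality']} draft", "quality": 0.6}

def evaluate(output, threshold=0.8):
    # Evaluate: compare the output against the brief (reduced to a score check).
    return output["quality"] >= threshold

def refine(output):
    # Refine: stand-in for revising the prompt and regenerating.
    output["quality"] += 0.15
    return output

def run_brief(brief, max_rounds=5):
    draft = []
    for task in decompose(brief):
        out = generate(task)
        rounds = 0
        while not evaluate(out) and rounds < max_rounds:
            out = refine(out)
            rounds += 1
        draft.append(out)
    return draft  # a near-complete artifact, presented for human review

draft = run_brief("30-second spot for a running shoe")
```

The structural point is the placement of the human: review happens on `draft` at the end, while sequencing and tool selection happen inside the loop.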
For AI video and content production workflows, this shifts the procurement decision from model quality to workflow architecture: which orchestration system should coordinate the growing roster of specialized AI models a production depends on.
AgentPMT's Workflow Builder operates in this territory. Where Luma and Adobe each orchestrate within their own ecosystems, AgentPMT provides cross-platform orchestration with budget controls, audit trails, and credential management — governance capabilities that creative-focused platforms have yet to ship.
The Copyright Vacuum
These agent systems are shipping into a legal environment that has not resolved who owns what they produce.
On March 2, the US Supreme Court declined to hear Thaler v. Perlmutter. The DC Circuit's ruling stands: copyright law "protects only works of human creation." The Copyright Office maintains that it "will refuse to register a claim if it determines that a human being did not create the work."
This does not block AI-assisted work from receiving copyright protection. Hundreds of works have been registered where human authors demonstrate meaningful creative contribution. But as agents take increasingly autonomous roles in production — planning, executing, iterating, and delivering with minimal human intervention — the distinction between "AI-assisted" and "AI-generated" becomes harder to draw. An agent that orchestrates an entire creative brief from concept through final cut pushes directly against the boundary the court reinforced.
In the UK, the government published its Copyright and Artificial Intelligence report on March 18 and reversed direction. The text-and-data-mining exception it had previously favored was scrapped after broad opposition. The government opted to maintain existing law and monitor litigation rather than legislate. The House of Lords' Communications and Digital Committee, in a March 6 report, was less patient, warning that UK creative industries face a "clear and present danger" from unlicensed AI training on copyrighted works and calling for mandatory disclosure of training data sources.
The licensing market is forming alongside the litigation. Warner Music Group settled its copyright lawsuit against AI music platform Suno in November 2025 and struck a partnership for licensed music generation. Warner CEO Robert Kyncl called the deal "a victory for the creative community." Irving Azoff, founder of the Music Artists Coalition, read it differently: "We've seen this before — everyone talks about 'partnership,' but artists end up on the sidelines with scraps." Under the deal, Suno will train higher-quality music models on licensed Warner recordings, though whether individual artists can opt out of inclusion in that training data remains unclear.
For studios deploying AI agents in AI music production, AI animation tools, or VFX AI tools, the operational takeaway is direct: every agent action that contributes to a creative work needs documented evidence of human involvement to support copyright claims. Audit infrastructure is now a production requirement.
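What such an audit trail might record can be sketched simply: each agent action logged alongside the human decision attached to it, with entries chain-hashed so the record is tamper-evident. The field names and hashing scheme below are assumptions for illustration, not a standard or any platform's format.

```python
# Illustrative audit-log record pairing agent actions with evidence of
# human involvement. Schema is a sketch, not a legal or vendor standard.
import hashlib
import json
from datetime import datetime, timezone

def log_action(log, actor, action, human_review=None):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                # "agent" or a named person
        "action": action,              # what was generated or changed
        "human_review": human_review,  # who decided what, and how
    }
    # Chain each entry to the previous one's hash so edits are detectable.
    prev = log[-1]["hash"] if log else ""
    payload = prev + json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(entry)
    return entry

trail = []
log_action(trail, "agent", "generated 4 concept frames from brief")
log_action(trail, "jane.doe", "selected frame 2; rewrote tagline",
           human_review={"decision": "approved-with-edits"})
```

Records like the second entry, where a named person selects and edits, are the kind of documented human contribution a copyright claim would lean on.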
Creative Professionals Are Drawing Lines
The GDC 2026 State of the Industry report, based on responses from several thousand game professionals, found that 52% believe generative AI has a negative impact on the industry — a sharp increase from the previous year. Visual and technical artists were the most negative, followed closely by game designers and narrative professionals.
The usage data sharpens the picture. Among game professionals who use generative AI, 81% apply it to research and brainstorming. Only 19% use it for asset generation. Creators are adopting the tools for peripheral work while drawing firm boundaries around what they consider core to their craft. As one senior developer put it: "Why would I replace human creativity with a regurgitated amalgamation of everything that's come before?"
Agentic systems intensify this friction. A tool that generates a single image leaves composition, context, and sequencing to the human. An agent that orchestrates an entire production absorbs those decisions. The distance between "useful assistant" and "replacement" compresses, and the sentiment data suggests creative professionals see that compression clearly.
AI game development faces a version of this tension magnified by production scale. Game development involves years-long cycles where art direction, narrative coherence, and technical integration depend on close collaboration across disciplines. An agent that accelerates one stage of that pipeline can create bottlenecks or quality mismatches downstream if the production process isn't designed to absorb the change.
Studios deploying agentic AI creative tools will need to treat workforce sentiment as a real adoption variable. The professionals most directly affected — visual artists, animators, narrative designers — are also the most resistant. Deployment strategies that ignore this dynamic risk generating internal friction that offsets whatever efficiency the technology delivers.
What Follows
The UK's Creative Content Exchange pilot, targeting a summer 2026 launch, is the most concrete near-term development in creative AI licensing. If it establishes a workable model for licensed training data to flow between rights holders and AI developers, it could shape norms internationally. If it stalls, expect more litigation to fill the gap.
The agentic tools that shipped in March are early releases. Luma Agents is weeks old. Project Moonlight remains in private beta. Amazon's production suite is in closed beta. Whether adoption accelerates will depend less on what these systems can generate and more on whether organizations build the governance, audit, and workforce infrastructure to deploy them responsibly — before regulators and courts make those decisions for them.