Generative AI on Set: Cybersecurity and IP Risks for Film, TV, and Music Production in 2026

The 2026 production environment looks nothing like 2022. Pre-viz teams are using Runway and Sora to mock up shots before greenlight. Editors are using ElevenLabs to clone an actor's voice for ADR they couldn't schedule. Marketing departments are using Midjourney to generate poster comps the day after lock. Music supervisors are using Suno to scratch-track temp scores before clearance. Everyone is moving faster. Almost no one is thinking about where the data goes when they hit "submit."

The risk surface that opened up between 2023 and 2026 is the largest in the history of the industry's content security work. Scripts uploaded to public AI tools become part of training data. Cast voices recorded for legitimate ADR end up in models accessible to other paying customers. Crew members ask ChatGPT to summarize a confidential treatment and inadvertently expose the entire premise of an unreleased project. A 2025 investigation reported that AI companies have used over 130,000 film and TV scripts to train generative AI models — many of them without permission, and a significant portion sourced from precisely these kinds of accidental exposures.

This guide covers what the actual risk vectors look like in 2026, what SAG-AFTRA's ratified AI clauses now require of producers, what the MPA's updated TPN Content Security Best Practices say about AI tooling, and how production companies can use AI without losing the film, the cast contract, or their TPN certification.


[Figure: AI Tool Risk Matrix for Film & TV Production. Data sensitivity (low to high) on the Y axis; vendor exposure / training risk (internal sandbox to public free tier) on the X axis. Safe: public reference images, open-source mood research. Caution: generic mood prompts on public AI tools. Do not use: confidential script + public ChatGPT, cast voice + free ElevenLabs, raw footage + open Sora prompt.]
The combination of high-sensitivity data and public AI tools is the dominant exposure pattern in 2024–2026 production breaches.

The Three Categories of Risk No One Is Tracking

The traditional content security model — physical access controls, watermarking, encrypted file transfer, vetted post houses — was built for a world where assets moved between known parties. Generative AI breaks that model by introducing a fourth party into every workflow: the model vendor.

1. Training data leakage

Most public-tier AI tools, by default, store user inputs and use them to improve their models. Legal analysis from Troutman Pepper Locke highlights that "free" or consumer-grade access to ChatGPT, Claude, Gemini, Midjourney, ElevenLabs, and similar tools generally includes terms that allow the vendor to retain user content for training, evaluation, and abuse detection. The implication for production: a script pasted into ChatGPT to "summarize for a pitch meeting" can become part of training data, and parts of it may surface in model outputs available to any other user.

This is not a hypothetical. Multiple cases through 2024–2025 documented model outputs that contained passages clearly derived from unreleased scripts, internal pitch documents, and pre-release marketing copy. Once a piece of confidential content has been ingested into a public model, there is no practical way to remove it.

2. Cast and crew likeness exposure

Voice cloning tools require minimal source material. SAG-AFTRA's AI bargaining timeline documents that the union's foundational position is "every person has an inalienable right to their name, voice, and likeness." When a production uploads cast audio to a third-party AI tool — even for a legitimate purpose like ADR or dub matching — that tool may retain the audio, may use it to train voice models, and in some cases makes those voice profiles available for other paying customers to use.

The 2025 ratified SAG-AFTRA Commercials and Interactive Media agreements treat unauthorized digital replica use as a contract violation that triggers grievance proceedings. A producer who uploads a lead actor's voice to a public-tier ElevenLabs account to clone an ADR line is technically in breach the moment the audio is uploaded, regardless of whether anyone outside the production ever hears the synthesized output.

3. Shadow AI on the production

The risk most invisible to production IT and the least covered in TPN audits is shadow AI: department members using AI tools the production hasn't sanctioned, with production-confidential content. A script supervisor pastes a scene into Claude to ask about continuity. An assistant editor uses Sora to generate a quick reference shot. A music supervisor drops a temp track into Suno to generate a stylistic variation. None of this hits the production's security review. None of it shows up in vendor audits. All of it routes confidential assets through third-party AI infrastructure that doesn't have a content security agreement with the studio.

The 2025 AI Film Festival, run by Runway, saw approximately 6,000 submissions, up from 300 in 2023. The same growth is happening inside professional productions — just without the festival visibility. Crews are using these tools because they work and because nobody on the production has told them not to.


What SAG-AFTRA's 2025 AI Clauses Actually Require

The post-strike SAG-AFTRA AI framework is now the operating standard for any union production that uses AI in any capacity. The core requirements, summarized from the union's AI bargaining timeline and the Authors Guild summary of ratified AI safeguards:

SAG-AFTRA AI Compliance Gates for Productions

  • Informed prior written consent for any digital replica use of voice, likeness, or performance
  • Reasonably specific description of how the replica will be used — generic consent language is no longer sufficient
  • 48-hour minimum notice before any body or face scanning session
  • Compensation on-scale for the time the digital replica is performing, equivalent to in-person performance
  • Suspension rights allowing performers to revoke consent for newly generated material during a strike
  • Disclosure obligations when generated material is used in a final product
  • Contractual delineation between Employment-Based Digital Replicas (created during contracted work) and Independently Created Digital Replicas (created outside the employment context)

The May 2025 ratified Commercials Contracts and the July 2025 ratified Interactive Media Agreement both include these requirements, with stronger language than the original 2023 strike resolution. Industry counsel reporting on deepfake contracts describes the new norm as "consent has to be specific, contemporaneous, and informed — and the production carries the burden of proof if it's challenged."

The practical implication is that AI use on a production is no longer a creative or technical decision — it's a contracts decision. Any AI workflow that touches cast voice or likeness needs to be reviewed against the applicable AI clause before deployment, and the consent paperwork has to be specific to the actual planned use.


What the MPA TPN Now Requires for AI Tooling

The Motion Picture Association's Trusted Partner Network released version 5.3.1 of the Content Security Best Practices in August 2025, with a new four-tier Shield System launching September 9, 2025. For the first time, the standard includes an explicit Organizational Artificial Intelligence & AI/ML Security section.

The TPN's AI requirements are laid out in the official update notice. The TPN also released free policy templates covering AI security, which means there's no excuse for vendors to operate without documented procedures. A post house, VFX vendor, or sound designer that hasn't updated to the v5.3.1 AI requirements will fail audits and lose tier-one client work in 2026.


The Three Incidents That Changed the Conversation

The current security posture in production isn't the result of an industry-wide planning exercise. It's the result of specific high-profile incidents that forced studios and unions to act.

The voice-cloning controversies of 2025. Best Picture nominees Emilia Pérez and The Brutalist generated significant backlash for their use of Respeecher voice technology to enhance non-native language performances. The use was contractually permitted and limited in scope, but it triggered a broader industry conversation about disclosure obligations and the boundary between creative tool and replacement of performance. By 2026, films are increasingly carrying explicit AI usage statements in marketing and credits, both to qualify for awards and to head off the backlash that hit those 2025 films.

The 130,000-script training corpus revelation. The 2025 reporting on AI training data drawn from film and TV scripts crystallized for many studios that the threat wasn't theoretical. Real scripts, including unreleased ones, were in real models. The downstream legal questions — whether the use qualifies as fair use, whether the studios can recover damages, whether anyone is liable for individual leaks — are still being litigated, but the operational lesson was immediate: stop sending scripts to public AI tools.

The German collective bargaining agreement. In October 2024, German producers' associations and actors' unions signed a detailed AI annex to the collective bargaining agreement covering film and TV. This was the first multi-stakeholder framework outside the SAG-AFTRA model and signaled to U.S. producers operating internationally that the regulatory landscape would not stay U.S.-centric. International productions now have to navigate multiple union and national frameworks simultaneously.


The Control Set for Production Companies Using AI Safely

The good news: AI is genuinely useful in production, and there's a workable path to using it without violating SAG-AFTRA, breaching TPN, or accidentally training a public model on your unreleased IP. Here's the control set we recommend for production companies and the post houses that serve them.

Approved tooling list, with enterprise-tier only

The single most important control is a published list of approved AI tools, with explicit configuration requirements. Free or consumer-tier accounts are not approved. Enterprise tiers with contractual commitments not to train on customer content (OpenAI Enterprise, Anthropic Claude Enterprise, Microsoft Copilot for Microsoft 365, ElevenLabs Enterprise, Adobe Firefly Enterprise) are the floor. Internally hosted, sandboxed models — running on the production's infrastructure with no external API calls — are the ceiling.

Any tool not on the approved list is implicitly forbidden for production-confidential work. Use of a non-approved tool on confidential content is a security incident, not a productivity choice.
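
To make the allowlist operational rather than aspirational, it helps to encode it as data and check it at the point of use. Here is a minimal sketch in Python, assuming an in-house policy module; the tool identifiers and tier labels are illustrative, not any vendor's actual API:

```python
from enum import Enum

class Tier(Enum):
    CONSUMER = "consumer"                  # free/consumer tier: never approved
    ENTERPRISE = "enterprise"              # contractual no-training commitment
    INTERNAL_SANDBOX = "internal_sandbox"  # self-hosted, no external API calls

# Restrictiveness ranking: higher = more locked down.
RANK = {Tier.CONSUMER: 0, Tier.ENTERPRISE: 1, Tier.INTERNAL_SANDBOX: 2}

# The published allowlist: tool -> minimum account tier required.
APPROVED_TOOLS = {
    "openai-chatgpt": Tier.ENTERPRISE,
    "anthropic-claude": Tier.ENTERPRISE,
    "elevenlabs": Tier.ENTERPRISE,
    "adobe-firefly": Tier.ENTERPRISE,
    "internal-llm": Tier.INTERNAL_SANDBOX,
}

def is_approved(tool: str, account_tier: Tier) -> bool:
    """Anything not on the list is implicitly forbidden for
    production-confidential work; consumer tiers never pass."""
    required = APPROVED_TOOLS.get(tool)
    if required is None:
        return False
    return RANK[account_tier] >= RANK[required]
```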

Content classification and routing

Not all content is equally sensitive. Public-domain reference images for mood research are fine on consumer tools. A confidential script is not. The control is a clear classification system and a routing policy: confidential content routes only to enterprise or internal AI tools; non-confidential reference work can use whatever the user prefers. Most production breaches we see come from skipped classification — someone defaulted to the convenient public tool because the content "didn't seem that sensitive."
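
A sketch of what the routing rule looks like in code, building on the `Tier` and `RANK` definitions from the allowlist sketch above; the classification labels are assumptions mirroring the tiers described here:

```python
from enum import Enum
# Tier and RANK as defined in the allowlist sketch above.

class Classification(Enum):
    PUBLIC = "public"                # public-domain reference, mood research
    INTERNAL = "internal"            # non-confidential working material
    CONFIDENTIAL = "confidential"    # scripts, cuts, cast media, deal terms

# Routing policy: minimum tool tier allowed for each classification.
ROUTING = {
    Classification.PUBLIC: Tier.CONSUMER,          # any tool is acceptable
    Classification.INTERNAL: Tier.ENTERPRISE,
    Classification.CONFIDENTIAL: Tier.ENTERPRISE,  # enterprise or internal only
}

def may_route(content: Classification, account_tier: Tier) -> bool:
    """Block the request before the content leaves the production
    if the tool tier is weaker than the classification demands."""
    return RANK[account_tier] >= RANK[ROUTING[content]]
```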

Cast and likeness controls tied to contracts

Any AI tool that processes cast audio, video, or likeness has to be checked against the cast member's contract before use. This is a workflow control: before audio goes to ElevenLabs Enterprise (or any cloning tool), production legal has confirmed the cast member's contract permits that specific use, and the consent paperwork is on file. The same control applies for face replacement, body scanning, and any synthetic performance generation.

The friction this introduces is the point. The cost of a SAG-AFTRA grievance proceeding is significantly higher than the cost of slowing down to verify consent.
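
One way to implement the hard stop, assuming a consent-records store maintained by production legal; the record fields and the scope check are illustrative simplifications:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    performer_id: str
    tool: str            # the specific vendor the consent names
    permitted_use: str   # e.g. "ADR line replacement, eps 103-105"
    expires: date

def gate_cast_media(performer_id: str, tool: str, planned_use: str,
                    records: list[ConsentRecord]) -> None:
    """Hard stop before any cast audio or likeness leaves the production:
    consent must name this performer, this tool, and this specific use."""
    for r in records:
        if (r.performer_id == performer_id
                and r.tool == tool
                and planned_use in r.permitted_use  # naive scope check; legal owns edge cases
                and r.expires >= date.today()):
            return  # specific consent on file, upload may proceed
    raise PermissionError(
        f"No specific consent on file for {performer_id}: '{planned_use}' via {tool}")
```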

Logging and audit

Every use of AI on production-confidential content needs to be logged: which tool, which user, which content, what output was generated. Most enterprise AI tools provide this logging natively. The discipline is requiring that the logs be reviewed and retained, not just generated. When a leak surfaces or a SAG-AFTRA challenge is filed, the production needs to be able to reconstruct the AI usage history.
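
A minimal sketch of the log record itself, written as append-only JSON lines; the field names are assumptions, and hashing the content means the log can be reviewed without re-exposing the asset:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(log_path: str, tool: str, user: str,
               content_id: str, content: bytes, output_ref: str) -> None:
    """Append one record per AI interaction: which tool, which user,
    which content (by hash, so the log itself discloses nothing),
    and a pointer to the generated output."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "content_id": content_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "output_ref": output_ref,  # pointer to stored output, not the output itself
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```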

Staff training on the real rules

Most crew members don't understand the security implications of the tools they're using because nobody has told them. Training has to cover: which tools are approved, which content can go through which tool, the cast contract implications, the TPN implications, and the personal accountability that follows a leak. This is straightforward training that most productions skip because "it's just IT stuff." It isn't. The most expensive AI security incidents in 2025 were caused by sophisticated professionals making routine, uninformed choices.


How to Build This Into an Actual Production Security Program

For a production company or post house running 2–10 active projects at a time, the realistic implementation sequence:

Week 1: Inventory the AI in use. Survey department heads and crew. What tools are people actually using? Which accounts are personal versus production? What content has gone through which tool? You will be surprised. Most productions discover at least three or four shadow AI tools they didn't know were in use.
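
The survey can be cross-checked against network logs. Here is a minimal sketch that counts hits to known AI vendor domains in an exported proxy or DNS log; the domain watchlist and the log format (one requested hostname per line) are assumptions:

```python
from collections import Counter

# Illustrative watchlist; extend with the vendors your crews actually use.
AI_DOMAINS = {
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "elevenlabs.io", "midjourney.com", "runwayml.com", "suno.com",
}

def shadow_ai_hits(hostnames) -> Counter:
    """Count requests to known AI tool domains in a proxy/DNS log export
    (assumed format: one requested hostname per line)."""
    hits = Counter()
    for line in hostnames:
        host = line.strip().lower()
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            hits[host] += 1
    return hits

# Usage:
#   with open("proxy_hosts.log") as f:
#       print(shadow_ai_hits(f).most_common(10))
```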

Weeks 2–3: Define the approved tooling list and content classification. Work with the content security lead, post supervisor, and legal to land on a tool list that covers the legitimate creative needs and a classification scheme that matches your existing content security tiers. Communicate the policy in writing.

Weeks 3–6: Migrate active projects to approved tools. Provision enterprise accounts for the staff who need them. Decommission shadow accounts. For projects mid-flight, log the historical usage so you have an audit trail for any later challenges.

Weeks 6–8: Train crew on the new policy. Run sessions for each department: how AI use intersects with cast contracts, with TPN, with the production's content security obligations. Make the rules concrete: "do this, don't do that, here's what to do if you're not sure."

Ongoing: Monitor, audit, and respond. The same way the production runs daily security checks on physical assets, AI usage needs ongoing monitoring. Most enterprise tools provide admin dashboards that show usage by user, by project, by content type. Review them. When something looks off, investigate.
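
The review step can start as a simple filter over the usage log from the logging sketch earlier, flagging any record where content went through a tool that isn't on the approved list; the schema fields are the same illustrative ones:

```python
import json

def off_policy_records(log_path: str, approved_tools: set[str]):
    """Yield usage-log records (JSON lines, schema from the logging sketch
    above) whose tool isn't on the approved list. Each hit is a security
    incident to investigate, not a statistic to file."""
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["tool"] not in approved_tools:
                yield record

# Usage:
#   for r in off_policy_records("ai_usage.jsonl", set(APPROVED_TOOLS)):
#       print(r["ts"], r["user"], r["tool"])
```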

This program is the kind of work that production IT and content security leads typically can't do entirely in-house, especially on smaller productions. Specialist managed security partners who understand the entertainment industry can build and operate the policy, monitoring, and incident response components, while the production retains creative and contractual authority. IT and security services for entertainment companies from a partner that already speaks TPN and SAG-AFTRA fluently are qualitatively different from those of a generic IT vendor that has to learn the industry on your dime.


What Comes Next for AI in Production

The trajectory through 2026 and beyond is set. AI tools will get more powerful and more integrated into every department. Studios will tighten consent and disclosure requirements as more contracts are negotiated under the post-strike framework. Insurance carriers — already tightening cyber underwriting — will start asking specific questions about AI usage policies as a coverage condition. The TPN tier system will continue to evolve, with AI controls becoming a larger share of the audit weight.

The productions that come through this transition cleanly will be the ones that stopped treating AI as an experimental tool and started treating it as part of the content security perimeter — same access controls, same logging, same accountability as any other vendor in the workflow. The productions that don't will keep ending up in trade-press incident stories, insurance claim disputes, or union grievance proceedings.

If your production company, post house, or studio operations team is trying to figure out where you stand against the 2026 SAG-AFTRA, MPA TPN, and cyber insurance baselines, schedule a free consultation. We'll review your current AI usage, your content security posture, and your contractual exposure, and walk through the prioritized remediation list specific to your operation. No commitment. No sales pitch dressed as a discovery call.