AI Governance — ELVIS Act
Tennessee’s ELVIS Act takes effect 1 July 2024, requiring music, media, and AI platforms to demonstrate right-of-publicity controls, consent workflows, and incident reporting that prevent unauthorized voice or likeness cloning.
Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act becomes enforceable on 1 July 2024, modernizing the state’s right-of-publicity law to cover generative AI and synthetic media. The statute grants civil remedies and criminal penalties for using an individual’s name, image, or voice without consent, including AI-generated imitations. Music labels, streaming services, content platforms, advertisers, and AI developers operating in or targeting Tennessee must be able to show governance, evidence, and reporting mechanisms that prevent unauthorized cloning. This analysis sets out the controls needed for compliance.
Scope and applicability
The ELVIS Act applies to any entity that makes, distributes, transmits, or commercially exploits an individual’s voice or likeness in Tennessee. It covers training, producing, or hosting generative AI models that replicate or simulate a person, as well as marketplaces that sell AI-generated voice packs. The law protects living and deceased individuals, with executors and other rights holders able to enforce claims on behalf of estates. Governance programs must map Tennessee customer touchpoints, contract relationships with creators, and data processing activities to identify where the Act applies.
Governance and accountability framework
Designate an executive owner—often the chief legal officer or chief trust officer—with a direct reporting line to the board. Set up a cross-functional governance forum that includes legal, privacy, cybersecurity, product, talent relations, and compliance. Update committee charters to include synthetic media risk oversight, consent management, and incident escalation. Document decision rights for approving new AI models, licensing deals, and creator partnerships, and ensure the board receives quarterly ELVIS Act compliance dashboards.
Consent management and documentation
The Act requires express consent before capturing, synthesizing, or distributing someone’s voice or likeness. Implement a consent repository that stores agreements, licensing terms, scope of usage, revocation rights, and expiration dates. Embed consent verification into content ingestion workflows so no file proceeds to production without validated rights. For legacy catalogs, conduct a remediation program to collect updated consents or document risk-based exceptions approved by legal. Maintain audit trails demonstrating the consent check at each stage of production, distribution, and monetization.
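A minimal sketch of what that ingestion gate could look like, assuming a simple in-memory repository; the ConsentRecord fields, ConsentRepository class, and ingest function are illustrative names, not part of the statute or any specific product:

```python
# Hypothetical consent-verification gate at content ingestion (illustrative only).
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    subject_id: str          # person whose voice/likeness is used
    scope: set[str]          # permitted uses, e.g. {"streaming", "advertising"}
    expires: date | None     # None = no expiration date
    revoked: bool = False    # revocation rights per the agreement

class ConsentRepository:
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def add(self, record: ConsentRecord) -> None:
        self._records[record.subject_id] = record

    def lookup(self, subject_id: str) -> ConsentRecord | None:
        return self._records.get(subject_id)

def ingest(subject_id: str, intended_use: str, repo: ConsentRepository) -> bool:
    """Return True only if a valid, unexpired, unrevoked consent covers the use."""
    record = repo.lookup(subject_id)
    if record is None or record.revoked:
        return False
    if record.expires is not None and record.expires < date.today():
        return False
    return intended_use in record.scope

repo = ConsentRepository()
repo.add(ConsentRecord("artist-001", {"streaming"}, date(2026, 1, 1)))
assert ingest("artist-001", "streaming", repo)        # permitted use proceeds
assert not ingest("artist-001", "advertising", repo)  # outside scope -> blocked
```

Each gate decision would also be written to the audit trail described above, so the consent check can be evidenced at every stage.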
Model governance and training data controls
Generative AI systems must be trained on data with appropriate rights. Maintain detailed dataset inventories showing source material, licensing status, and usage restrictions. Require model cards that document training data provenance, risk assessments, and mitigations. Apply access controls and monitoring to training datasets, ensuring developers cannot introduce unauthorized samples. Conduct periodic audits comparing model outputs to known artists to identify potential mimicry risk.
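One way to encode those licensing checks is an inventory record with an explicit license status, as in this hedged sketch; the LicenseStatus categories, DatasetEntry fields, and the no_voice_cloning restriction tag are assumptions for illustration:

```python
# Illustrative training-data inventory check; not a prescribed schema.
from dataclasses import dataclass
from enum import Enum

class LicenseStatus(Enum):
    LICENSED = "licensed"
    PUBLIC_DOMAIN = "public_domain"
    PENDING = "pending"
    UNKNOWN = "unknown"

@dataclass(frozen=True)
class DatasetEntry:
    sample_id: str
    source: str                        # where the sample came from
    status: LicenseStatus
    restrictions: tuple[str, ...] = () # e.g. ("no_voice_cloning",)

def approve_for_training(entries: list[DatasetEntry]) -> list[DatasetEntry]:
    """Admit only samples with settled rights and no cloning restriction."""
    allowed = {LicenseStatus.LICENSED, LicenseStatus.PUBLIC_DOMAIN}
    return [
        e for e in entries
        if e.status in allowed and "no_voice_cloning" not in e.restrictions
    ]

inventory = [
    DatasetEntry("s1", "label-catalog", LicenseStatus.LICENSED),
    DatasetEntry("s2", "web-scrape", LicenseStatus.UNKNOWN),
]
print([e.sample_id for e in approve_for_training(inventory)])  # ['s1']
```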
Content review and release controls
Implement pre-release review workflows that flag content for ELVIS Act risk. Use content fingerprinting, voice similarity detection, and metadata scans to identify potential impersonations. Require legal sign-off for high-risk releases, including advertising campaigns, AI voice marketplaces, and user-generated content promotions. For platforms hosting user uploads, update community guidelines, moderation policies, and automated filters to detect and block unauthorized voice or likeness use.
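The routing logic for such a gate can be small. In this sketch the similarity score is assumed to come from an external voice-similarity detector, and the 0.85 threshold is an illustrative tuning choice, not a legal standard:

```python
# Hypothetical pre-release review router (illustrative thresholds and names).
def review_release(similarity_score: float,
                   has_consent: bool,
                   is_advertising: bool,
                   threshold: float = 0.85) -> str:
    """Route a release to auto-approve, legal review, or block."""
    if similarity_score >= threshold and not has_consent:
        return "block"          # likely impersonation with no documented rights
    if is_advertising or similarity_score >= threshold:
        return "legal_review"   # high-risk category: require legal sign-off
    return "approve"

print(review_release(0.91, has_consent=False, is_advertising=False))  # block
print(review_release(0.91, has_consent=True,  is_advertising=True))   # legal_review
print(review_release(0.12, has_consent=True,  is_advertising=False))  # approve
```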
Third-party and platform governance
Assess third-party vendors providing voice synthesis, dubbing, translation, or marketing services. Update contracts to include ELVIS Act compliance clauses, rights warranties, indemnities, and audit rights. Require vendors to supply evidence of consent management and AI governance practices. Set up a vendor assurance program with annual questionnaires, control testing, and risk scoring. Maintain records of vendor oversight committees and remediation actions.
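Risk scoring for that assurance program might reduce to a weighted roll-up of questionnaire answers; the factor names and weights below are assumptions chosen for illustration, not a standardized methodology:

```python
# Illustrative vendor risk scoring from questionnaire answers.
WEIGHTS = {
    "consent_management": 0.35,  # evidence of consent workflows
    "ai_governance": 0.25,       # documented model controls
    "audit_rights": 0.20,        # contractual audit access
    "incident_history": 0.20,    # track record on past misuse
}

def vendor_risk_score(answers: dict[str, float]) -> float:
    """Combine 0-1 factor scores (1 = strongest control) into a 0-100 risk score."""
    control_strength = sum(WEIGHTS[k] * answers.get(k, 0.0) for k in WEIGHTS)
    return round((1.0 - control_strength) * 100, 1)  # higher = riskier

print(vendor_risk_score({"consent_management": 1.0, "ai_governance": 0.8,
                         "audit_rights": 1.0, "incident_history": 0.5}))  # 15.0
```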
Incident management and reporting
Develop an incident response playbook tailored to synthetic media misuse. Define severity tiers, notification timelines, and responsibilities for engaging affected individuals, their representatives, and law enforcement when appropriate. Document procedures for takedown requests, litigation holds, and cooperation with Tennessee authorities. Maintain an incident log capturing discovery method, containment steps, corrective actions, and lessons learned. Report significant incidents to the board and include them in quarterly dashboards.
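An incident log entry capturing the fields named above could be modeled as follows; the severity tiers and field names are illustrative assumptions, not prescribed by the Act:

```python
# Hypothetical incident log entry with severity-based board escalation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1     # isolated upload, quickly contained
    MEDIUM = 2  # distributed content, takedown required
    HIGH = 3    # monetized impersonation or law-enforcement referral

@dataclass
class Incident:
    incident_id: str
    severity: Severity
    discovery_method: str  # e.g. "detection_alert", "artist_report"
    containment_steps: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    closed_at: datetime | None = None

    def notify_board(self) -> bool:
        """HIGH-severity incidents go into the quarterly board dashboard."""
        return self.severity is Severity.HIGH

inc = Incident("inc-42", Severity.HIGH, "artist_report")
print(inc.notify_board())  # True
```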
Evidence pack and audit readiness
Create an evidence room organized by control domain: governance, consent, AI model management, content review, vendor oversight, and incident response. Store policies, standard operating procedures, committee minutes, risk assessments, and audit reports. Include system screenshots showing consent verification checkpoints, content moderation dashboards, and monitoring alerts. Track key performance indicators such as percentage of catalog with refreshed consents, number of flagged content items, and time-to-remediate incidents.
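The KPIs listed above are simple ratios and averages; this sketch shows one hedged way to compute two of them, assuming the counts and incident durations come from the evidence-room systems:

```python
# Illustrative KPI roll-up for the evidence room (data shapes are assumptions).
from datetime import timedelta

def consent_refresh_rate(refreshed: int, total_catalog: int) -> float:
    """Percentage of catalog items with refreshed consents."""
    return round(100 * refreshed / total_catalog, 1) if total_catalog else 0.0

def mean_time_to_remediate(durations: list[timedelta]) -> timedelta:
    """Average time from incident discovery to closure."""
    return sum(durations, timedelta()) / len(durations) if durations else timedelta()

print(consent_refresh_rate(8_420, 10_000))                                # 84.2
print(mean_time_to_remediate([timedelta(hours=6), timedelta(hours=18)]))  # 12:00:00
```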
Reporting workflows and board updates
Implement a monthly operational report to executives covering consent expirations, outstanding vendor attestations, AI model changes, and incident statistics. Provide the board or risk committee with quarterly summaries featuring compliance status, high-risk content decisions, and regulatory developments in other states considering similar laws. Align reporting cycles with SEC disclosure controls if synthetic media risks are material to investors.
Artist and stakeholder engagement
Build transparent communication channels with artists, voice actors, and estates. Maintain logs of outreach efforts, consent renewal campaigns, and grievance handling. Document training sessions explaining how the company protects likeness rights and uses AI responsibly. Capture stakeholder feedback and show how it influences policy updates or product design decisions. Publicly available transparency reports can strengthen trust and demonstrate good-faith compliance.
Technology tooling and monitoring
Deploy technical safeguards such as watermarking, synthetic media detection, and voiceprint authentication. Integrate tools into production pipelines and user platforms to block uploads lacking consent metadata. Maintain logs of detection alerts, analyst reviews, and enforcement outcomes. For AI marketplaces, implement identity verification for both creators and customers, and require traceable transaction records.
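An upload gate of that kind reduces to two checks: required consent metadata and a verifiable watermark. In this sketch the metadata keys and the verify_watermark helper are hypothetical placeholders for a real detector:

```python
# Hypothetical upload gate that blocks files lacking consent metadata.
def verify_watermark(payload: bytes) -> bool:
    """Placeholder for a real watermark / synthetic-media detector."""
    return payload.startswith(b"WMK1")  # stand-in check for illustration

def admit_upload(metadata: dict[str, str], payload: bytes) -> bool:
    """Admit an upload only with consent metadata and a verifiable watermark."""
    required = ("consent_id", "subject_id", "usage_scope")
    if not all(k in metadata for k in required):
        return False  # missing consent metadata -> block before hosting
    return verify_watermark(payload)

print(admit_upload({"consent_id": "c-77", "subject_id": "artist-001",
                    "usage_scope": "streaming"}, b"WMK1..."))  # True
print(admit_upload({}, b"raw-audio"))                          # False
```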
Training and culture
Deliver targeted training to product managers, engineers, marketing teams, and content moderators on ELVIS Act obligations, emphasizing real-world scenarios. Track completion metrics, comprehension assessments, and follow-up coaching. Reinforce expectations through code of conduct updates, performance objectives, and leadership messaging about respecting artists’ rights.
Preparing for multi-jurisdictional expansion
Tennessee’s regime is influencing other U.S. states exploring similar protections. Maintain a regulatory watchlist and scenario plan showing how governance controls scale to California, New York, or federal proposals. Document board briefings on potential expansion, resource impacts, and alignment with international right-of-publicity rules. Ensure evidence packs can be adapted quickly to new jurisdictions.
Immediate next steps
Before 1 July 2024, finalize policy updates, refresh consent inventories, test detection technology, and rehearse the incident response playbook. Schedule a board review in June to confirm readiness, approve residual risk acceptances, and allocate budget for ongoing monitoring. Post go-live, conduct quarterly control testing and independent assurance to validate effectiveness, documenting outcomes and remediation in the evidence room.
Further reading
- State of Tennessee: Governor Lee signs ELVIS Act protecting artists from AI deepfakes (March 21, 2024) — tn.gov
- Public Chapter 685: Ensuring Likeness, Voice, and Image Security Act — tn.gov
- ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization