AI Governance Briefing — March 21, 2024
Tennessee’s ELVIS Act modernizes the state’s right of publicity to cover AI voice and likeness cloning, creating new compliance duties for music, media, and platform workflows before the July 2024 effective date.
Executive briefing: Tennessee enacted the Ensuring Likeness, Voice, and Image Security (ELVIS) Act (SB 2096/HB 2095) on March 21, 2024. The law updates Tennessee’s right of publicity statute to prohibit unauthorized AI-generated replicas of a person’s voice or likeness and introduces statutory damages of up to $150,000 per violation starting July 1, 2024. Labels, streaming platforms, and generative audio vendors must now harden consent management and provenance controls across recording and distribution workflows.
Control checkpoints
- Document consent workflows. The ELVIS Act requires written consent before using an individual’s voice or likeness in an AI-generated performance. Update talent agreements, session paperwork, and digital release forms to capture explicit authorization.
- Tag training data provenance. Section 47-25-1105 mandates that anyone training AI on a Tennessean’s voice or likeness maintain records showing consent. Extend metadata schemas to store consent IDs alongside audio stems and model checkpoints.
- Bolster distribution reviews. Platforms that knowingly host infringing replicas can be liable. Enhance content moderation pipelines with voice fingerprinting, watermark verification, and takedown SLAs aligned to the statute’s knowledge standard.
- Coordinate multistate rights. Align Tennessee controls with existing California and New York publicity laws so artists and rights holders experience a consistent clearance process.
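The written-consent checkpoint above can be enforced programmatically before any AI voice rendering job runs. A minimal sketch follows; the `ConsentRecord` fields and `consent_is_valid` helper are illustrative assumptions, not statutory language or a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical consent record captured from talent agreements or release forms."""
    rights_holder: str           # person whose voice or likeness is used
    use_description: str         # e.g. "AI-generated vocal performance"
    written_authorization: bool  # explicit written consent on file
    expires: Optional[date] = None

def consent_is_valid(record: ConsentRecord, on: date) -> bool:
    """Gate an AI rendering job: explicit written consent must exist and be unexpired."""
    if not record.written_authorization:
        return False
    if record.expires is not None and on > record.expires:
        return False
    return True
```

Wiring a check like this into session tooling makes the paperwork requirement a hard gate rather than a manual review step.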
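The provenance checkpoint — storing consent IDs alongside audio stems and model checkpoints — can be sketched as a small metadata-tagging step. The function and field names here are assumptions for illustration, not an industry schema.

```python
def tag_asset(metadata: dict, consent_id: str, jurisdiction: str = "TN") -> dict:
    """Return a copy of asset metadata with consent provenance attached.

    Applied uniformly to audio stems and to model checkpoints trained on them,
    so records can later show which consent covers which asset.
    """
    tagged = dict(metadata)  # do not mutate the caller's metadata
    tagged["consent_id"] = consent_id
    tagged["publicity_rights_jurisdiction"] = jurisdiction
    return tagged

# Same consent ID flows from the recorded stem to the checkpoint trained on it.
stem = tag_asset({"asset": "lead_vocal.wav", "session": "2024-05-01"}, "CNS-0042")
checkpoint = tag_asset(
    {"asset": "voice_model.ckpt", "trained_on": ["lead_vocal.wav"]}, "CNS-0042"
)
```

Extending the existing metadata schema this way keeps the consent trail queryable at both training time and release time.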
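The distribution-review checkpoint turns on the statute's knowledge standard, so a takedown clock should start when the platform gains actual knowledge of an unauthorized replica. A minimal SLA tracker is sketched below; the 48-hour window is a policy assumption, not a statutory number.

```python
from datetime import datetime, timedelta

# Assumed internal SLA for acting on known unauthorized replicas.
TAKEDOWN_SLA = timedelta(hours=48)

def takedown_deadline(knowledge_at: datetime) -> datetime:
    """Deadline for removal, measured from the moment of actual knowledge."""
    return knowledge_at + TAKEDOWN_SLA

def sla_breached(knowledge_at: datetime, now: datetime) -> bool:
    """True once the takedown window has elapsed without resolution."""
    return now > takedown_deadline(knowledge_at)
```

Feeding fingerprint matches and watermark-verification hits into a tracker like this gives moderation teams an auditable record of when knowledge arose and how quickly the platform responded.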
Action plan
- Launch a joint legal–engineering tiger team to update sample clearance tools, contract templates, and CMS workflows before the July 1 effective date.
- Embed authenticity attestations and watermark detection in delivery pipelines to major streaming platforms to demonstrate due diligence if challenged.
- Educate artists and managers about the new private right of action and statutory damages so they can escalate suspected misuse quickly.
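The second action item — embedding authenticity attestations in delivery pipelines — can be sketched as a small record built at hand-off time. The attestation format below is an assumption for illustration, not a streaming-platform requirement.

```python
import hashlib

def build_attestation(master_bytes: bytes, consent_id: str,
                      watermark_detected: bool) -> dict:
    """Build a delivery-time attestation tying the master to its consent record.

    The content hash binds the attestation to the exact audio delivered, so
    due diligence can be demonstrated later if the release is challenged.
    """
    return {
        "sha256": hashlib.sha256(master_bytes).hexdigest(),
        "consent_id": consent_id,
        "watermark_detected": watermark_detected,
    }

att = build_attestation(b"fake-master-audio", "CNS-0042", True)
```

Attaching a record like this to every delivery creates the evidence trail the action plan calls for without changing the audio itself.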