INTRODUCTION
Generative AI today can resurrect; it can synthesise a voice, reconstruct a face, and reproduce the mannerisms of people who are no longer alive. These “post-mortem personae” are not only emotionally fraught; they are commercially valuable, easily distributable and legally unsettled. The question is urgent and concrete: who controls a deceased person’s voice, image and stylistic persona in an era when a single dataset plus an inexpensive model can create convincing, monetisable look-and-sound-alikes? This essay maps the legal fault lines, examines recent national and transnational developments, and offers a practical, scholarship-grade regulatory blueprint that balances dignity, creativity and enforceability. The argument: law should move from case-by-case emergency relief to a harmonised statute-plus-platform approach that:
- Recognises limited post-mortem personality rights.
- Mandates provenance and labelling.
- Creates interoperable cross-border remedies.
HISTORICAL AND DOCTRINAL CONTEXT
Personality protection in common law has historically been a bricolage of defamation, privacy, passing off, trademark and the American “right of publicity”. The U.S. Supreme Court’s decision in Zacchini v Scripps-Howard Broadcasting Co[1] remains a lodestar for the performer’s economic interest in controlling exploitation of their act, while Lugosi v Universal Pictures[2] illustrates the doctrinal limits, at least historically under California law, on whether such rights survive death. These precedents expose the tension the AI problem now exacerbates: law designed to control broadcast exploitation does not map neatly onto automated, global, on-demand synthesis that can be copied infinitely and transmitted across jurisdictions.
In many jurisdictions, the right of publicity (or personality right) is statutory and varies in scope and duration; in others, it is judge-made and piecemeal. That fragmentation creates predictable uncertainty for creators, estates, platforms and would-be users of synthetic content, which in turn fuels both over-deterrence (stifling legitimate parody and scholarship) and under-enforcement (allowing commercial misappropriation).
RECENT DEVELOPMENTS
United States (federal and state action): 2024-2025 saw accelerated legislative responses. At the federal level, several proposals, notably the NO FAKES Act (S.4875), seek to create express protections for voice and visual likenesses against unauthorised AI replication; related federal measures target non-consensual intimate imagery and compel platform takedowns. Simultaneously, states like California enacted targeted statutes in 2024 (AB 1836[3]; AB 2602[4]) that:
- Bar certain uses of “digital replicas” of deceased performers without estate consent.
- Limit unconscionable contract terms that strip digital-replica rights from living performers.

Those laws demonstrate a practical model: statutory definitions, consent thresholds, and contractual protections tailored to the entertainment ecosystem.
India (an emergent judicial practice): Indian courts have already begun to grapple with voice cloning. The Bombay High Court granted ad-interim relief to prominent singer Arijit Singh in 2024[5], restraining various AI platforms and sellers from using his voice and other personality attributes without consent; subsequent petitions (and interim relief requests by other artists) indicate an evolving, judicially driven protection of personality attributes in the Indian context[6]. These orders are important because they show courts can adapt existing doctrines (passing off, breach of statutory or equitable duties, moral rights) to stop misuse, but they also highlight the piecemeal and slow nature of litigation compared with the lightning speed of automated cloning.
Representative disputes and market reaction: Estates have not waited. The George Carlin estate’s 2024 litigation against the creators of an AI-generated “special”[7] dramatised the collision of copyright, publicity and emergent synthetic-speech claims. Settlements, takedowns and industry pressure (including from guilds and rights intermediaries) are shaping practice even where statutory law lags. Internationally, the EU’s AI Act and pending national measures (including proposals in Denmark to extend copyright-style protections to an individual’s physical features and voice) show that states are converging on the proposition that synthetic replicas require bespoke legal treatment.
CORE LEGAL GAPS & POLICY CHALLENGES
Four structural deficits make ad hoc litigation insufficient:
Temporal Uncertainty: Do personality rights persist? If so, for how long? Some US states have expressly extended post-mortem publicity rights; elsewhere, courts (e.g. Lugosi) have denied descendibility, leaving heirs and estates in limbo. That uncertainty depresses market transactions and invites opportunistic exploitation.
Attribution and Provenance: Synthetic outputs can be produced by entirely opaque pipelines. Courts and platforms currently lack standardised metadata or audit trails for models, training data, and transformation steps, making notice and takedown, plus evidentiary attribution, expensive and slow.
Jurisdictional Friction: Models learned in one country, hosted in a second country and applied globally raise difficult questions of applicable law, discovery and enforcement, especially when interim relief is needed to prevent irreparable reputational harm.
Balancing Expression and Estate Interests: A property right that is too strong can chill parody, scholarship and historical re-creation; too weak a rule would commodify personalities without remedy. The difficult task is to develop calibrated, contextual limits (e.g. non-commercial exceptions and expressive carve-outs, paired with safeguards).
A PRACTICAL REGULATORY FRAMEWORK: PRINCIPLES & PROPOSALS
Below is a blueprint that aims to be politically neutral, administrable and globally interoperable:
Statutory Post-Mortem Personality Right: Create a statutory “post-mortem personality right” that covers voice, image, name and distinctive performance style, vesting in a designated estate representative and registrable in a central public registry. The right should be time-bounded (e.g. 25-50 years) to balance the estate’s economic interests with public domain considerations, and should allow limited expressive exceptions (scholarship, satire) with narrowly drawn safe harbours.
Rationale: registration reduces search costs and facilitates notice; a time-bound right avoids perpetual property claims that choke cultural reuse. California’s AB 1836/2602 model, which pairs consent requirements with contractual protections, is instructive.
Digital-Will and Pre-Consent Mechanism: Allow natural persons to record, in a recognised “digital will,” granular preferences for future AI use of their persona (consent; limited uses; prohibition; designated estate representative). In the absence of instructions, default presumptions allocate control to the estate representative, who must act in the deceased’s best interests and in line with cultural and dignity considerations.
Rationale: foresighted consent is the cleanest solution; a digital will prevents litigated guesswork and respects autonomy.
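The digital-will mechanism described above is, at bottom, a small data structure: a set of granular use preferences plus a default presumption. The following sketch is purely illustrative, assuming hypothetical category names ("PERMIT"/"PROHIBIT") and use-type labels; no enacted statute specifies this schema.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalWill:
    """Hypothetical record of per-use consent, as the essay proposes."""
    person: str
    estate_representative: str
    default_rule: str = "PROHIBIT"  # presumption when no specific entry exists
    use_preferences: dict = field(default_factory=dict)  # use type -> rule

    def is_permitted(self, use_type: str) -> bool:
        # Look up the specific use; fall back to the default presumption.
        return self.use_preferences.get(use_type, self.default_rule) == "PERMIT"

will = DigitalWill(
    person="A. Performer",
    estate_representative="Estate Trustee LLP",
    use_preferences={"documentary": "PERMIT", "advertising": "PROHIBIT"},
)
print(will.is_permitted("documentary"))  # specific grant -> True
print(will.is_permitted("advertising"))  # specific prohibition -> False
print(will.is_permitted("video_game"))   # no entry: default presumption -> False
```

The design choice worth noting is the default rule: the essay's proposal that silence allocates control to the estate representative maps naturally onto a restrictive fallback, so that unanticipated uses require an affirmative decision rather than slipping through.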
Mandatory Provenance and Labelling: Require platforms and distributors to deploy standardised provenance metadata (model ID, training data provenance summary, creator/processor identity) and to apply clear, persistent labels when content reproduces a real person’s appearance or voice (live label + machine-readable provenance token).
Rationale: Labels empower consumers and enable regulators to triage takedown requests and to attribute liability. WIPO and industry commentaries have urged similar measures to address evidentiary opacity.
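To make the provenance proposal concrete, a "machine-readable provenance token" could be as simple as a signed record binding the disclosure fields above to a hash of the specific output file. The sketch below is an assumption-laden illustration: the field names (model_id, training_data_summary, processor_identity) and the flat JSON layout are invented here, not drawn from the AI Act or any industry standard.

```python
import hashlib
import json

def make_provenance_token(content: bytes, model_id: str,
                          training_data_summary: str,
                          processor_identity: str,
                          depicts_real_person: bool) -> dict:
    """Bundle the proposed disclosure fields with a content hash so the
    label travels with (and can be verified against) a specific file."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model_id": model_id,
        "training_data_summary": training_data_summary,
        "processor_identity": processor_identity,
        # This flag is what would trigger the persistent, visible label.
        "depicts_real_person": depicts_real_person,
    }

token = make_provenance_token(
    b"<synthetic audio bytes>",
    model_id="voice-model-v2",
    training_data_summary="licensed studio recordings, 2010-2020",
    processor_identity="Example Media Ltd",
    depicts_real_person=True,
)
print(json.dumps(token, indent=2))
```

Binding the metadata to a content hash matters for the triage function the essay describes: a platform receiving a takedown notice can check whether the file in question actually matches the token, rather than relying on self-reported labels alone.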
Platform Duties and a Modified Safe Harbour: Preserve intermediary protection for neutral hosts, but condition the safe harbour on:
- Reasonable provenance-disclosure practices,
- Expedited verified-notice takedown for unauthorised replicas,
- Preservation of evidence pending litigation.

Repeat or deliberate facilitators of the monetisation of unauthorised replicas should lose safe-harbour protection.
Rationale: platforms must be part of the solution without becoming default insurers of all content.
Remedies (Civil Relief and Narrow Criminal Backstops): Provide statutory civil remedies (injunctions, statutory damages scaled to commercialisation, disgorgement) for estate claims; reserve criminal liability for aggravated cases involving fraud, impersonation causing financial harm, or targeted reputational attack. Promote quicker ex parte relief, subject to rigorous post-relief review to prevent censorship, as an alternative to drawn-out litigation.
CASE STUDIES: WHAT THE LAW IS TEACHING US
Legislative clarity can be seen in California’s statutes (AB 1836 & AB 2602), which define “digital replica,” consent thresholds, and performer contract protections. These bills show how proactive statutory design can set industry standards and avoid litigation war rooms.
Interim relief from the Bombay High Court (Arijit Singh) demonstrates that courts can act swiftly to safeguard personality attributes under established doctrines, but judicial patchwork cannot replace tailored statutes that offer predictability and registration. The Indian orders show strong judicial support for protective measures while underscoring the pressing need for legislative policy[8] (cross-border discovery tools, registration).
The George Carlin estate litigation underscores the tangled mix of copyright, moral rights and publicity claims when AI produces “new” works in a deceased artist’s voice; settlements and removal orders evidence practical market limits on such experiments. The case demonstrates that litigation can deter bad actors but cannot scale as a systemic remedy.
INTERNATIONAL COORDINATION: WHY IT MATTERS
Synthetic replicas cross borders effortlessly. The EU AI Act’s treatment of deepfakes[9] and its transparency obligations signal an emergent regulatory floor in Europe that other jurisdictions can emulate. Parallel initiatives in the U.S. (NO FAKES; TAKE IT DOWN) and national policy experiments (Denmark’s proposals[10] to recognise likeness interests) suggest a window of convergence, one that India should not miss. Internationally coordinated model rules (through WIPO or the Hague Conference) could standardise definitions (what counts as a “digital replica”), registrar models, and cross-border enforcement procedures.
COMMENTARY: ETHICAL AND DOCTRINAL REFLECTIONS
Three normative notes for scholars and drafters:
Dignity Is Not Only a Sentiment: post-mortem replicas affect surviving relatives, cultural memory and democratic discourse. Legal regimes should foreground dignity and informed consent as distinct values, not only economic interests.
Technical Fixes Should Not Be Fetishised: provenance metadata is important but insufficient. Since small estates may lack the resources to police misuse, low-cost private enforcement mechanisms (statutory damages, platform-level dispute portals) are crucial. Litigation and redress must remain user-centric.
Preserve Expressive Space: historical recreation, scholarship, parody, and biography should all be protected; to prevent stifling free speech, the statutory design should include administrative review procedures and limited exceptions.
CONCLUSION
AI-synthesised post-mortem personae will proliferate rapidly. Left unchecked, they will produce reputational harms, commercial misappropriation and cultural distortion; regulated properly, they can enable ethical artistic reuse and preserve legacies under terms that respect the deceased’s autonomy and the public’s interest. The immediate priorities for a workable reform package are clear:
- Legislate a registrable, time-bounded post-mortem personality right.
- Enable digital wills and estate registration.
- Require provenance and mandatory labelling for synthetic content.
- Condition platform safe-harbours on due diligence and swift verified takedown procedures.

These reforms, coupled with international coordination and proportionate remedies, will make the law fit for our age of synthetic memory. The legal community must act now: the technology is already doing tomorrow’s copying today.
Author’s Name: Kanishika Talwar (St. Soldier College of Law, Jalandhar)
[1] Zacchini v Scripps-Howard Broadcasting Co [1977] 433 US 562
[2] Lugosi v Universal Pictures [1979] 25 Cal 3d 813
[3] Lucille M White, ‘California Enacts a Suite of New AI and Digital Replica Laws’ (Manatt, 25 September 2024) <https://www.manatt.com/insights/newsletters/client-alert/california-enacts-a-host-of-new-ai-and-digital-rep> accessed 30 October 2025
[4] Ibid
[5] Arijit Singh v Codible Ventures LLP (2024) SCC OnLine Bom 2445
[6] Simranjeet, ‘AI tools infringe individual’s right to control and protect their own likeness/voice; Bombay HC grants ad-interim injunction in favour of Arijit Singh to protect his personality rights’ SCC Online Times (2 August 2024) <https://www.scconline.com/blog/post/2024/08/02/bomhc-grants-ad-interim-injunction-to-arijit-singh-to-protect-his-personality-rights/> accessed 30 October 2025
[7] Andrew Dalton, ‘George Carlin estate sues over fake comedy special purportedly generated by AI’ AP News (26 January 2024) <https://apnews.com/article/george-carlin-artificial-intelligence-special-lawsuit-39d64f728f7a6a621f25d3f4789acadd> accessed 30 October 2025
[8] Antonios Baris, ‘Publicity rights in the AI era: Key takeaways from artist Arijit Singh’s recent legal victory in India’ (IPKat, 31 August 2024) <https://ipkitten.blogspot.com/2024/08/publicity-rights-in-ai-era-key.html> accessed 30 October 2025
[9] ‘AI Act enters into force’ (European Commission, 01 August 2024) <https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en> accessed 30 October 2025
[10] Miranda Bryant, ‘Denmark to tackle deepfakes by giving people copyright to their own features’ The Guardian (27 June 2025) <https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence> accessed 10 October 2025


