Overview: The Dawn of the 'Identity Theft' Era in Generative AI

On March 12, 2026, the tech world was rocked by a landmark legal filing that could redefine the boundaries of intellectual property and digital persona in the age of artificial intelligence. Grammarly, the ubiquitous AI-powered writing assistant, became the target of a high-profile class-action lawsuit alleging what the plaintiffs describe as "systemic identity theft." The dispute centers on Grammarly’s "Expert Review" feature—a sophisticated AI tool designed to mimic the stylistic nuances, logical frameworks, and professional personas of world-class writers and editors without their explicit consent.

Led by renowned investigative journalist and author Julia Angwin, the lawsuit represents a significant escalation in the ongoing conflict between generative AI developers and the creative class. While previous legal battles (such as the authors' suits against OpenAI and Getty Images' case against Stability AI) focused primarily on copyright infringement and the use of datasets for training, the Grammarly case introduces a more personal and existential grievance: the unauthorized cloning of a professional identity. The plaintiffs argue that Grammarly did not merely "learn" from their writing; it sought to replace the humans themselves by offering an AI substitute that carries their specific professional "brand."

As of March 13, 2026, Grammarly has responded by abruptly disabling the controversial feature, stating it will "stop using AI to clone experts without permission." However, the legal and ethical fallout is only beginning. This case serves as a critical turning point for the industry, highlighting the shift from AI as a tool for assistance to AI as a mechanism for impersonation.

Details: The Case of Julia Angwin vs. Grammarly

The 'Expert Review' Feature and the Allegation of Cloning

According to the complaint, Grammarly’s "Expert Review" feature promised users the ability to have their documents polished by AI models specifically fine-tuned to replicate the expertise of top-tier professionals. Among these "experts" was an AI persona that allegedly mirrored the investigative rigor and stylistic hallmarks of Julia Angwin, a Pulitzer Prize-winning journalist known for her work at The Wall Street Journal and ProPublica.

The lawsuit alleges that Grammarly ingested decades of Angwin’s published works, interviews, and public appearances to create a "digital twin" capable of providing feedback that users would perceive as being of the same caliber as Angwin’s own editing. Crucially, the plaintiffs argue that this goes beyond fair use. Copyright law has long tolerated a student studying a master’s work to learn a style; the lawsuit contends that Grammarly instead commercialized a personality, creating a direct market substitute for the human expert’s services.

The Plaintiffs' Argument: Beyond Copyright

Julia Angwin and her legal team emphasize that this is a matter of "Right of Publicity" and "Moral Rights." They argue that a writer’s identity is their primary asset. By offering an "Angwin-style" review, Grammarly is essentially stealing the reputation and the "human capital" built over a lifetime of professional labor. This sentiment echoes the concerns raised in the broader AI investment landscape, where the pursuit of profit often overrides ethical considerations of data provenance—a trend we have seen in the shifting loyalties of major VCs between OpenAI and Anthropic.

Grammarly’s Rapid Retraction

In the wake of the filing, Grammarly issued a statement on March 13, 2026, confirming it would disable the feature. The company maintained that its intentions were to provide high-level educational value to its users but acknowledged that the "opt-out" nature of their training data collection was no longer sustainable in the current legal climate. This move mirrors the broader industry trend where companies are being forced to pivot toward "Personal AI" frameworks that respect individual boundaries, much like the $100 billion infrastructure bet by Meta to facilitate localized, personal superintelligence.

Discussion: The Pros and Cons of AI Expert Simulation

The Arguments in Favor (The Vision of Democratized Expertise)

  • Accessibility: AI expert cloning could democratize access to high-level mentorship. Most students and early-career professionals cannot afford a session with a Pulitzer Prize winner. An AI persona provides a "lite" version of that guidance for a fraction of the cost.
  • Efficiency and Scale: Human experts are a finite resource. AI clones can work 24/7, providing instantaneous feedback to millions of users simultaneously. This is the same logic driving the automation of executive decision-making (the 'AI CEO')—maximizing throughput where human bandwidth is the bottleneck.
  • Evolution of Learning: Proponents argue that AI models are simply the next generation of textbooks. Just as we learn from reading a book, an AI learns from processing data.

The Arguments Against (The Ethics of Identity)

  • Identity Theft and Brand Dilution: If an AI can mimic an expert's "voice" perfectly, the value of the original expert's brand is diminished. Why hire the real Julia Angwin for a keynote or a consultation if a $20/month subscription offers her "essence"?
  • Lack of Consent: The most egregious point in the Grammarly case is the lack of an "opt-in" mechanism. Creators found their likenesses being used to train tools that essentially compete against them.
  • The 'Hallucination' of Expertise: AI clones may mimic style without possessing substance. An AI that sounds like an investigative journalist might still provide factually incorrect or ethically dubious advice, potentially damaging the real person's reputation by association.
  • Architectural Risks: As we see with new models like Inception Labs' Mercury 2, which uses diffusion for reasoning, the ability of AI to simulate complex human thought processes is becoming frighteningly accurate, making the "cloning" feel even more invasive.
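The "opt-in" gap described above is, mechanically, a small piece of policy logic: train a persona only on authors who have explicitly granted permission, treating silence as refusal. A minimal sketch of such a consent gate follows; all author names, the registry structure, and the function are hypothetical illustrations, not details from the case or Grammarly's actual systems.

```python
# Hypothetical opt-in gate for persona fine-tuning.
# Opt-in semantics: an author is eligible only with an explicit "yes";
# an explicit "no" and no answer at all are both treated as refusal.

CONSENT_REGISTRY = {
    "author_a": True,   # explicitly opted in
    "author_b": False,  # explicitly opted out
    # authors absent from the registry never answered
}

def may_train_persona(author_id: str) -> bool:
    """Return True only for authors with an explicit opt-in on record."""
    return CONSENT_REGISTRY.get(author_id, False) is True

corpus_authors = ["author_a", "author_b", "author_c"]
eligible = [a for a in corpus_authors if may_train_persona(a)]
print(eligible)  # only explicitly consenting authors survive the filter
```

The design choice worth noting is the default in `CONSENT_REGISTRY.get(author_id, False)`: under the "opt-out" regime the complaint criticizes, that default would be `True`, silently sweeping in every author who never heard of the feature.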

Conclusion: A New Social Contract for the AI Era

The Grammarly class-action lawsuit is more than just a legal dispute over terms of service; it is a battle for the soul of professional identity in the 21st century. It forces us to ask: What belongs to the individual, and what belongs to the "commons" of data? As AI moves from generating generic text to simulating specific human personas, the legal framework must evolve from simple copyright to a more robust protection of "Digital Identity Rights."

Grammarly's decision to stop cloning experts without permission is a tactical retreat, but it signals a strategic shift for the entire AI industry. Companies can no longer treat the internet as a free buffet of human experience. The future of AI will likely involve a licensing model where experts are paid "royalties" for their digital twins, much like musicians are paid for streams. Without such a system, we risk a "creative winter" where the very professionals who provide the high-quality data AI needs are driven out of business by their own digital ghosts.
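The streaming analogy above implies a concrete accounting mechanism: log each use of an expert's digital twin and aggregate the events into per-expert payouts at a fixed per-use rate. The sketch below illustrates that idea only; the rate, names, and event log are invented for the example and do not reflect any real licensing terms.

```python
from collections import Counter

# Hypothetical per-use royalty paid to an expert each time their licensed
# "digital twin" reviews a document (loosely modeled on streaming payouts).
ROYALTY_PER_REVIEW = 0.004  # dollars per review; illustrative, not an industry figure

def monthly_payouts(review_log):
    """Aggregate a log of expert-id review events into dollar payouts."""
    counts = Counter(review_log)
    return {expert: round(n * ROYALTY_PER_REVIEW, 2) for expert, n in counts.items()}

# Hypothetical month of usage events.
log = ["angwin"] * 5000 + ["other_expert"] * 1200
print(monthly_payouts(log))  # {'angwin': 20.0, 'other_expert': 4.8}
```

Even this toy version makes the economics visible: at streaming-scale rates, an expert needs enormous usage volume before royalties rival a single human consultation, which is precisely the substitution worry the plaintiffs raise.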

As we navigate this transition, the industry must look toward models of transparency and consent. The outcome of this lawsuit will set the precedent for whether AI becomes a tool that empowers human experts or a parasite that consumes them.
