Overview

On March 13, 2026, the landscape of generative AI and intellectual property rights faced a seismic shift as Grammarly, the ubiquitous writing assistance platform, became the target of a high-profile class action lawsuit. The legal action, spearheaded by professional writers and editors, alleges that Grammarly’s "Expert Review" feature—a premium service marketed as providing human-level editorial feedback—was actually powered by AI models trained specifically to "clone" the unique styles, logic, and professional expertise of human editors without their consent or compensation.

The controversy reached a boiling point when reports from WIRED and TechCrunch revealed that the plaintiffs are characterizing this practice as a form of "digital identity theft." They argue that Grammarly didn't just use their data to improve a general language model, but specifically harvested the nuanced, creative decision-making processes of professional editors to create a proprietary "AI Expert" that effectively replaces the very humans it was trained on. In a swift response to the mounting legal pressure and public outcry, Grammarly announced it would immediately disable the controversial feature, as reported by The Verge.

This case marks a significant evolution in AI litigation. While previous lawsuits against companies like OpenAI or Midjourney focused on the broad use of copyrighted text or images for training, the Grammarly lawsuit focuses on the appropriation of professional identity and specialized skill sets. It asks a fundamental question for the AI era: Does a company have the right to distill a human's professional essence into a software product? As we have seen in other sectors, such as the automation of leadership roles at Uber, the boundary between human decision-making and algorithmic execution is becoming increasingly blurred, and the legal system is finally beginning to catch up.

Details

The Allegations: From Human Feedback to AI Cloning

According to the complaint filed in a California federal court, Grammarly’s "Expert Review" feature was marketed to users as a way to get "expert-level" polish on their writing. For years, Grammarly employed a network of freelance editors and writers to provide manual reviews. However, the lawsuit alleges that behind the scenes, Grammarly was using these human interactions as a "gold mine" for training data. Every correction made, every stylistic suggestion offered, and every grammatical nuance explained by these professionals was meticulously logged and fed into a specialized machine learning pipeline.
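
To make the alleged pipeline concrete, the following is a minimal, hypothetical sketch of how a single human correction could be captured as a supervised training pair. The record fields, the prompt/completion format, and all names here are illustrative assumptions, not details drawn from the complaint or from Grammarly's actual systems.

```python
# Hypothetical sketch: logging a human editor's correction as a training pair.
# Every field and format below is an assumption for illustration only.
from dataclasses import dataclass
import json


@dataclass
class EditRecord:
    document_id: str   # which submission was reviewed
    editor_id: str     # which human expert produced the correction
    original: str      # the user's sentence before review
    revised: str       # the editor's rewrite
    rationale: str     # the explanation the editor gave for the change


def to_training_example(record: EditRecord) -> dict:
    """Convert a logged correction into a prompt/completion pair
    that a supervised fine-tuning job could consume."""
    return {
        "prompt": f"Edit the following text:\n{record.original}",
        "completion": f"{record.revised}\nRationale: {record.rationale}",
        "metadata": {"editor_id": record.editor_id},
    }


if __name__ == "__main__":
    record = EditRecord(
        document_id="doc-001",
        editor_id="editor-42",
        original="The results was significant.",
        revised="The results were significant.",
        rationale="Subject-verb agreement: 'results' is plural.",
    )
    print(json.dumps(to_training_example(record), indent=2))
```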

The plaintiffs, led by a prominent editorial consultant, claim that Grammarly’s AI began to exhibit "uncanny" similarities to their specific editorial voices. The lawsuit states that the AI wasn't just learning grammar; it was learning the judgment of specific individuals. "They didn't just build a better spellchecker; they built a digital version of me and sold it for $30 a month," one plaintiff remarked in the TechCrunch report. This transition from "AI-assisted tools" to "AI-cloned professionals" is the crux of the legal argument.

Grammarly’s Defense and Immediate Retreat

In the wake of the filing, Grammarly initially defended its practices, stating that its AI development adheres to industry standards for "quality improvement." However, as The Verge reported, the company quickly pivoted, announcing the suspension of the "Expert Review" feature. In an official statement, Grammarly claimed it would "stop using AI to clone experts without explicit permission," though it stopped short of admitting any legal wrongdoing. The company maintains that the feature was intended to provide accessible, high-quality editing to those who could not afford human services, but acknowledged that the "opt-out" mechanisms for the editors whose data was used were insufficient or non-existent.

The technical infrastructure required to perform such high-fidelity cloning of human expertise is massive. Just as Meta is investing $100 billion in AMD chips to fuel "personal superintelligence," companies like Grammarly have been quietly scaling their compute power to move beyond generic LLMs toward highly specialized, "persona-driven" AI. The lawsuit suggests that this specialization is exactly what makes the practice illegal, as it moves the AI from the realm of "fair use" into the realm of "misappropriation of likeness."
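
In principle, the "persona-driven" specialization the lawsuit describes amounts to filtering a pool of logged corrections down to a single editor and treating that slice as a fine-tuning dataset. The sketch below illustrates that step; the field names, the JSONL output format, and the editor identifiers are assumptions for illustration and do not reflect any real vendor's pipeline.

```python
# Minimal sketch of "persona-driven" specialization: keep one editor's
# corrections and write them out as a fine-tuning dataset. All names and
# formats are hypothetical.
import json
from typing import Iterable


def build_persona_dataset(records: Iterable[dict], editor_id: str, path: str) -> int:
    """Filter logged corrections to a single editor and write them as
    JSONL prompt/completion pairs; returns the number of examples written."""
    count = 0
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            if rec.get("editor_id") != editor_id:
                continue  # discard every other editor's work
            example = {
                "prompt": f"Edit this text in the assigned reviewer's style:\n{rec['original']}",
                "completion": rec["revised"],
            }
            f.write(json.dumps(example) + "\n")
            count += 1
    return count


if __name__ == "__main__":
    logged = [
        {"editor_id": "editor-42", "original": "Its a good idea.", "revised": "It's a good idea."},
        {"editor_id": "editor-07", "original": "Me and him agree.", "revised": "He and I agree."},
    ]
    n = build_persona_dataset(logged, editor_id="editor-42", path="editor-42.jsonl")
    print(f"wrote {n} examples")
```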

The Legal Precedent: Intellectual Property vs. Skill Sets

Legal experts suggest that this case could redefine "transformative use" under copyright law. If a model is trained on a million books to learn how to speak, it is often seen as transformative. However, if a model is trained on one thousand edits by one specific person to learn how to edit like that person, it may violate the right of publicity or breach the contracts under which those edits were originally produced. This distinction is critical as the industry shifts toward more efficient architectures, such as Inception Labs’ Mercury 2, which allow for faster and more precise reasoning, potentially making the cloning of human logic even more effective and common.

Discussion (Pros/Cons)

Pros of AI-Driven Expertise

  • Democratization of Quality: AI models that mimic expert editors allow students, non-native speakers, and small business owners to access high-level editorial feedback that would otherwise be cost-prohibitive.
  • Scalability and Speed: Unlike human editors, an AI "Expert Review" can process thousands of documents simultaneously, providing near-instant feedback 24/7.
  • Consistency: AI models don't suffer from fatigue or mood swings, ensuring that the "editorial voice" applied to a document remains consistent throughout a 500-page manuscript.

Cons and Ethical Concerns

  • Economic Cannibalization: By creating an AI that mimics human experts, companies are essentially using the labor of their contractors to build a product that will eventually make those contractors obsolete. This creates a "parasitic" relationship between AI developers and creative professionals.
  • The Erosion of Identity: If a person’s professional "style" can be harvested and sold, the concept of individual expertise is devalued. This leads to a future where "being an expert" is merely a temporary state before being digitized.
  • Lack of Consent: The primary ethical failure in the Grammarly case is the lack of transparency. Editors believed they were providing a service to clients, not providing the blueprint for their own digital replacement.
  • Legal Volatility: As seen with the shifting loyalties of AI investors, the industry is in a state of flux. Aggressive data harvesting can lead to massive legal liabilities that threaten the stability of even established companies like Grammarly.

Conclusion

The Grammarly class action lawsuit is a watershed moment for the AI industry, signaling the end of the "Wild West" era of data scraping. For years, AI companies have operated under the assumption that any data they could touch was theirs to train on. The backlash against the "Expert Review" feature proves that when AI begins to infringe upon the very identity and livelihood of the people who provide its training data, the public and the courts will push back.

This case will likely force a major pivot in how enterprise AI is developed. We are moving toward a model where "Ethical Sourcing" of data will be as important as the algorithms themselves. Companies like those in the OpenAI Frontier Alliance are already beginning to prioritize transparent data partnerships to avoid the exact type of litigation Grammarly is currently facing.

Ultimately, the suspension of Grammarly's "Expert Review" serves as a warning: AI should be a tool that augments human capability, not a mirror that steals it. As we continue to monitor the fallout from this case, one thing is clear—the definition of "theft" in the 21st century now includes the unauthorized replication of the human mind's professional output.

References