Perplexity did not respond to requests for comment.
In an emailed statement to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, who understand that integrity and creativity are essential if we are to realize the potential of artificial intelligence,” the statement said. “Perplexity is not the only AI company to abuse intellectual property and it is not the only AI company that we will pursue vigorously and rigorously. We have made it clear that we prefer to woo rather than sue, but, for the sake of our journalists, our writers and our business, we must challenge the kleptocracy of content.”
OpenAI, however, faces its own accusations of brand dilution. In The New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation by diluting its brand. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times maintains that its actual reporting has debunked claims about the health benefits of moderate drinking.
“Copying news articles to operate substitutive commercial generative AI products is unlawful, as we have made clear in our letters to Perplexity and in our litigation against Microsoft and OpenAI,” said Charlie Stadtlander, director of external communications at the New York Times. “We welcome this lawsuit filed by Dow Jones and the New York Post as an important step in ensuring that publishers’ content is protected against this type of misappropriation.”
Some legal experts are unsure whether the false designation of origin and trademark dilution claims will succeed. Intellectual property lawyer Vincent Allen, a partner at Carstens, Allen & Gourley, believes the copyright infringement allegations in this lawsuit are stronger, and says he would be “surprised” if the false designation of origin claim holds up. Allen and James Grimmelmann, a professor of digital and internet law at Cornell University, believe that a landmark trademark case, Dastar v. Twentieth Century Fox Film Corp., could thwart this line of attack. (In that decision, which arose from a dispute over vintage World War II footage, the Supreme Court ruled that “origin” in trademark law does not refer to authorship but is limited to tangible goods, such as a counterfeit handbag, rather than copied creative works like films.) Grimmelmann is also skeptical that the trademark dilution claim holds water. “Dilution involves using a trademark on one’s own goods or services in a way that undermines the distinctive character of a famous trademark. I…I just don’t see that here,” he said.
If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “tremendous difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.
“It’s absolutely impossible to guarantee that a language model won’t hallucinate,” says Sag. The way language models work, by predicting which words sound correct in response to prompts, he says, is always a kind of hallucination; sometimes it just sounds more plausible than others.
“We only call it a hallucination if it doesn’t match our reality, but the process is exactly the same whether we like the outcome or not.”