
Meta Moves to Dismiss Porn-Piracy Suit, Calls AI-Training Claims ‘Nonsensical’



In short

  • Meta has asked a US court to dismiss a lawsuit by Strike 3 Holdings, accusing it of using corporate and hidden IPs to torrent nearly 2,400 adult films since 2018 for AI development.
  • Meta says the small number of alleged downloads points to “personal use” by individuals, not AI training.
  • The company denies using any adult content in its models, calling the AI-training theory "supposition and innuendo."

Meta has asked a US court to drop a lawsuit accusing it of illegally downloading and distributing thousands of pornographic videos to train its artificial intelligence systems.

Filed Monday in the United States District Court for the Northern District of California, the motion to dismiss argues that there is no evidence that Meta’s AI models contain or were trained on copyrighted material, calling the allegations “nonsensical and unsupported.”

The motion was first reported by Ars Technica on Thursday; Meta issued a direct denial, saying the claims are "false."

The plaintiffs have made “a great deal of effort to weave together this narrative with supposition and innuendo, but their claims are neither compelling nor supported by well-supported facts,” the motion says.

The original complaint, filed in July by Strike 3 Holdings, alleged that Meta used corporate and hidden IP addresses to torrent nearly 2,400 adult films since 2018 as part of a broader effort to build multimodal AI systems.

Strike 3 Holdings is a Miami-based adult film holding company that distributes content under brands such as Vixen, Blacked and Tushy, among others.

Decrypt has reached out to Meta and Strike 3 Holdings, as well as their respective legal advisors, and will update this article if they respond.

Scale and pattern

Meta’s motion argues that the scale and pattern of the alleged downloads contradict Strike 3’s AI training theory.

Over seven years, only 157 of the Strike 3 movies were allegedly downloaded using Meta’s corporate IP addresses, an average of about 22 per year across 47 different addresses.

Meta attorney Angela L. Dunning characterized this as sparse, uncoordinated activity by individuals downloading for "personal use," not, as Strike 3 alleges, part of an effort by the tech giant to collect data for AI training.

The motion also rejects Strike 3's claim that Meta used more than 2,500 "unknown" third-party IP addresses, arguing that Strike 3 never verified who owned those addresses and instead relied on "correlations."

One of the IP ranges is allegedly registered to a Hawaiian non-profit with no connection to Meta, while others have no identified owner.

Meta also argues that there is no evidence it knew of, or could have stopped, the alleged downloads, adding that it gained nothing from them and that monitoring every file on its global network would be neither feasible nor required by law.

Training safely

While Meta's defense may seem "unusual" at first, it could carry weight because the core claim rests on the assertion that "the material was not used in any model training," Dermot McGrath, co-founder of venture capital firm Ryze Labs, told Decrypt.

"If Meta admitted the data was used in its models, it would have to argue fair use, justify the inclusion of pirated content, and open itself to discovery of its internal training and audit systems," said McGrath, adding that rather than defending how the data was supposedly used, Meta denied "it was ever used."

But if the courts accept such a defense as valid, it could open "a massive loophole," McGrath said. It could "effectively undermine copyright protection for AI training data cases," so that future cases would require "stronger evidence of corporate direction, which companies would simply get better at hiding."

However, there are legitimate reasons to process explicit material, such as developing safety or moderation tools.

"Most big AI companies have 'red teams' whose job is to probe models for weaknesses using adversarial prompts and try to get the AI to generate explicit, dangerous, or prohibited content," McGrath said. "To build effective safety filters, you need to train those filters on examples of what you're trying to block."
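McGrath's point, that a filter only learns what "blocked" means from labeled examples of blocked content, can be sketched with a toy classifier. The Naive Bayes approach and every example prompt below are illustrative assumptions for this sketch, not anything from Meta's or any AI lab's actual pipeline:

```python
# Purely illustrative: a toy Naive Bayes text filter trained on labeled
# examples of allowed vs. blocked prompts. The data is invented.
import math
from collections import Counter

def train(examples):
    """examples: iterable of (text, label) pairs, label in {'allowed', 'blocked'}."""
    word_counts = {"allowed": Counter(), "blocked": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Pick the label with the highest log-probability under add-one smoothing."""
    vocab = set(word_counts["allowed"]) | set(word_counts["blocked"])
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)  # log prior
        total_words = sum(counts.values())
        for word in text.lower().split():
            # add-one smoothing so unseen words don't zero out the probability
            score += math.log((counts[word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("how do I bake fresh bread", "allowed"),
    ("tips for training a puppy", "allowed"),
    ("generate explicit adult content", "blocked"),
    ("write graphic explicit material", "blocked"),
]
word_counts, doc_counts = train(examples)
print(classify("produce explicit material", word_counts, doc_counts))  # prints "blocked"
```

A production filter would use far larger datasets and learned models rather than word counts, but the principle McGrath describes is the same: without examples of prohibited content in the training set, the filter has nothing to learn "blocked" from.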
