Meta Oversight Board Slams Parent Company Over Viral Ronaldo Deepfake

by shayaan

In short

  • Meta’s Oversight Board said the company should have removed a deepfake advertisement featuring Brazilian football player Ronaldo Nazário.
  • The post promoted an online game and misled viewers.
  • The decision highlights Meta’s inconsistent enforcement of its fraud policies amid growing concern about AI abuse.

Meta’s Oversight Board has ordered the removal of a Facebook post containing an AI-manipulated video of Brazilian football legend Ronaldo Nazário promoting an online game.

The board said the post violated Meta’s Community Standards on fraud and spam, and criticized the company for allowing the misleading video to remain online.

“Taking down the post is consistent with Meta’s Community Standards on fraud and spam. Meta should also have rejected the content as an advertisement, because its rules prohibit using the image of a famous person to bait people into engaging with an ad,” the Oversight Board said in a ruling Thursday.

The Oversight Board, an independent body that reviews content moderation decisions by Facebook parent Meta, has the authority to overturn enforcement decisions or reverse removals, and can issue recommendations to which the company must respond.

It was founded in 2020 to bring accountability and transparency to Meta’s enforcement actions.

The case underscores growing concern about AI-generated images that falsely depict people saying or doing things they never did.

Such images are increasingly being used for scams, fraud, and misinformation.

In this case, the video featured a poorly synchronized voice-over of Ronaldo Nazário encouraging users to play a game called Plinko via an app, falsely promising that users could earn more than they would from ordinary work in Brazil.


The post racked up more than 600,000 views before it was flagged.

But even though it was reported, the content was not prioritized for review and was not removed.

The user who reported it then appealed Meta’s decision, but the appeal was again not prioritized for human review. The user ultimately took the case to the board.

Deepfakes on the rise

This is not the first time Meta has faced criticism over its handling of celebrity deepfakes.

Last month, actress Jamie Lee Curtis confronted CEO Mark Zuckerberg on Instagram after her likeness was used in an AI-generated advertisement, prompting Meta to take down the ad while leaving the original post online.

The board found that only specialized teams at Meta could remove this type of content, pointing to widespread under-enforcement. It urged Meta to apply its anti-fraud policies more consistently across the platform.

The decision comes amid broader legislative momentum to curb the abuse of deepfakes.

In May, President Donald Trump signed the bipartisan Take It Down Act, which requires platforms to remove non-consensual, intimate, AI-generated images within 48 hours.

The law responds to a rise in deepfake pornography and image-based abuse affecting celebrities and minors.

Trump himself was targeted by a viral deepfake this week, which appeared to show him calling for dinosaurs to guard the southern US border.

Edited by Sebastian Sinclair
