Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has temporarily banned Meta from processing users’ personal data to train the company’s artificial intelligence (AI) algorithms.
The ANPD said it found “evidence of processing of personal data based on inadequate legal hypothesis, lack of transparency, limitation of the rights of data subjects, and risks to children and adolescents.”
The decision follows the social media giant’s update to its terms of service that allows it to use public content from Facebook, Messenger, and Instagram for AI training purposes.
A recent report published by Human Rights Watch found that LAION-5B, one of the largest image-text datasets used to train AI models, contained links to identifiable photos of Brazilian children, exposing them to malicious deepfakes and the risk of further exploitation and harm.
Brazil has about 102 million active Facebook users, making it one of Meta’s largest markets. The ANPD said the update violates the country’s General Personal Data Protection Law (LGPD) and poses “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects.”
Meta has five working days to comply with the order, or risk facing daily fines of 50,000 reais (approximately $8,808).
In a statement shared with the Associated Press, the company said the policy “complies with privacy laws and regulations in Brazil,” and that the ruling is “a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil.”
The social media firm has received similar pushback in the European Union (E.U.), forcing it to pause plans to train its AI models on data from users in the region without their explicit consent.
Last week, Meta’s president of global affairs, Nick Clegg, said that the E.U. was losing “fertile ground for innovation” by coming down too hard on tech companies.