By Dimitra Gatzelaki
Around the end of May this year, Instagram and Facebook users in Europe began receiving notifications announcing Meta’s plans to use their public posts to train its Artificial Intelligence (AI) models. The notice, covering a policy set to take effect on June 26th, simply informed users that the parent company would use “public information they have shared on Meta’s products and services to develop and improve AI at Meta within their respective privacy laws”. This covers content like pictures, posts, and captions shared publicly by users over 18, but not content shared on private profiles.
From a legal standpoint, the move is valid on its face: the General Data Protection Regulation (GDPR) allows the processing of personal data when it rests on a “legitimate interest”, which Meta claims is covered by the “development and improvement” of its AI. However, under EU data protection law, users are given the chance to opt out by exercising their “right to object”. This involves filling out a form in which the user explains how the data processing would affect them; in short, they must justify their right to keep their data safe. Of course, the form may have been designed deliberately to dissuade users and thus let Meta retain the use of their data. On this point, Noyb (European Center for Digital Rights) co-founder Max Schrems has called Meta’s shifting of responsibility onto the user “completely absurd” and described the opt-out form as “misleading”.
And yet, this opt-out mechanism is limited to the EU and the UK. In other regions, such as Australia or Spanish-speaking Latin America, users don’t have the “luxury” of excluding their data from AI training. Perhaps this regional disparity could become the trigger for global rules on how platforms handle user data. Until then, users in these regions, especially artists, will need to accept that sharing their work on platforms like Instagram means it will be used to train AI. There’s a paradox here: today, artists rely on social media to showcase their work, yet doing so feeds the potential automation of the very creative process they depend on. For them, it may be a lose-lose situation.
Then there is the question of what Meta hopes to achieve by feeding its AI models public user data. Last September, Mark Zuckerberg announced plans to bring a series of new chatbots to Messenger, ones with “personality” and expertise in specific subjects. Whether users want cooking advice, a language tutor, or even ways to salvage a marriage, Meta’s chatbot army will have them covered, drawing on its vast stores of user data.
After the mild flop of its celebrity-lookalike AI chatbots, Meta recently announced yet another tool, AI Studio. This time, the chatbots will be based on the users themselves, allowing them to create AI versions of themselves (the feature is currently available in the US). But it seems that with every novel technology that emerges, dozens of problems sprout in its wake. Just where will that take us?
For social media, this shift in how public user data is used boils down to one thing: we’re only just becoming fully aware of the true cost of what we’ve always thought of as “free” platforms. Creating a virtual AI chatbot of oneself is undeniably tempting. But who can promise that your digital self won’t be misused for purposes you never agreed to? As long as questions like these persist, and as long as companies like Meta continue to prioritize profit over user safety, their technological novelties, however innovative, will continue to be met with backlash.
References
- Plans to use Facebook and Instagram posts to train AI criticized. BBC. Available here
- Meta announces AI chatbots with ‘personality’. BBC. Available here
- Meta will soon use your public posts to train its AI. Can you prevent it? Euronews. Available here
- Meta is training its AI with public Instagram posts. Artists in Latin America can’t opt out. Rest of World. Available here
- Meta moves on from its celebrity lookalike AI chatbots. The Verge. Available here