By Erika Koutroumpa
ChatGPT set off the generative AI boom and quickly became one of the fastest-growing consumer applications in the world. While the chatbot is a controversial topic in its own right, the company behind it, OpenAI, has found itself in hot water after the advocacy group NOYB filed a privacy complaint. The reason? The AI allegedly provided users with incorrect information, possibly breaching EU privacy laws.
The core of this case concerns generative AI and the accuracy of the personal data it processes. The dispute arose when the complainant, a public figure, asked ChatGPT for their date of birth, and the application returned false information. According to NOYB, the complainant requested that the data be rectified or erased, but OpenAI dismissed the request as impossible, while also failing to disclose information about its data processing or its sources. Should the EU data protection authorities take up the complaint, an investigation into ChatGPT’s processing and fact-checking of personal data will follow. NOYB argues that generative AI systems must comply with existing European law, not the other way around: if accurate information about individuals cannot be produced transparently, then false information should not be produced in its place.
ChatGPT’s privacy policy varies according to the legislation of each jurisdiction. In the European Union, the GDPR has created a robust framework for the protection of individuals’ data. Under the Regulation, legal entities must obtain informed and explicit consent from individuals in order to collect, process, and use their data. It also requires that information about persons be accurate and that individuals have full access to the information stored about them and the source from which it was derived (Article 5 GDPR). Furthermore, Article 16 GDPR grants individuals the right to have inaccurate data about them rectified. OpenAI claims to comply with these regulations, as well as with the California Consumer Privacy Act; however, many users remain apprehensive about the safety of their data under ChatGPT.
Ensuring that OpenAI adheres to strict data protection policies sounds like the ideal scenario; however, this seems out of reach due to the complexity of the issue. According to ChatGPT’s privacy policy, the company is responsible for managing and processing users’ data and reserves the right to share it with third parties. The application, however, is black-box software: its code is not accessible, and only the response to a given input can be validated. Conducting meaningful audits is therefore all but impossible. A further issue is that, because the program was engineered to produce natural-sounding text by imitating human speech patterns, it is very difficult to determine the ownership of the underlying material, which could otherwise provide a basis for fact-checking.
This is not the first time generative AI has been implicated in clear data privacy violations. Multiple online publications have criticized the company’s privacy policy as “flimsy” and lacking transparency about how data is handled. The EU established a task force back in April of this year to enable member states to share information about ChatGPT violations with each other, but results are yet to be seen. For now, even in countries with robust legislation, enforcing privacy regulations in the online sphere remains the biggest hurdle.
References
- Reuters, “OpenAI’s ChatGPT targeted in Austria privacy complaint”. Available here
- Xiaodong Wu et al., “Unveiling security, privacy, and ethical concerns of ChatGPT”, Journal of Information and Intelligence, Vol. 2.
- NOYB, “ChatGPT provides false information about people, and OpenAI can’t correct it”. Available here
- BBC, “ChatGPT: Italy says OpenAI’s chatbot breaches data protection rules”. Available here
- GDPR, “Right to Rectification”. Available here