The aim of this article is to analyze the risks to the protection of personal data arising from its processing by AI systems and to identify solutions that minimize these risks. The study uses a dogmatic-legal method, based on an analysis of existing regulations and the literature on the subject. The results indicate significant gaps in the current regulations and the need to implement protective measures, such as data anonymization and algorithm audits. The findings can form the basis for further research and legislative action on data protection in AI systems.
Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Roczniki Nauk Prawnych · ISSN 1507-7896 | eISSN 2544-5227 | DOI: 10.18290/rnp
© Towarzystwo Naukowe KUL & Katolicki Uniwersytet Lubelski Jana Pawła II