The author analyses the obligations imposed on entities responsible for high-risk AI systems under Regulation (EU) 2024/1689 (the AI Act), aiming to systematise these duties and identify the legal consequences of their breach. The findings show that practical application of the Regulation demands in-depth knowledge owing to its complex structure and numerous cross-references. Although the obligations are formally grouped in a single section, their broad scope and high level of detail raise doubts about their practical effectiveness. The author also considers whether the legal protection measures provided under the AI Act, in their current form, are capable of ensuring an appropriate level of legal security. It is suggested that some of these mechanisms may require modification or expansion to address effectively the risks associated with high-risk AI systems and to better protect individual rights.
Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.