Post by account_disabled on Mar 9, 2024 2:44:08 GMT -6
"Regardless of whether or not the regulation favors the so-called Brussels effect, a greater financial commitment by the EU is necessary in order to finance the material and human technological infrastructure needed to avoid being left behind." 19 January 2024 | Blog The Confideinitial | Artificial Intelligence, News. On December 8, 2023, the European institutions (Commission, Council and Parliament) finally reached a historic political agreement on the regulation of artificial intelligence (AI), a pioneering regulation that aspires to become a global standard for AI regulation in other jurisdictions. The final text is expected to be approved in early 2024 and will become fully applicable within two years, with some exceptions for specific provisions, such as the prohibitions on certain uses of AI, which will apply six months after entry into force. According to the details that have been revealed of the provisional agreement, the definition of an AI system appears to be the one already introduced by the European Parliament's amendments to the Commission's text, which is aligned with the approach proposed by the OECD. Likewise, systems used exclusively for military or defense purposes, systems used for research and innovation, and AI systems used for non-professional purposes by individuals fall outside the scope of the regulation.
Nor will it affect the powers of the Member States in matters of national security. The regulation establishes a risk-based approach to the use of AI systems, distinguishing between four categories. Unacceptable-risk systems are prohibited, as they are considered a clear threat to the fundamental rights and values of the European Union. This category includes, among others, social classification or "social scoring" systems that rate citizens based on their behavior and reputation, and AI systems for emotion recognition in workplaces and educational institutions. High-risk AI systems will be subject to obligations that they must comply with before and after being placed on the market: for example, the preparation of detailed documentation, traceability, human oversight and cybersecurity, as well as the implementation of quality and risk management systems. These are systems that can negatively affect fundamental rights or the safety of people if they are not used properly; they include AI systems related to the provision of critical infrastructure.
The vast majority of AI systems in use today fall into the category of minimal-risk systems, for which the regulation establishes no additional obligations, although it encourages providers to adopt voluntary codes of conduct to promote citizen trust. Finally, systems that present a limited risk will mainly be subject to transparency obligations: fundamentally, they must disclose that content was generated by AI so that users can make informed decisions about its further use. These obligations and responsibilities apply to the different agents in the value chain of AI systems, principally providers and users, the so-called deployers. The approach, in short, seeks to prevent the dilution of responsibility across the supply chain for this type of service. One of the most controversial points in the negotiations has been that of general-purpose AI (GPAI) systems. In the first versions, these would have been categorized merely as chatbots and, consequently, subject only to transparency requirements. However, the explosion of popular applications such as ChatGPT or Bard led legislators to reconsider how these powerful systems should be regulated. From this perspective, the regulation introduces specific rules focused not on the use, but on the power and capabilities of the AI systems and the models underlying them, which has sparked criticism from those who argue that this hinders technological competitiveness in the EU.
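The four-tier scheme described above can be summarized as a simple lookup table. The sketch below is purely illustrative: the tier names follow the article's wording, but the data structure and function names are hypothetical and do not come from the regulation itself.

```python
# Illustrative mapping of the EU AI Act's four risk tiers (as described in
# the text above) to whether a system is permitted and the obligations the
# article associates with each tier. Names here are hypothetical.
RISK_TIERS = {
    "unacceptable": {
        "allowed": False,
        "obligations": [],  # prohibited outright, e.g. social scoring
    },
    "high": {
        "allowed": True,
        "obligations": [
            "detailed technical documentation",
            "traceability",
            "human oversight",
            "cybersecurity",
            "quality and risk management systems",
        ],
    },
    "limited": {
        "allowed": True,
        "obligations": ["transparency: disclose AI-generated content"],
    },
    "minimal": {
        "allowed": True,
        "obligations": [],  # voluntary codes of conduct only
    },
}


def obligations_for(tier: str) -> list[str]:
    """Return the obligations listed for a given risk tier."""
    return RISK_TIERS[tier]["obligations"]
```

For example, `obligations_for("high")` returns the five pre- and post-market duties, while an unacceptable-risk system is simply not allowed on the market at all.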