Our whitepaper, "Navigating High-Risk AI Systems Under the EU AI Act, 2024," delivers strategic insights on aligning your governance frameworks, from data quality and human oversight to conformity assessments and post-market monitoring.
NAVIGATING HIGH-RISK AI SYSTEMS UNDER THE EU AI ACT, 2024: AN INFORMATIVE GUIDE FOR BUSINESSES
Whitepaper, June 2025
All rights reserved by Tsaaro Consulting | www.tsaaro.com
CONTENTS
1. INTRODUCTION
2. IMPLEMENTING A RISK AND QUALITY MANAGEMENT SYSTEM (RQMS)
3. ENSURING DATA AND MODEL GOVERNANCE
4. HUMAN OVERSIGHT AND TECHNICAL ROBUSTNESS
5. DOCUMENTATION, TRANSPARENCY, AND INSTRUCTIONS OF USE
6. CONFORMITY ASSESSMENT AND CE MARKING
7. POST-MARKET MONITORING, INCIDENT REPORTING, AND CONTINUOUS COMPLIANCE
8. CONCLUSION
1. INTRODUCTION

The European Union's Artificial Intelligence Act (hereinafter referred to as the "EU AI Act"), enacted on 1 August 2024, stands as a pioneering global framework for regulating artificial intelligence and strengthens the EU's role as a leader in responsible AI governance. The Act aims to "foster responsible artificial intelligence development and deployment in the EU", ensuring AI systems are safe, transparent, and aligned with fundamental rights. As of May 2025, the regulatory landscape is actively evolving: prohibitions on unacceptable AI practices have been in effect since 2 February 2025, and rules for general-purpose AI models apply from 2 August 2025. The full scope of obligations for high-risk AI systems will take effect by 2 August 2026, creating an urgent need for businesses to prepare. High-risk AI systems, as outlined in Annex III, include applications such as biometrics, critical infrastructure, and law enforcement, which pose significant risks to the health, safety, or fundamental rights of individuals. Non-compliance with the Act carries severe consequences, including fines of up to €35 million or 7% of global annual turnover, as well as exclusion from the EU market and reputational damage. For organizations developing or deploying AI systems, proactive compliance will enhance organizational credibility, foster innovation within a regulated framework, and position them as leaders in the ethical AI market. This whitepaper is designed for business leaders, compliance officers, AI product stakeholders, and legal professionals seeking to understand the classification and regulatory landscape of high-risk AI systems under the EU AI Act. It offers a clear and structured overview of the key concepts, obligations, and governance principles set out in the regulation.
By enhancing awareness of these regulatory expectations, this whitepaper aims to empower organizations to make informed decisions and foster responsible and trustworthy AI practices.

1.1 OVERVIEW OF THE EU AI ACT

The EU AI Act, published on 12 July 2024, is a landmark regulation governing AI systems across the European Union. It sets out four risk levels for AI systems: unacceptable risk, high risk, limited risk, and minimal risk. The Act aims to ensure safety, transparency, and respect for individuals' fundamental rights while fostering innovation. Compliance is critical for businesses to avoid penalties and maintain market access.
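For technical teams, the four-tier structure above can be pictured as a simple lookup. The sketch below is illustrative only: the example use-case assignments are our assumptions for the sake of the sketch, and real classification requires legal analysis of Article 5, Article 6, and Annex III, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels set out by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # Annex III use cases and regulated products
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping only -- not a legal determination.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case;
    default to minimal risk when the use case is not listed."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

A defaulting lookup like this is useful for triage dashboards, but any "unknown means minimal" assumption must be replaced by a documented legal assessment in practice.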
1.2 WHAT QUALIFIES AS A "HIGH-RISK" AI SYSTEM (ARTICLE 6(2) READ WITH ANNEX III)

High-risk AI systems are defined under Article 6(2) read with Annex III as those posing significant risks to the health, safety, or fundamental rights of individuals. Annex III lists specific use cases, including biometrics, critical infrastructure, education and vocational training, employment, access to essential services, law enforcement, migration and border control, and the administration of justice. For instance, an AI system for remote biometric identification in public spaces is high-risk due to its potential impact on privacy. Such systems can reduce people's ability to stay anonymous in public, enable mass surveillance, and unfairly target certain groups, creating serious risks to the basic rights and personal freedoms of individuals.

1.3 WHY BUSINESSES NEED TO ACT NOW
The EU AI Act introduces a comprehensive regulatory framework for artificial intelligence across the EU. Businesses must act now because of the significant legal, financial, and reputational consequences of non-compliance.

1.4 KEY PLAYERS RESPONSIBLE: PROVIDERS, DEPLOYERS, IMPORTERS, DISTRIBUTORS

The EU AI Act introduces a comprehensive governance structure that assigns specific roles and responsibilities to the various actors involved in the lifecycle of an AI system. The key actors are:

1. 'Provider' means a natural or legal person, public authority, agency or other body that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
2. 'Deployer' means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal activity.
3. 'Importer' means a natural or legal person located or established in the Union that places on the market an AI system from outside the EU.
4. 'Distributor' means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

These roles are essential for ensuring accountability, compliance, and transparency across the AI value chain.
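The four definitions above follow a rough decision order: who developed the system, who placed it on the EU market, and who merely uses or resells it. As a minimal sketch only (the function name, parameters, and branching are our simplifications, not the Article 3 tests), that order might look like:

```python
def classify_actor(develops: bool,
                   places_on_eu_market: bool,
                   system_from_outside_eu: bool,
                   uses_under_own_authority: bool) -> str:
    """Simplified decision order for the four EU AI Act roles.
    Illustrative only: a real assessment must apply the full
    Article 3 definitions, and one organization can hold
    several roles at once for different systems."""
    if develops and places_on_eu_market:
        return "provider"      # develops and markets under its own name
    if places_on_eu_market and system_from_outside_eu:
        return "importer"      # EU entity marketing a non-EU system
    if uses_under_own_authority:
        return "deployer"      # operates the system in its own processes
    return "distributor"       # makes the system available downstream
```

For example, a firm that builds a CV-screening tool and sells it in the EU is a provider, while the HR department that runs it is a deployer.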
2. IMPLEMENTING A RISK AND QUALITY MANAGEMENT SYSTEM (RQMS)

The EU AI Act establishes a rigorous framework for high-risk AI systems, mandating a risk management system and a quality management system (together, an RQMS) under Articles 9 and 17 respectively. These systems are pivotal to ensuring that AI systems are safe, compliant, and respectful of fundamental rights throughout their lifecycle.

2.1 CORE REQUIREMENTS UNDER ARTICLES 9 AND 17

ARTICLE 9: RISK MANAGEMENT SYSTEM (RMS)

Providers must implement a continuous RMS to identify, analyze, and mitigate risks to health, safety, or fundamental rights throughout the AI system's lifecycle. This includes:
- Assessing risks from intended use and foreseeable misuse.
- Evaluating risks based on post-market monitoring data.
- Implementing targeted mitigation measures.
- Ensuring residual risks are acceptable, with a focus on vulnerable groups such as minors.
- Conducting testing, including in real-world conditions where appropriate.

ARTICLE 17: QUALITY MANAGEMENT SYSTEM (QMS)

Providers of high-risk AI systems must have a QMS in place that ensures compliance. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions. The QMS shall cover, among other things:
- Regulatory compliance strategies;
- Design, development, and testing procedures;
- Data management for training, validation, and testing;
- Risk management and post-market monitoring;
- Incident reporting and communication with authorities;
- Resource management and accountability frameworks.
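In practice, the Article 9 steps above translate into a living risk register: each identified risk is recorded with its source, the mitigation applied, and a judgment on whether the residual risk is acceptable. The sketch below is a minimal illustration under our own assumptions (the field names and the release check are not prescribed by the Act); a real RMS also needs versioning, review dates, and sign-off.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk in the Article 9 risk management file."""
    hazard: str                # e.g. biased output against a protected group
    source: str                # "intended use" or "foreseeable misuse"
    mitigation: str            # targeted measure applied
    residual_acceptable: bool  # judged after mitigation is in place

@dataclass
class RiskRegister:
    """Continuous record of risks across the system's lifecycle."""
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def release_ready(self) -> bool:
        """All residual risks must be judged acceptable before
        the system is placed on the market or kept in service."""
        return all(e.residual_acceptable for e in self.entries)
```

Because post-market monitoring feeds back into the RMS, new entries are added after deployment too, and a single unacceptable residual risk should block continued release until it is mitigated.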