The EU Commission recently presented its draft of the ‘Artificial Intelligence Act’, a regulation intended to govern systems that use artificial intelligence. The underlying goal of steering AI’s development in the right direction and building long-term trust is widely welcomed. At the same time, there is a risk that competitiveness will suffer and that developing AI systems will become cumbersome and cost-intensive.
Obligations for providers and users
The regulation’s main focus is on AI systems in high-risk areas such as education, human resource management, critical infrastructure and medicine, among others. In the future, compliant AI systems in these areas must carry a CE marking, and providers will largely be responsible for carrying out the conformity assessments themselves. In certain particularly high-risk areas, the assessment must be performed by an external body.
Providers must meet extensive compliance requirements, including establishing risk and quality management, maintaining technical documentation and logs, and ensuring an adequate level of accuracy, robustness and cybersecurity. While conformity assessment is mandatory only for high-risk applications, the regulation encourages assessing all other applications as well.
Users (except in purely private use) are obliged to monitor AI systems, to notify providers of any risks or malfunctions, and to suspend operation of a system if necessary.
Need for support in compliance testing
Overall, the Artificial Intelligence Act offers a sound starting point for setting global standards and advancing a secure and transparent digital internal market within the EU. Avoiding a “patchwork” of national regulations is one of its main objectives. However, the extensive obligations may create uncertainty, overburden companies, and reduce or even prevent the value that AI can add. Tangible support and funding programmes are therefore crucial to avoid slowing down companies’ capacity to innovate and the commercialization of research results.