“Trust your AI” – Trustworthiness in focus

The use of Artificial Intelligence (AI) transfers responsibility from humans to systems, which can create potential risks. For many companies, however, it is difficult to evaluate and control the risks associated with the use of AI. Work on the certification of AI applications, carried out in “Trustworthy AI” workshops together with Know-Center, Leftshift One, and SGS, aims to remedy this by offering approaches to risk assessment and risk mitigation.

What exactly is meant by “Trustworthy AI”?

Bias, unexplainable results (the “black box” problem), ethical issues, and a lack of robustness against adversarial attacks are well-known AI risks. Companies and AI developers therefore face the challenge of generating suitable training data and implementing safety precautions that verify the AI acts within defined boundaries. For companies that want to use AI applications, the important aspects are security, transparency, control measures, the possibility of human intervention in the application, and data protection.
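As a small, hedged illustration of such a safety precaution, the sketch below routes low-confidence model outputs to a human instead of acting automatically. The thresholds, the score_transaction placeholder, and the decision labels are assumptions for illustration only, not part of any product mentioned in this article.

```python
# Minimal, hypothetical sketch of an output-boundary check ("guardrail").
# Thresholds and the score_transaction placeholder are illustrative assumptions.

MIN_CONFIDENCE = 0.60    # assumed: below this the model abstains entirely
MAX_AUTO_APPROVE = 0.95  # assumed: below this a human confirms the decision


def score_transaction(features: dict) -> float:
    """Placeholder for a real model call; returns a confidence score in [0, 1]."""
    return 0.72  # dummy value for illustration


def decide(features: dict) -> str:
    """Keep the AI within defined boundaries by routing uncertain cases to humans."""
    score = score_transaction(features)
    if score < MIN_CONFIDENCE:
        return "abstain"       # model too unsure: hand the case to a human entirely
    if score < MAX_AUTO_APPROVE:
        return "human_review"  # within boundaries, but a human confirms the outcome
    return "auto_approve"      # clearly inside the defined boundaries


print(decide({"amount": 120.0}))  # -> "human_review" with the dummy score above
```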

The term “Trustworthy AI” attempts to summarize these requirements for AI applications. To ensure the trustworthiness of an AI application, important requirements include, for example, the explainability of AI decisions and the robustness of the system. The “High-Level Expert Group on Artificial Intelligence” (AI HLEG) set up by the European Commission worked on this topic and drew up the “Ethics Guidelines for Trustworthy AI”, which define seven requirements for trustworthy AI:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability


Certification creates trust!

Certification creates trust. However, the requirements of the ethics guidelines are not binding; they are merely recommendations. To increase trust in AI applications, several institutions (DIN, BSI, Fraunhofer, and PwC, to name a few) are now working on certifications for AI applications. Certification is intended to establish quality standards for AI “Made in Europe” and to ensure the responsible use of AI applications.

A first – and rather comprehensive – test catalog has been published by the Fraunhofer Institute. This AI test catalog breaks the trustworthiness of AI applications down into six dimensions.

Table 2: The six dimensions of trustworthy AI applications along with the associated requirements

Fairness: The AI application must not lead to unjustified discrimination, e.g., due to unbalanced training data (keywords: bias, underrepresentation).
Autonomy and control: The autonomy of both the human and the AI application must be guaranteed, e.g., via a “human-in-the-loop”.
Transparency: The AI application must provide traceable, reproducible, and explainable decisions.
Reliability: The AI application must be reliable, i.e., robust in the sense of delivering consistent output under changing input data.
Security: The AI application must be secure in the sense of IT security and protected against attacks and tampering.
Privacy: The AI application must protect sensitive data (such as personal data or trade secrets).
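To make the fairness dimension more concrete, the hedged sketch below computes a simple demographic-parity difference between the positive-decision rates of two groups. The predictions, group labels, and the 0.10 tolerance are hypothetical; they only illustrate the kind of check such an assessment might document, not a requirement of the test catalog.

```python
# Hypothetical sketch: demographic parity as one possible fairness check.
# The predictions, groups, and the 0.10 tolerance are illustrative assumptions.

def positive_rate(predictions: list[int]) -> float:
    """Share of positive (1) decisions in a list of binary model decisions."""
    return sum(predictions) / len(predictions)

# Binary model decisions for two hypothetical demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

parity_gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {parity_gap:.2f}")

# Assumed tolerance; in practice the acceptable gap depends on the use case.
if parity_gap > 0.10:
    print("Potential unjustified discrimination - document and investigate.")
```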

The test catalog follows a risk-based approach in order to cover possible risks from different perspectives. This means that a protection requirement analysis is carried out for each dimension, analyzing the impact of the AI application on people or the environment. If the protection needs of a dimension are assessed as low risk, the dimension does not need to be examined further; instead, a justification must be given as to why the risk of the AI application was considered low with respect to that dimension. If, however, the protection needs analysis concludes that the AI application poses a medium or high risk, a detailed risk analysis must be performed along the catalog's risk areas. The documented answers for each dimension form the basis for certification.
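As an illustration of how such a per-dimension protection-needs assessment might be documented, the sketch below uses a small hypothetical data structure. The dimension names follow the table above, while the field names and example entries are assumptions and do not reproduce the Fraunhofer catalog's actual documentation format.

```python
# Hypothetical sketch of documenting a protection-needs analysis per dimension.
# The structure and example entries are illustrative, not the catalog's format.

from dataclasses import dataclass


@dataclass
class DimensionAssessment:
    dimension: str                   # e.g., "Fairness"
    protection_need: str             # "low", "medium", or "high"
    justification: str               # why this risk level was chosen
    detailed_analysis: bool = False  # required when the need is medium or high


def needs_detailed_analysis(a: DimensionAssessment) -> bool:
    """Medium or high protection needs trigger a detailed risk analysis."""
    return a.protection_need in ("medium", "high")


assessments = [
    DimensionAssessment("Fairness", "medium",
                        "Training data may underrepresent some user groups."),
    DimensionAssessment("Privacy", "low",
                        "The application processes no personal data."),
]

for a in assessments:
    a.detailed_analysis = needs_detailed_analysis(a)
    print(f"{a.dimension}: need={a.protection_need}, "
          f"detailed analysis required={a.detailed_analysis}")
```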

Collaboration between research and production

A current research project deals with the design of the documentation for the different requirements as well as with AI certification itself. Leftshift One is collaborating with Know-Center and SGS to apply the AI test catalog to a practical AI application. In particular, the research project is intended to show whether the test catalog is sufficient for certifying AI applications or whether it needs to be revised.


Claudia Perus (Product Owner, Leftshift One), Philipp Nöhrer (Law & Compliance, Leftshift One)

Ethics & Law

Does your AI application sufficiently take societal values into account and meet all legal requirements?
The Business Analytics and Data Science Center (BANDAS Center) of the University of Graz researches how data-based technologies can be used in business and what societal impact they have. The BANDAS Center contributes its expertise in software validation, auditing, and documentation of AI, with particular consideration of the ethical and legal aspects of AI. The Institute of Interactive Systems and Data Science at TU Graz covers the area of “Ethics by Design”.