{"id":1243,"date":"2022-02-28T11:05:59","date_gmt":"2022-02-28T10:05:59","guid":{"rendered":"https:\/\/trustyour.ai\/whitepaper\/certifying-fairness-of-ai-applications-an-impossible-task\/"},"modified":"2022-03-02T10:49:55","modified_gmt":"2022-03-02T09:49:55","slug":"certifying-fairness-of-ai-applications-an-impossible-task","status":"publish","type":"page","link":"https:\/\/trustyour.ai\/en\/whitepaper\/certifying-fairness-of-ai-applications-an-impossible-task\/","title":{"rendered":"Certifying Fairness of AI-Applications An Impossible Task?"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"1243\" class=\"elementor elementor-1243 elementor-1155\" data-elementor-post-type=\"page\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-855747a elementor-section-content-middle elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"855747a\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-d38d4e4\" data-id=\"d38d4e4\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-3839591 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"3839591\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-1ba91c7\" data-id=\"1ba91c7\" data-element_type=\"column\" 
data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-f3d779f elementor-widget elementor-widget-text-editor\" data-id=\"f3d779f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>INTRODUCTION<\/h2><p>Since more and more decisions that used to be made by humans are nowadays made either with the help of Artificial Intelligence (AI) or by AI alone, it is essential that the AI algorithms are \u201cfair.\u201d In this whitepaper, we discuss issues surrounding the fairness of AI applications, with a special focus on how it can be assessed independently and subsequently certified. We explain why an AI application cannot be classified \u2013 and subsequently certified \u2013 as \u201cfair\u201d or \u201cunfair\u201d in a general sense and propose an approach that makes it possible to classify it as \u201cfair\u201d or \u201cunfair\u201d under the (application)-specific\u00a0<span style=\"color: var( --e-global-color-text ); font-family: var( --e-global-typography-text-font-family ), Sans-serif;\">definition of fairness.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-8224a7a\" data-id=\"8224a7a\" data-element_type=\"column\" data-e-type=\"column\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d0dac9a elementor-widget elementor-widget-text-editor\" data-id=\"d0dac9a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><a 
href=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Whitepaper-AI-Certification-2022-02.pdf\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" class=\"aligncenter wp-image-1131\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/whitepaper-cover.jpg\" alt=\"\" width=\"141\" height=\"200\" srcset=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/whitepaper-cover.jpg 604w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/whitepaper-cover-212x300.jpg 212w\" sizes=\"(max-width: 141px) 100vw, 141px\" \/><\/a><\/p><p style=\"text-align: center;\"><a href=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Whitepaper-AI-Certification-2022-02.pdf\" target=\"_blank\" rel=\"noopener\"><b>Download Whitepaper<br \/><\/b><\/a>(pdf, 5 MB)<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-c60d9d3 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"c60d9d3\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-d185b24\" data-id=\"d185b24\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-a276791 elementor-widget elementor-widget-image\" data-id=\"a276791\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"1536\" height=\"1132\" 
src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover-1536x1132.jpg\" class=\"attachment-1536x1536 size-1536x1536 wp-image-1205\" alt=\"\" srcset=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover-1536x1132.jpg 1536w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover-407x300.jpg 407w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover-1110x818.jpg 1110w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover-768x566.jpg 768w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-cover.jpg 1800w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8c255f3 elementor-widget elementor-widget-text-editor\" data-id=\"8c255f3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Mission Impossible?<\/h2><p>As Artificial Intelligence (AI) is becoming increasingly popular, concerns about its application are rising. One of the most controversial issues is that AI-based applications could result in an unfair outcome, e.g., by favoring men over women or people of a specific ethnicity over people of other ethnicities. Issues like that can be addressed by establishing ways of determining whether an AI-based application is \u201cfair\u201d<sup><a href=\"#literatur_1\">1<\/a><\/sup>,<sup><a href=\"#literatur_2\">2<\/a><\/sup>. However, this is a very difficult task since \u201cfairness\u201d is difficult to define.\u00a0<\/p><p>Most people have a good sense of fairness and &#8211; even more so &#8211; unfairness when evaluating existing outcomes. For example, if an algorithm used for job applications consistently rejects applications by women, most people would call it unfair. 
However, the question \u201cWhat is fair?\u201d is much harder to answer, and people are typically less willing to provide quick and precise responses to it. One potential response in our example could be \u201cThe algorithm should pick an equal number of men and women.\u201d Intuitively, this sounds fair. But what if there are significantly more male applicants than female ones? 10 out of 10 women and 10 out of 50 men would appear unfair to most people. It may be tempting to make it right by taking the same fraction of each, i.e., 2 out of 10 women and 10 out of 50 men. But what if there are big differences in the educational backgrounds among female and male applicants? What would a fair outcome be in that case? Another intuitive answer could be \u201cMen and women should be treated equally.\u201d But how can equality be verified? And wouldn\u2019t this be contrary to the intuitive fairness idea that a roughly equal number of men and women should be considered? Even this simple example shows how challenging it is to find objective ways of certifying something as \u201cfair.\u201d\u00a0<\/p><p>Another fundamental issue is that such concepts as fairness are deeply rooted in cultural beliefs, i.e., what is considered fair or unfair varies significantly between different countries, regions, ethnic groups, age groups, etc. In addition, the notion of fairness can &#8211; and most probably will \u2013 evolve and change over time. All this makes certifying something as \u2018fair\u2019 independently and objectively almost impossible.<\/p><p>Nevertheless, there clearly is a demand for ascertaining the fairness of applications since fairness is closely linked to non-discrimination, which is a fundamental right according to Article 21 of EU\u2019s<sup><a href=\"#literatur_3\">3<\/a><\/sup> Charter of Fundamental Rights. 
Non-discrimination also plays an important role in the recently proposed EU AI regulation, and the EU\u2019s Fundamental Rights Agency (FRA) has published several reports on algorithmic fairness and related issues<a href=\"#literatur_4\"><sup>4<\/sup><\/a>,<sup><a href=\"#literatur_5\">5<\/a><\/sup>,<sup><a href=\"#literatur_6\">6<\/a><\/sup>. For businesses, preventing legal problems and public backlash is an important incentive for ensuring that their AI applications are fair and non-discriminatory.\u00a0<\/p><p>In this paper, we provide an overview of ways to certify the fairness aspects of AI applications and demonstrate how the problem of objective fairness certification can be solved.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-13c23b3 elementor-widget elementor-widget-image\" data-id=\"13c23b3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1536\" height=\"864\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness-1536x864.png\" class=\"attachment-1536x1536 size-1536x1536 wp-image-1198\" alt=\"\" srcset=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness-1536x864.png 1536w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness-534x300.png 534w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness-1110x624.png 1110w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness-768x432.png 768w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/whitepaper-defining-fairness.png 1800w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section 
class=\"elementor-section elementor-top-section elementor-element elementor-element-84edbda elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"84edbda\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-b6ae0fd\" data-id=\"b6ae0fd\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-1010e04 elementor-widget elementor-widget-text-editor\" data-id=\"1010e04\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Defining Fairness<\/h2><p>In the introduction, we mentioned some fundamental issues with defining fairness (e.g., intuitive fairness definitions that contradict each other and the general idea of fairness being significantly shaped by culture).<\/p><p>Because different notions of fairness may be contradictory, two main concepts have been established: group fairness and individual fairness<sup><a href=\"#literatur_7\">7<\/a><\/sup>.<\/p><p>Group fairness means that groups as a whole are treated equally. In our example of the job application algorithm described in the introduction, this means that the group of women as a whole is treated equally to the group of men. 
In this case, \u201cequal treatment\u201d could mean that an equal fraction of both groups is accepted or that, on average, the quality of the predictions is the same for both groups (e.g., the fraction of excellent candidates erroneously rejected by the algorithm).<\/p><p>Individual fairness means that equal individuals are treated equally, regardless of protected attributes (e.g., gender, ethnicity). The meaning of \u201cequal individuals\u201d depends on the context and the application. In our example of the job application algorithm, individual fairness means that individuals with equal job-relevant attributes (e.g., education, experience, grades) are treated equally, independently of their gender, etc.<\/p><p>Individual fairness, which is closely related to how non-discrimination is legally defined, sounds very intuitive at first. However, many issues are associated with it. The available attributes (e.g., experience in our example) cannot always be assessed objectively due to existing inequalities. For example, degrees obtained from certain universities may be valued more than others even if objectively there seems to be no difference. In addition, even if the attributes are assessed objectively, differences in them could originate from existing inequalities: a person may have less job experience because of discriminatory employment practices, just as a person may have less education not for lack of talent or motivation but because they belong to a population group whose access to education is limited. The last example leads to another very important dilemma: in our application example, where do we draw the line when assessing fairness? 
Suppose two groups have very different levels of education due to systematic discrimination, and yet a certain education level is essential for assessing a candidate\u2019s suitability for a job. Is an algorithm that selects on that desired education level \u2013 and will consequently select more people from the group with the higher average education level (the \u201cprivileged\u201d group) \u2013 unfair? From a societal point of view, this would mostly be considered unfair. Yet it is very difficult to decide whether the party that uses the algorithm is responsible for ensuring fairness beyond the scope of its hiring algorithm (equal chances for that particular job given the education level, versus the wider scope of equal educational opportunities), and whether it is responsible for \u201cremedying\u201d the educational inequalities artificially by favoring individuals from the underprivileged group (in the case of gender, EU law would explicitly permit this (Art. 21, para. 2 of the Fundamental Rights Charter<sup><a href=\"#literatur_8\">8<\/a><\/sup>)).<\/p><p>The most important thing to realize is that individual fairness and group fairness are generally contradictory. 
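To make this contradiction concrete, here is a minimal toy sketch (our own illustration, not from the whitepaper; all names and numbers are invented): a selection rule that depends only on an applicant's score treats equal individuals equally, yet produces very different acceptance rates when the score distributions of two groups differ.

```python
# Toy illustration of the tension between individual and group fairness.
# All scores below are invented for demonstration purposes.

def select(score, threshold=70):
    """Individually fair rule: the decision depends only on the score,
    so two applicants with equal scores always receive equal decisions."""
    return score >= threshold

# Hypothetical applicant pools whose score distributions differ,
# e.g. due to unequal access to education.
group_a = [85, 78, 72, 66, 60]   # "privileged" group
group_b = [71, 65, 58, 55, 50]   # "underprivileged" group

# Acceptance rate per group: 3 of 5 vs. 1 of 5.
rate_a = sum(select(s) for s in group_a) / len(group_a)
rate_b = sum(select(s) for s in group_b) / len(group_b)

# Statistical parity difference (unprivileged minus privileged):
# far from the "fair" value of zero, even though no individual
# with a given score was treated differently from any other.
spd = rate_b - rate_a
```

Under the individual-fairness lens the rule is beyond reproach; under the group-fairness lens (spd far from zero) it is clearly problematic. Which view should prevail is exactly the worldview question discussed above.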
Despite the ongoing philosophical discussion on whether those two concepts are in fact separate or mainly a matter of worldview<sup><a href=\"#literatur_9\">9<\/a><\/sup>, in practical settings they are clearly distinct.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-39ac9d1 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"39ac9d1\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-979d6e1\" data-id=\"979d6e1\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-41611fd elementor-widget elementor-widget-text-editor\" data-id=\"41611fd\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Measuring Fairness<\/h2><p>In response to the rising awareness of AI-related fairness issues, the research community has developed a wide range of quantitative fairness and bias metrics<sup><a href=\"#literatur_10\">10<\/a><\/sup>, many of which are already integrated into open-source toolkits<sup><a href=\"#literatur_11\">11<\/a><\/sup>,<sup><a href=\"#literatur_12\">12<\/a><\/sup>,<sup><a href=\"#literatur_13\">13<\/a><\/sup>,<sup><a href=\"#literatur_14\">14<\/a><\/sup>, which assess group fairness, individual fairness, or a balanced combination of the two. 
However, as evidenced by a large number of proposed and applied fairness metrics, there exists no single \u201cideal\u201d fairness measure due to the challenge of defining fairness described above.<\/p><h5>Equality-in-Outcome Approach<\/h5><p>Statistical parity difference (spd) is a group-fairness measure defined as:<\/p><p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1163 size-full\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Equality-in-Outcome-Approach-measure.gif\" alt=\"\" width=\"1091\" height=\"81\" \/><\/p><p>which can also be written as probabilities:<\/p><p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1165 size-full\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Equality-in-Outcome-Approach-propabilities.gif\" alt=\"\" width=\"1091\" height=\"42\" \/><\/p><p>How exactly the distinction between privileged and underprivileged groups is made depends on the context of the application. spd compares the fractions of each group classified as 1 (a positive classification, meaning an invitation to a job interview in our hiring algorithm example). This comparison is independent of the group size. In this metric, perfect equality is reached at a value of zero, which means that the same fraction of people is chosen from both groups. For example, if 20% of the overall population are invited for a job interview, in order to be fair 20% of each group should be invited regardless of the group size. This is often described as a \u201cwe-are-all-equal\u201d approach since it assumes that there is no difference between the groups even if there is a difference in the data. One main drawback of spd is that it does not address true or false classification of individuals.<\/p><h5>Equality-in-Quality Approach<\/h5><p>The average odds difference (aod) is another group fairness measure. 
In contrast to spd, it does not consider the classifications as such (e.g., acceptance rates in our job application algorithm example) but rather their quality: it measures whether the false positive rate (the fraction that is incorrectly classified as 1) and the true positive rate (the fraction that is correctly classified as 1) differ between the groups.<\/p><p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1161 size-full\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Equality-in-Quality-Approach.gif\" alt=\"\" width=\"1091\" height=\"61\" \/><\/p><p>Perfect equality is indicated by a value of zero. This is often described as a \u201cwhat-you-see-is-what-you-get\u201d worldview since it does not assume that an equal fraction from all groups must be accepted, only that the quality of the predictions should be the same for each group. Under this approach, the algorithm can choose only a very small fraction of one of the groups as long as it corresponds to the underlying data. 
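The two group-fairness metrics above can be sketched in a few lines of Python. This is a minimal illustration using the standard textbook definitions; the function names and the sign convention (unprivileged minus privileged, as used by common toolkits) are our choices, not prescribed by the whitepaper.

```python
# Minimal sketches of the two group-fairness metrics discussed above.
# Convention: group == 1 marks "privileged" members, group == 0
# "unprivileged" ones; predictions and labels are 0 or 1.

def statistical_parity_difference(y_pred, group):
    """spd = P(y_pred = 1 | unprivileged) - P(y_pred = 1 | privileged).
    Zero means both groups receive positive decisions at the same rate."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return rate(0) - rate(1)

def average_odds_difference(y_true, y_pred, group):
    """aod = average of the false-positive-rate gap and the
    true-positive-rate gap between the groups (zero = equal quality)."""
    def rates(g):
        rows = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g]
        fpr = sum(p for t, p in rows if t == 0) / sum(1 for t, _ in rows if t == 0)
        tpr = sum(p for t, p in rows if t == 1) / sum(1 for t, _ in rows if t == 1)
        return fpr, tpr
    fpr_u, tpr_u = rates(0)
    fpr_p, tpr_p = rates(1)
    return ((fpr_u - fpr_p) + (tpr_u - tpr_p)) / 2
```

Both functions return zero for perfect equality; under this sign convention, negative values indicate a disadvantage for the unprivileged group. Note that aod needs ground-truth labels (`y_true`), while spd deliberately does not: this is exactly the difference between the equality-in-quality and equality-in-outcome worldviews.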
In other words, this approach assumes that the data itself is not biased and accurately reflects reality, ensuring that no bias is introduced by the AI algorithm.<\/p><p>Besides these two examples, there are many other fairness notions, such as conditional statistical parity, equal opportunity, fairness through unawareness, fairness through awareness, and counterfactual fairness (https:\/\/arxiv.org\/pdf\/1908.09635v1.pdf).<\/p><p>Below is a summary of issues related to the fairness of AI applications:<\/p><ol><li>There is no single concept of fairness.<\/li><li>Definitions of fairness often contradict each other.<\/li><li>Setting the boundary for fairness issues in an application is non-trivial and ambiguous.<\/li><li>All of the above points are tightly coupled with ethical considerations and worldviews that depend on many factors, such as region and culture, and can change over time.<\/li><\/ol><p>In the remaining part of this paper, we discuss how to overcome those obstacles when certifying AI applications in terms of fairness.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-fa790a9 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"fa790a9\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7a8484e\" data-id=\"7a8484e\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-68abddd elementor-widget elementor-widget-text-editor\" data-id=\"68abddd\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Certifying Fairness<\/h2>\n<p>When developing a certification process, it is customary to establish general requirements for a broad range of applications and subsequently test whether each application meets them. However, due to the above-mentioned intricacies surrounding fairness, this method cannot be applied to certifying the fairness of AI applications. In this case, a much more flexible approach is required. Developers of an AI application cannot simply test their application against the fairness criteria since they first have to define them for their application<sup><a href=\"#literatur_15\">15<\/a><\/sup>. This has to be accomplished in a transparent and comprehensible way. Moreover, the details of the application and, most importantly, the setting in which the application is intended to be used must be considered. Questions that need to be answered as a prerequisite for certifying fairness are:<\/p>\n<ul>\n<li>What are the potential fairness issues in the application context?<\/li>\n<li>What are the vulnerable (potentially unfairly treated) groups?<\/li>\n<li>Could there be fairness issues that indirectly (rather than directly) relate to the application?<\/li>\n<li>Could there be undesirable effects in the long term that are not obvious in the short term?<\/li>\n<li>In which region is the application intended to be used?<\/li>\n<li>What is the legal context?<\/li>\n<li>How fair is the status quo (not using the AI application in question)?<\/li>\n<li>What mechanisms does the application contain to prevent fairness issues?<\/li>\n<\/ul>\n<p>This list is only a starting point and in no way exhaustive. Given the wide range of possible issues and the subtlety of this topic, it is impossible to compile a complete list of questions. Rather, the questions have to be adapted to the given context. 
One of the most important things to realize is that potential problems are not always obvious at first glance. Therefore, even directions that seem to be entirely free of fairness issues should be considered.<\/p>\n<p>The matters discussed above are not the concern of the certifying party but rather of the application developer\/supplier. From a certifier\u2019s point of view, one of the most important concepts is that fairness certification only makes sense if in the end the result communicated to the users is not whether \u201cthis application is fair\u201d but rather that \u201cthis application is fair in the following sense, etc.,\u201d with a clear and easy-to-follow description of how fairness was defined in that specific application and context.<\/p>\n<p>The tasks of the certifying party are:<\/p>\n<ul>\n<li>Review the fairness approach chosen by the developer. Does it make sense? Have any potentially harmful issues been left out?<\/li>\n<li>Review the actual measures that were taken (e.g., compute fairness metrics, evaluate processes).<\/li>\n<li>Review the complete application (potentially including the code if it is available) considering the appropriate definition of fairness.<\/li>\n<li>Create a final fairness report, which describes, in an easily understandable way, the fairness definition(s) used and how these definitions were met in the application.<\/li>\n<\/ul>\n<p>The process is depicted in Figure 1:<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1a26307 elementor-widget elementor-widget-image\" data-id=\"1a26307\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"208\" 
src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/02\/Overview-of-AI-fairness-certification-from-the-perspectives-of-both-AI-developers-and-certifiers.gif\" class=\"attachment-large size-large wp-image-1172\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\"> Figure 1: Overview of AI fairness certification from the perspectives of both AI developers and certifiers.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-15022a9 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"15022a9\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-4a4800d\" data-id=\"4a4800d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-0cd01aa elementor-widget elementor-widget-text-editor\" data-id=\"0cd01aa\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Mission Possible!<\/h2><p>In this whitepaper, we discussed challenges of certifying fairness aspects of AI applications. 
We outlined ideas of how these challenges can be addressed and how the fairness certification of AI can be performed.<\/p><p>However, many issues still need to be considered in order to make a routine certification of AI fairness possible, including:<\/p><ol><li>Development of reliable (semi-)automatic tools, e.g., tools that automatically compute and visualize fairness and bias metrics.<\/li><li>Development of clear guidelines for certifying the fairness of AI applications, including international standards.<\/li><li>Definition of case studies that can serve as a reference for certification (e.g.,<sup><a href=\"#literatur_16\">16<\/a><\/sup>).<\/li><\/ol><p>Furthermore, from the legal perspective, there is the problem that the existing quantitative approaches (such as the fairness metrics) are not always in line with how courts deal with fairness and discrimination issues (\u201cA clear gap exists between statistical measures of fairness and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the Court [European Court of Justice].\u201d <sup><a href=\"#literatur_17\">17<\/a><\/sup>). This issue needs to be addressed in order to achieve legal certainty for AI producers.<\/p><p>Being able to independently test and certify AI applications is crucial not only to avoid legal issues, but also to generate trust in the applications, which in turn is necessary for widespread acceptance of AI.<\/p><p>Since many organizations worldwide are currently working on these topics, we are confident that soon it will be possible to independently certify the fairness of AI applications in a transparent and objective way. 
Nevertheless, the continuous development of new AI techniques will require constant adaptation of certification procedures.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b04233f elementor-widget elementor-widget-image\" data-id=\"b04233f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1536\" height=\"835\" src=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible-1536x835.jpg\" class=\"attachment-1536x1536 size-1536x1536 wp-image-1200\" alt=\"\" srcset=\"https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible-1536x835.jpg 1536w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible-552x300.jpg 552w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible-1110x603.jpg 1110w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible-768x417.jpg 768w, https:\/\/trustyour.ai\/wp-content\/uploads\/2022\/03\/Whitepaper-AI-mission-possible.jpg 1800w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-a71225c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"a71225c\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-bccdcea\" data-id=\"bccdcea\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div 
class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-5566769 elementor-widget elementor-widget-text-editor\" data-id=\"5566769\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<h2>Literature<\/h2><ol><li><a id=\"literatur_1\"><\/a><a href=\"https:\/\/www.iais.fraunhofer.de\/content\/dam\/iais\/fb\/Kuenstliche_intelligenz\/ki-pruefkatalog\/202107_KI-Pruefkatalog.pdf\">https:\/\/www.iais.fraunhofer.de\/content\/dam\/iais\/fb\/Kuenstliche_intelligenz\/ki-pruefkatalog\/202107_KI-Pruefkatalog.pdf<\/a> (in German)<\/li><li><a id=\"literatur_2\"><\/a>Winter, Philip Matthias, et al. \u201cTrusted Artificial Intelligence: Towards Certification of Machine Learning Applications.\u201d arXiv preprint arXiv:2103.16910 (2021).<\/li><li><a id=\"literatur_3\"><\/a><a href=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:52021PC0206\">https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:52021PC0206<\/a><\/li><li><a id=\"literatur_4\"><\/a><a href=\"https:\/\/fra.europa.eu\/en\/publication\/2019\/data-quality-and-artificial-intelligence-mitigating-bias-and-error-protect\">https:\/\/fra.europa.eu\/en\/publication\/2019\/data-quality-and-artificial-intelligence-mitigating-bias-and-error-protect<\/a><\/li><li><a id=\"literatur_5\"><\/a><a href=\"https:\/\/fra.europa.eu\/en\/publication\/2020\/artificial-intelligence-and-fundamental-rights\">https:\/\/fra.europa.eu\/en\/publication\/2020\/artificial-intelligence-and-fundamental-rights<\/a><\/li><li><a id=\"literatur_6\"><\/a>#BigData: Discrimination in data-supported decision making. <a href=\"https:\/\/fra.europa.eu\/sites\/default\/files\/fra_uploads\/fra-2018-focus-big-data_en.pdf\">https:\/\/fra.europa.eu\/sites\/default\/files\/fra_uploads\/fra-2018-focus-big-data_en.pdf<\/a><\/li><li><a id=\"literatur_7\"><\/a>Verma, 
Sahil, and Julia Rubin. \u201cFairness Definitions Explained.\u201d 2018 IEEE\/ACM International Workshop on Software Fairness (FairWare). IEEE, 2018.<\/li><li><a id=\"literatur_8\"><\/a><a href=\"https:\/\/www.europarl.europa.eu\/charter\/pdf\/text_en.pdf\">https:\/\/www.europarl.europa.eu\/charter\/pdf\/text_en.pdf<\/a><\/li><li><a id=\"literatur_9\"><\/a>Binns, Reuben. \u201cOn the apparent conflict between individual and group fairness.\u201d Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.<\/li><li><a id=\"literatur_10\"><\/a>Verma, Sahil, and Julia Rubin. \u201cFairness Definitions Explained.\u201d 2018 IEEE\/ACM International Workshop on Software Fairness (FairWare). IEEE, 2018, pp. 1-7, doi: 10.23919\/FAIRWARE.2018.8452913.<\/li><li><a id=\"literatur_11\"><\/a>Bellamy, Rachel K. E., et al. \u201cAI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias.\u201d arXiv preprint arXiv:1810.01943 (2018).<\/li><li><a id=\"literatur_12\"><\/a><a href=\"https:\/\/aif360.mybluemix.net\/\">https:\/\/aif360.mybluemix.net\/<\/a><\/li><li><a id=\"literatur_13\"><\/a><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/uploads\/prod\/2020\/05\/Fairlearn_WhitePaper-2020-09-22.pdf\">https:\/\/www.microsoft.com\/en-us\/research\/uploads\/prod\/2020\/05\/Fairlearn_WhitePaper-2020-09-22.pdf<\/a><\/li><li><a id=\"literatur_14\"><\/a><a href=\"https:\/\/fairlearn.org\/\">https:\/\/fairlearn.org\/<\/a><\/li><li><a id=\"literatur_15\"><\/a><a href=\"https:\/\/www.iais.fraunhofer.de\/content\/dam\/iais\/fb\/Kuenstliche_intelligenz\/ki-pruefkatalog\/202107_KI-Pruefkatalog.pdf\">https:\/\/www.iais.fraunhofer.de\/content\/dam\/iais\/fb\/Kuenstliche_intelligenz\/ki-pruefkatalog\/202107_KI-Pruefkatalog.pdf<\/a> (in German)<\/li><li><a id=\"literatur_16\"><\/a><a 
href=\"https:\/\/www.technologyreview.com\/2021\/02\/11\/1017955\/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain\/\">https:\/\/www.technologyreview.com\/2021\/02\/11\/1017955\/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain\/<\/a><\/li><li><a id=\"literatur_17\"><\/a>Wachter, Sandra, Brent Mittelstadt, and Chris Russell. \u201cWhy fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI.\u201d Computer Law &amp; Security Review 41 (2021): 105567.<\/li><\/ol>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false}}