Our tools are based on established open-source toolkits. We apply state-of-the-art software developed by leading research institutions and optimize these tools for a smooth user experience and simple operation.
We analyze both the training data and the predictions of AI systems for possible discrimination, unfair imbalances, and bias. You receive a comprehensive report with an easy-to-understand explanation of the results.
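One common fairness check of this kind is the demographic parity gap: the difference in positive-prediction rates between groups. The following is a minimal sketch under assumed simplifications (binary predictions, a single protected attribute); a real audit covers more metrics and data formats.

```python
# Minimal sketch of one fairness metric: the demographic parity gap.
# Assumes binary predictions (0/1) and one protected attribute per record;
# all names here are illustrative, not part of any specific product.

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group "a" gets positive predictions 3/4 of the time, group "b" 1/4
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near 0 suggests both groups receive positive predictions at similar rates; large gaps are flagged for closer inspection in the report.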
Machine learning models can be vulnerable to so-called “adversarial attacks,” in which very small (often imperceptible) perturbations to the input cause the model to make an incorrect prediction. We use the “Adversarial Robustness Toolbox” developed by IBM to simulate such attacks and test the robustness of the AI system.
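To make the idea concrete, here is a hand-written sketch of the Fast Gradient Sign Method (FGSM), one of the evasion attacks the Adversarial Robustness Toolbox provides. The logistic model, weights, and epsilon below are illustrative assumptions chosen so the example stays self-contained; in practice the toolbox wraps your trained model instead.

```python
import math

# Sketch of an FGSM-style evasion attack on a tiny hand-written logistic
# model. All values (w, b, eps, inputs) are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(label = 1) under a logistic model."""
    return sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the direction of the loss gradient's sign."""
    p = predict(w, b, x)
    # For logistic regression, d(cross-entropy)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1                  # clean input with true label 1
x_adv = fgsm(w, b, x, y, eps=0.3)

print(predict(w, b, x))      # ~0.69: clean input classified correctly
print(predict(w, b, x_adv))  # ~0.48: small perturbation flips the prediction
```

A robustness test then measures how much accuracy drops under such perturbations for a given perturbation budget eps.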
To date, no established tools exist that examine whether training data contains personal data covered by the General Data Protection Regulation (GDPR). Our in-house tools automatically scan data and tag entries that may be personal.
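The in-house tooling itself is not public, so the following only sketches the general idea under simple assumptions: free-text records are matched against patterns for obviously personal items (here, email addresses and phone-like numbers) and tagged for review.

```python
import re

# Illustrative sketch only: tags records that may contain personal data
# using simple regular expressions. Real PII detection needs far broader
# patterns (names, addresses, IDs) and context-aware checks.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s/]{7,}\d"),
}

def tag_personal_data(record):
    """Return the list of PII categories matched in a free-text record."""
    return [name for name, pat in PATTERNS.items() if pat.search(record)]

rows = [
    "order 1234 shipped yesterday",
    "contact: jane.doe@example.com",
    "callback +49 30 1234567",
]
for row in rows:
    print(row, "->", tag_personal_data(row))
# only the second and third rows are tagged
```

Tagged records can then be surfaced in the report so that a human reviewer decides whether the data falls under the GDPR.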
Cybercrime is a major challenge when assessing AI systems, and conventional static testing is not sufficient here. We are therefore exploring fundamentally new security engineering concepts to obtain continuous attestation of an AI system's resilience against cyberattacks.