Robustness
Machine learning models can be vulnerable to adversarial attacks, in which very small (often imperceptible) perturbations of the input cause the model to make an incorrect prediction. We use IBM's Adversarial Robustness Toolbox (ART) to simulate such attacks and test the robustness of the AI system.
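As an illustration of how such a test can be set up, the following is a minimal sketch using ART's FastGradientMethod evasion attack against a PyTorch classifier and comparing accuracy on clean versus perturbed inputs. The model architecture, the random data, and the perturbation budget (eps) are illustrative placeholders, not the system under assessment; in practice the wrapped model and the evaluation set come from the AI system being tested.

```python
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Stand-in classifier; in a real assessment this is the trained model under test.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART can query its predictions and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=criterion,
    optimizer=optimizer,
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder test data; replace with the system's actual evaluation set.
x_test = np.random.rand(100, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=100)

# Craft adversarial examples with the Fast Gradient Method,
# perturbing each input feature by at most eps.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs; a large drop
# indicates low robustness against this attack.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"Clean accuracy:       {clean_acc:.2%}")
print(f"Adversarial accuracy: {adv_acc:.2%}")
```

The gap between clean and adversarial accuracy serves as a simple robustness indicator; the same pattern extends to other attacks available in ART by swapping out the attack class.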