Detect attacked images, create adversarial examples, design defenses.
What Our Project Does
"Our project demonstrates the lifecycle of adversarial attacks on AI systems. It includes three key functionalities".
Generate Adversarial Attacks: We craft adversarial images that exploit vulnerabilities in AI models, causing them to make incorrect predictions (see the attack sketch below).
Detect Adversarial Images: We identify whether an image has been manipulated with adversarial noise, helping to ensure the integrity of AI predictions (see the detection sketch below).
Defend AI Models: We apply preprocessing techniques that mitigate the impact of adversarial attacks, improving the robustness of AI systems (see the defense sketch below).
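As a concrete illustration of the attack step, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), assuming a PyTorch classifier with inputs normalized to [0, 1]. The model, epsilon value, and function name are illustrative assumptions, not this project's actual API.

```python
# Hedged sketch: FGSM adversarial example generation (assumes PyTorch;
# `epsilon` and the [0, 1] input range are illustrative choices).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb a batched `image` along the sign of the loss gradient (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```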
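For detection, one well-known approach is feature squeezing (Xu et al.): compare the model's prediction on the raw input with its prediction on a "squeezed" copy, and flag large disagreement as likely adversarial. The sketch below assumes that technique; the bit depth and `threshold` are hypothetical tuning knobs, not values from this project.

```python
# Hedged sketch: feature-squeezing detector (assumes PyTorch; the bit depth
# and `threshold` are hypothetical tuning knobs, not this project's values).
import torch
import torch.nn.functional as F

def squeeze_bit_depth(image, bits=4):
    """Quantize pixel values to strip small, high-frequency perturbations."""
    levels = 2 ** bits - 1
    return torch.round(image * levels) / levels

def looks_adversarial(model, image, threshold=0.5):
    """Flag inputs whose predictions diverge after squeezing."""
    with torch.no_grad():
        p_raw = F.softmax(model(image), dim=1)
        p_squeezed = F.softmax(model(squeeze_bit_depth(image)), dim=1)
    # L1 distance between the two probability vectors is the divergence score.
    return (p_raw - p_squeezed).abs().sum(dim=1) > threshold
```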
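Finally, a defense in the preprocessing style described above can be as simple as blurring and re-quantizing inputs before inference. The kernel size and bit depth below are illustrative assumptions, and adaptive attacks can defeat fixed transforms, so treat this as a sketch rather than a robust defense.

```python
# Hedged sketch: preprocessing defense (assumes torchvision >= 0.8 for
# gaussian_blur; kernel size and bit depth are illustrative assumptions).
import torch
import torchvision.transforms.functional as TF

def defend(image, kernel_size=3, bits=5):
    """Blur and re-quantize an input batch to blunt adversarial noise."""
    image = TF.gaussian_blur(image, kernel_size=kernel_size)
    levels = 2 ** bits - 1
    return torch.round(image.clamp(0.0, 1.0) * levels) / levels
```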