Analysis Study Research Paper | Neural Networks | India | Volume 13 Issue 9, September 2024
Theoretical Analysis and Review of Adversarial Robustness in Deep Learning
Karthick Kumaran Ayyalluseshagiri Viswanathan
Abstract: Deep learning and neural networks are widely used in many recognition tasks, including safety-critical applications such as self-driving cars, medical image analysis, and robotics, and have shown significant potential across computer vision. The performance and accuracy of deep learning models are critically important in safety-critical systems. Recent research has shown, however, that deep neural networks are vulnerable to adversarial attacks. This paper reviews adversarial examples, analyzes how adversarial noise can degrade the performance and accuracy of deep learning models, and discusses potential mitigation strategies and the uncertainties inherent in deep learning models.
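The adversarial noise the abstract refers to is typically a small, deliberately crafted perturbation of the input. As a minimal, hedged illustration (not taken from the paper itself), the sketch below uses the Fast Gradient Sign Method (FGSM), one standard way such perturbations are generated; the model, data, and parameter values are purely illustrative.

```python
# Minimal FGSM sketch (illustrative only): perturb an input in the direction
# of the sign of the loss gradient so that a classifier's loss increases.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x, shifted by epsilon along the
    sign of the gradient of the loss with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

# Toy usage with a hypothetical linear classifier; a real study would use a
# trained image model and measure the resulting drop in test accuracy.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)           # batch of "images" in [0, 1]
y = torch.randint(0, 10, (8,))         # ground-truth labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())         # perturbation bounded by epsilon
```

Even though the perturbation is bounded by epsilon and often imperceptible to humans, it can be enough to flip the model's prediction, which is why such attacks matter for safety-critical systems.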
Keywords: Deep learning, Adversarial Example, Adversarial Attack, Uncertainty, Bayesian Inference
Edition: Volume 13 Issue 9, September 2024
Pages: 1441 - 1443