Abstract: Interpretability of DNN-based object detection has been a rising concern in the research community. The first step towards this goal is a saliency map that visualizes the importance ...
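As a rough illustration of what such a saliency map computes, the sketch below produces a vanilla gradient saliency map for an image classifier in PyTorch; it is a generic example under assumed placeholders (`model`, `image`), not the particular method this abstract proposes.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()              # any differentiable image model
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder input image

scores = model(image)                     # forward pass: class scores
top_class = scores.argmax(dim=1).item()   # most confident class
scores[0, top_class].backward()           # gradient of that score w.r.t. the input pixels

# Pixel importance: maximum absolute gradient across the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```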
Abstract: Adversarial examples (AEs) are a typical model-evasion attack and a security threat to deep neural networks (DNNs). One countermeasure is adversarial training (AT), which trains DNNs ...
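As a rough illustration of AT, the sketch below performs one FGSM-based adversarial-training step in PyTorch, a common instantiation of the idea; the `model`, `optimizer`, batch `(x, y)`, and `epsilon` are assumed placeholders, and the AT variant studied in this abstract may differ.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    # 1. Craft adversarial examples from the clean batch with a single FGSM step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # 2. Update the model on the adversarial examples instead of the clean ones.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```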