References
1. From the Bible, Old Testament, Ecclesiastes 1:9
2. Explaining and harnessing adversarial examples, ICLR 2015
3. Intriguing properties of neural networks, ICLR 2014
4. "In vivo" spam filtering: a challenge problem for KDD, ACM SIGKDD Explorations Newsletter, 2003
5. Adversarial classification, KDD 2004
6. Evasion attacks against machine learning at test time, ECML-PKDD 2013
7. From the song "拆东墙" by Xu Song (许嵩)
8. Attacking machine learning with adversarial examples, OpenAI Blog, 2017 https://openai.com/blog/adversarial-example-research/
9. With Friends Like These, Who Needs Adversaries?, NeurIPS 2018
10. Experimental Security Research of Tesla Autopilot, Tencent Keen Security Lab, 2018
11. Adversarial T-shirt! Evading Person Detectors in a Physical World, ECCV 2020
12. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition, ACM SIGSAC Conference on Computer and Communications Security, 2016
13. Why can humans understand sentences like "盲生,你发现了华点"? — answer by 玉瑶璃 on Zhihu https://www.zhihu.com/question/292523981/answer/482834297
14. Unrestricted Adversarial Examples, arXiv, 2018
15. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations, ICML 2020
16. A Complete List of All (arXiv) Adversarial Example Papers https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html
17. Explaining and harnessing adversarial examples, ICLR 2015
18. Adversarial examples in the physical world, ICLR 2017 Workshop
19. Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018
20. Towards evaluating the robustness of neural networks, S&P 2017
21. An Alternative Surrogate Loss for PGD-based Adversarial Testing, arXiv, 2019
22. Output Diversified Initialization for Adversarial Attacks, ICLR 2020 Workshop
23. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks, ICML 2019
24. Ensemble adversarial training: Attacks and defenses, ICLR 2018
25. Universal adversarial perturbations, CVPR 2017
26. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets, ICLR 2020
27. A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks, AAAI 2020
28. Black-Box Adversarial Attack with Transferable Model-based Embedding, ICLR 2020
29. Simple Black-box Adversarial Attacks, ICML 2019
30. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks, ICLR 2020
31. Sign-OPT: A Query-Efficient Hard-label Adversarial Attack, ICLR 2020
32. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger, ICML 2020
33. CAT: Customized Adversarial Training for Improved Robustness, arXiv, 2020
34. Improving Adversarial Robustness Through Progressive Hardening, arXiv, 2020
35. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, arXiv, 2018
36. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations, ICML 2019
37. Trace-Norm Adversarial Examples, arXiv, 2020
38. Functional adversarial attacks, NeurIPS 2019
39. Spatially Transformed Adversarial Examples, ICLR 2018
40. ADef: an Iterative Algorithm to Construct Adversarial Deformations, ICLR 2019
41. Generating realistic unrestricted adversarial inputs using dual-objective GAN training, arXiv, 2019
42. Unrestricted Adversarial Examples via Semantic Manipulation, ICLR 2020
43. Constructing Unrestricted Adversarial Examples with Generative Models, NeurIPS 2018
44. Generating adversarial examples with adversarial networks, arXiv, 2018
45. Achieving robustness in the wild via adversarial mixing with disentangled representations, CVPR 2020
46. Learning perturbation sets for robust machine learning, arXiv, 2020
47. Perceptual Adversarial Robustness: Defense Against Unseen Threat Models, arXiv, 2020
48. https://twitter.com/goodfellow_ian/status/1220769722637021185
49. Feature Denoising for Improving Adversarial Robustness, CVPR 2019
50. Adversarial Training for Free!, NeurIPS 2019
51. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle, NeurIPS 2019
52. Fast is better than free: Revisiting adversarial training, ICLR 2020
53. Efficient Adversarial Training with Transferable Adversarial Examples, CVPR 2020
54. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML 2018
55. Resisting adversarial attacks by k-winners-take-all, ICLR 2020
56. On Adaptive Attacks to Adversarial Example Defenses, arXiv, 2020
57. Adversarial examples are not easily detected: Bypassing ten detection methods, ACM Workshop on Artificial Intelligence and Security, 2017
58. A New Defense Against Adversarial Images: Turning a Weakness into a Strength, NeurIPS 2019
59. Robustness May Be at Odds with Accuracy, ICLR 2019
60. Adversarial Examples Are Not Bugs, They Are Features, NeurIPS 2019
61. A Fourier Perspective on Model Robustness in Computer Vision, NeurIPS 2019
62. Who is in Control?, The Brain with David Eagleman, 2015 https://www.bilibili.com/bangumi/play/ss27189?t=2419
63. Adversarial Examples that Fool both Computer Vision and Time-Limited Humans, NeurIPS 2018
64. Robustness May Be at Odds with Accuracy, ICLR 2019
65. Interpreting Adversarially Trained Convolutional Neural Networks, ICML 2019
66. Does Interpretability of Neural Networks Imply Adversarial Robustness?, arXiv, 2019
67. Computer Vision with a Single (Robust) Classifier, NeurIPS 2019
68. Learning Perceptually-Aligned Representations via Adversarial Robustness, arXiv, 2019
69. Adversarial Robustness as a Prior for Learned Representations, arXiv, 2019
70. DROCC: Deep Robust One-Class Classification, ICML 2020
71. ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection, arXiv, 2020
72. Do Adversarially Robust ImageNet Models Transfer Better?, arXiv, 2020
73. Adversarially-Trained Deep Nets Transfer Better, arXiv, 2020
74. Adversarial Training Reduces Information and Improves Transferability, arXiv, 2020
75. Measuring Robustness to Natural Distribution Shifts in Image Classification, arXiv, 2020