Generative Adversarial AI Techniques Applied to Intrusion Detection (M03a)
Recent studies have shown that AI-based systems are vulnerable to adversarial AI techniques, commonly known as i) poisoning, ii) backdoor, iii) oracle and iv) evasion attacks. However, compared to computer vision or natural language processing, research on adversarial AI in cybersecurity remains critically incomplete. This is paradoxical, since cybersecurity components form the first line of defence of cognitive computing systems. Intrusion Detection Systems (IDS) in particular increasingly rely on AI, following two approaches: i) misuse detection and ii) anomaly detection. Both approaches can benefit from Machine Learning (ML), using supervised and unsupervised ML techniques respectively. In this presentation we describe relevant AI threat scenarios applicable to ML-based IDS at different steps of the ML security lifecycle, and we propose preventive and curative measures to counter these threats. In particular, we describe the potential of Generative Adversarial Networks (GANs) to improve the robustness of ML-based IDS against evasion attacks. Finally, we propose generative adversarial AI techniques enabling robust, explainable intrusion detection services for critical businesses.
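To make the evasion threat mentioned above concrete (this is an illustrative sketch, not the method presented in the talk), the following minimal example trains a logistic-regression detector on synthetic two-feature "flow" data and then crafts an FGSM-style evasion perturbation that pushes a correctly flagged malicious sample below the detection threshold. All feature values, the perturbation budget `eps`, and the detector itself are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-feature "flow" data (hypothetical): benign traffic clustered
# near 0, malicious traffic clustered near 2.5.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.5, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])  # label 1 = malicious

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by plain gradient descent on cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

# A malicious sample the detector catches (score > 0.5).
x = np.array([2.5, 2.5])
score_before = sigmoid(x @ w + b)

# FGSM-style evasion: step against the sign of the gradient of the
# "malicious" logit w.r.t. the input (that gradient is simply w),
# under an L-infinity budget eps.
eps = 1.5
x_adv = x - eps * np.sign(w)
score_after = sigmoid(x_adv @ w + b)

print(f"score before: {score_before:.3f}, after evasion: {score_after:.3f}")
```

A GAN-based hardening scheme of the kind the abstract alludes to would, roughly, train a generator to produce such evading perturbations and retrain the detector against them, in place of the fixed-gradient step used here.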