The increasing sophistication of cyber threats and the growing integration of artificial intelligence (AI) into digital systems require approaches that both leverage AI for cybersecurity and ensure the security of AI systems themselves. This research area therefore addresses AI for cybersecurity – the use of AI techniques to enhance threat detection, incident response, and security operations – and cybersecurity for AI – the protection of AI models, data, and infrastructures from adversarial attacks, manipulation, and misuse. The programme aims to equip young researchers with the knowledge and skills needed to design trustworthy, resilient, and human-centered AI-enabled cybersecurity solutions.
Research will focus on the development of AI-driven cybersecurity technologies and the protection of AI-based systems, including:
- Trustworthy and explainable AI for security: Developing explainable AI (XAI) methods that improve transparency, interpretability, and trust in automated security decisions.
- Human–AI collaboration in cybersecurity operations: Designing adaptive AI agents that support security analysts and improve decision-making in security operations centers (SOCs).
- AI-driven threat detection and incident response: Applying machine learning to automate the identification, analysis, and mitigation of cyber threats while maintaining human oversight.
- Security and robustness of AI systems: Investigating methods to defend AI models against adversarial attacks, data poisoning, model theft, and other threats targeting AI infrastructures.
- Cybersecurity for critical infrastructures: Developing AI-enabled security architectures for industrial control systems, digital twins, and IoT environments.
- Regulatory and ethical compliance: Ensuring alignment of AI-driven cybersecurity solutions with relevant regulatory frameworks (e.g., GDPR, NIS2, Cyber Resilience Act, EU AI Act).
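To make the adversarial-attack topic above concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard technique for crafting the adversarial inputs that robustness research defends against. The model here is a toy logistic regression with hypothetical fixed weights, not a method prescribed by the programme.

```python
import numpy as np

# Toy logistic-regression "model" with fixed weights (hypothetical values
# chosen for illustration; a real study would attack a trained network).
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the logistic loss. For this model the input gradient
    of the loss is (p - y_true) * w.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])        # benign input, true label 1
x_adv = fgsm(x, y_true=1.0, eps=0.5)

print(predict(x))     # high confidence on the clean input (~0.83)
print(predict(x_adv)) # confidence collapses below 0.5 after the perturbation
```

A small, bounded perturbation flips the model's decision even though the input barely changes, which is exactly the failure mode that defences such as adversarial training and input sanitisation target.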
Candidates will adopt a multidisciplinary approach combining cybersecurity, artificial intelligence, human–computer interaction, and systems engineering. Key research methods may include AI-driven security policy generation, human-factor analysis in SOC automation, federated and privacy-preserving learning for collaborative threat intelligence, and simulation-based evaluation using cyber ranges, digital twins, and penetration testing environments.
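As one illustration of the federated, privacy-preserving learning mentioned above, the sketch below shows federated averaging (FedAvg): participating organisations share only model parameters, weighted by local dataset size, while raw security logs never leave their owners. All names and values are hypothetical placeholders, not part of the programme itself.

```python
import numpy as np

def fed_avg(updates, sample_counts):
    """Federated averaging: weighted mean of locally trained model
    parameters, weighted by each participant's dataset size."""
    total = sum(sample_counts)
    return sum(n / total * u for u, n in zip(updates, sample_counts))

# Local model weights from three hypothetical SOCs collaborating on
# threat intelligence; only these parameters are exchanged.
updates = [np.array([0.2, 1.0]),
           np.array([0.4, 0.8]),
           np.array([0.6, 1.2])]
counts = [100, 300, 600]   # local training-set sizes

global_weights = fed_avg(updates, counts)
print(global_weights)  # -> [0.5  1.06]
```

Participants with more data contribute proportionally more to the aggregated model, while the raw events behind each local update stay on-premises.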
Through this research, young researchers will contribute to next-generation cybersecurity frameworks that strengthen both the use of AI for cyber defence and the protection of AI systems, supporting more resilient, trustworthy, and secure digital infrastructures.