Dr. Tanja Pavleska

Tanja Pavleska is a researcher at the Laboratory for Open Systems and Networks, Jozef Stefan Institute. She obtained her PhD from the JSI Postgraduate School in the area of computational trust and reputation systems, with an emphasis on user behaviour. Her interests include cybersecurity for AI and AI for cybersecurity, trustworthy and explainable AI, critical infrastructure security, industrial automation and digital twins, threat intelligence and digital forensics, architecture design, and digital policies and regulatory frameworks. She has participated in many international projects, both as a coordinator and as a researcher, working on the latest technological advancements.

Research programme: Future internet technologies: concepts, architectures, services and socio-economic issues
Training topic: Human-centered design of autonomous and AI-driven solutions for cybersecurity

The increasing complexity of cybersecurity threats necessitates autonomous and AI-driven solutions capable of enhancing security operations, threat intelligence, and incident management. However, traditional security automation often lacks transparency, adaptability, and alignment with human decision-making processes. This programme aims to equip candidates with the knowledge and skills needed to develop human-centered, AI-driven cybersecurity solutions that integrate automation with explainability, trustworthiness, and human oversight. Their research will focus on designing AI-driven security systems that:

  • Enhance trust and explainability: Develop explainable AI (XAI) models that improve transparency and user confidence in autonomous security decisions (see the sketch after this list).
  • Enable human-AI collaboration: Implement adaptive AI agents that assist security analysts, providing decision support and reducing cognitive workload in security operations centers (SOCs).
  • Improve threat intelligence and incident response: Automate the analysis of cybersecurity threats while ensuring human validation of critical decisions.
  • Optimize cybersecurity for critical infrastructures and organizations: Develop AI-driven security architectures tailored to industrial control systems, digital twins, and IoT environments.
  • Ensure compliance with regulatory frameworks: Align AI security policies with relevant regulations (GDPR, NIS2, the Cyber Resilience Act, the EU AI Act) to meet legal and ethical requirements.
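
To make the explainability objective concrete, here is a minimal sketch of per-alert attribution for an autonomous intrusion-detection model. The feature names, synthetic data, and occlusion-style attribution (replacing one feature at a time with its training mean) are illustrative assumptions, not a prescribed method:

```python
# A minimal sketch of per-alert explainability for an intrusion-detection
# model. The flow features and synthetic labels are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FEATURES = ["bytes_out", "failed_logins", "dst_port_entropy", "session_len"]

# Synthetic training data: class 1 = malicious flow.
X = rng.normal(size=(1000, len(FEATURES)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_alert(x):
    """Occlusion-style attribution: how much does the malicious score
    drop when each feature is replaced by its training mean?"""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for i, name in enumerate(FEATURES):
        x_masked = x.copy()
        x_masked[i] = X[:, i].mean()           # neutralize one feature
        masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        scores[name] = base - masked           # positive = pushed toward "malicious"
    return base, scores

alert = np.array([0.2, 2.5, 1.8, -0.3])       # a flagged flow
score, attribution = explain_alert(alert)
print(f"malicious score: {score:.2f}")
for name, contrib in sorted(attribution.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>18}: {contrib:+.2f}")
```

An attribution of this kind lets an analyst see which signals pushed the model toward flagging a flow, which is the sort of transparency the XAI objective targets; richer methods (e.g., SHAP) would play the same role.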

Through their research, candidates will employ a multidisciplinary approach combining cybersecurity, AI, human-computer interaction, system design, and regulatory compliance frameworks. Key methods will include:

  • AI-driven security policy generation: Developing AI models that dynamically adapt security policies based on evolving threats and organizational needs.
  • Cognitive and affective human factors in SOC automation: Integrating behavioral analytics and affective computing to model security analysts’ responses and optimize AI-driven SOC assistance.
  • Federated and privacy-preserving learning: Applying decentralized AI techniques to protect sensitive cybersecurity data while improving threat detection capabilities (a minimal sketch follows this list).
  • Simulation and evaluation: Testing AI-driven security models in realistic environments (SOC simulations, digital twins, penetration testing) to measure effectiveness and usability.
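
As an illustration of the federated learning method above, the following sketch simulates federated averaging (FedAvg) over a few organizations that jointly train an intrusion classifier without exchanging raw telemetry. The logistic model, synthetic client data, and hyperparameters are assumptions chosen for brevity:

```python
# A minimal FedAvg sketch: several organizations train a shared intrusion
# classifier; only model weights, never raw security data, are exchanged.
import numpy as np

rng = np.random.default_rng(1)
DIM, CLIENTS, ROUNDS, LOCAL_STEPS, LR = 8, 4, 20, 10, 0.5

def make_client():
    """Synthetic per-organization telemetry with a shared underlying pattern."""
    X = rng.normal(size=(200, DIM))
    y = (X @ np.ones(DIM) + rng.normal(scale=0.5, size=200) > 0).astype(float)
    return X, y

clients = [make_client() for _ in range(CLIENTS)]

def local_update(w, X, y):
    """A few steps of logistic-regression gradient descent on private data."""
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= LR * X.T @ (p - y) / len(y)       # gradient of the mean log-loss
    return w

w_global = np.zeros(DIM)
for rnd in range(ROUNDS):
    # Each client trains locally; only weights leave the organization.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # FedAvg aggregation

# Evaluate the shared model on fresh held-out data (for the sketch only).
Xt, yt = make_client()
acc = (((Xt @ w_global) > 0).astype(float) == yt).mean()
print(f"after {ROUNDS} rounds: federated model accuracy = {acc:.2f}")
```

Only model weights cross organizational boundaries in each round, which is what makes the approach attractive for sensitive security data; a production system would typically add secure aggregation or differential privacy on top of this basic scheme.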

Through this work, candidates will contribute to next-generation cybersecurity frameworks that balance automation with human oversight and improve the efficiency, resilience, and trustworthiness of AI-driven security solutions. Their research will directly support critical infrastructures, SOCs, and regulatory bodies in adopting AI-driven cybersecurity strategies while ensuring ethical AI deployment.