[Thesis defence] 16/12/2025 – Grace Tessa Masse: «Cyber deception and resilience with distributed machine learning» (UPR LIA)
Ms Grace TESSA MASSE will publicly defend her thesis, entitled «Cyber deception and resilience with distributed machine learning», supervised by Mr Abderrahim BENSLIMANE and Mr Vianney KENGNE TCHENDJI under a joint supervision agreement with the University of Dschang (Cameroon), on Tuesday 16 December 2025.
Date and place
Oral defence scheduled on Tuesday 16 December 2025 at 2 p.m.
Location: Avignon University – Hannah Arendt Campus, 74 Rue Louis Pasteur, 84029 Avignon
Room: Lecture Theatre 2E07
Discipline
Computer Science
Laboratory
UPR 4128 LIA – Avignon Computing Laboratory
Composition of the jury
| Name | Institution | Role |
| --- | --- | --- |
| Mr Abderrahim BENSLIMANE | Avignon University | Thesis supervisor |
| Mr Rémi BADONNEL | TELECOM Nancy – University of Lorraine | Rapporteur |
| Mr Lyes KHOUKHI | CNAM Paris | Rapporteur |
| Mr Yezekael HAYEL | Avignon University | Examiner |
| Mr Vianney KENGNE TCHENDJI | University of Dschang | Thesis co-supervisor |
| Mr Ahmed HEMIDA ANWAR | US Army DEVCOM ARL | Examiner |
| Ms Tooska DARGAHI | Manchester Metropolitan University | Examiner |
Summary
Artificial intelligence (AI), and in particular machine learning (ML) and deep learning (DL), has advanced rapidly, driving a technological revolution across many fields. This evolution, however, also makes system security and personal data protection considerably harder. Training high-performance machine learning models typically requires aggregating and analysing large amounts of user data in a centralised location, a process that demands significant processing resources and high bandwidth and is further constrained by strict legislation on user data privacy. Federated learning (FL) has emerged as a promising alternative: it allows different parties, such as devices or companies, to train a model collaboratively without sending their raw data outside their own premises. This approach protects privacy, reduces the risk of data leaks, and facilitates regulatory compliance.

However, FL relies on a decentralised architecture that weakens trust between the participating parties. This opens the door to sophisticated attacks designed to disrupt the learning process and undermine the reliability of the model, with potentially severe consequences for performance, such as a sharp drop in accuracy or deliberate, malicious alterations to predictions. This thesis addresses the major security challenge of mitigating such attacks. Many reactive, passive, mitigation-focused defences have been proposed, but they often overlook the information asymmetry between the system's attackers and defenders, an observation that opens new avenues of research on securing FL.

This thesis presents an innovative, proactive offensive defence strategy based on the concept of cyber deception. The goal is to go beyond the conventional defensive paradigm by building systems that deceive attackers, actively thwart them, reduce their persistence, and push them to make mistakes or waste resources, ultimately rendering their attacks ineffective. The proposed technique comprises two fundamental steps. First, we developed a robust model for detecting malicious clients in the FL framework; this detection strategy analyses the model updates submitted by clients and incorporates advanced approaches for assessing client reliability. Second, we proposed two deception systems that exploit the detected bad updates to lure attackers into simulated environments. We designed a global decoy model based on generative adversarial networks (GANs) and clustering approaches. This decoy model faithfully replicates the behaviour of the real model, misleading attackers about the effectiveness of their attack while having virtually no impact on the main system. To evaluate the quality and effectiveness of these decoys, we defined three analytical criteria: indistinguishability, credibility, and viability.

Experiments conducted across different scenarios demonstrate that our system effectively identifies poisoning attacks and significantly reduces their impact, in particular by actively repelling adversaries. This thesis highlights the effectiveness of cyber deception in strengthening the security of FL and, more broadly, of distributed AI systems.
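The announcement does not specify how malicious clients are detected, but the first step, screening client updates before aggregation, can be illustrated with a minimal sketch. The function `flag_suspicious_updates`, its cosine-similarity test, and its threshold below are illustrative assumptions, not the detection method defended in the thesis.

```python
import numpy as np

def flag_suspicious_updates(updates, threshold=0.0):
    """Flag client updates whose direction deviates from the consensus.

    updates: list of 1-D numpy arrays (flattened model updates).
    Returns the indices of clients considered suspicious.
    """
    mean_update = np.mean(updates, axis=0)
    suspicious = []
    for i, u in enumerate(updates):
        denom = np.linalg.norm(u) * np.linalg.norm(mean_update)
        cos = float(u @ mean_update) / denom if denom > 0 else 0.0
        if cos < threshold:  # points away from the consensus direction
            suspicious.append(i)
    return suspicious

# Toy example: nine benign clients plus one sign-flipping attacker.
rng = np.random.default_rng(0)
benign = [rng.normal(1.0, 0.1, size=50) for _ in range(9)]
attacker = [-benign[0]]  # sign-flipped update, a classic poisoning pattern
print(flag_suspicious_updates(benign + attacker))  # -> [9]
```

In a deception-based defence such as the one described above, updates flagged this way would not simply be discarded: they could instead be routed to a decoy model so the attacker keeps observing plausible feedback.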
Keywords: distributed machine learning, cyber deception, cyber resilience
