Federated Backdoor Attacks

A curated list of papers at the intersection of federated learning and backdoor attacks.
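Many of the attack papers below build on the model-replacement idea popularized by "How to Backdoor Federated Learning" (Bagdasaryan et al.): a malicious client scales its update so that, after server-side averaging, the global model lands on the backdoored weights. A minimal NumPy sketch of that scaling trick (equal-weight FedAvg and a known client count are simplifying assumptions, not any paper's exact setup):

```python
import numpy as np

def model_replacement_update(global_weights, backdoored_weights, n_clients):
    """Craft a malicious client model that survives federated averaging.

    If the server averages n_clients models with equal weight, scaling the
    attacker's delta by ~n_clients makes the post-aggregation global model
    land near the backdoored weights (the "model replacement" idea).
    """
    scale = float(n_clients)
    return global_weights + scale * (backdoored_weights - global_weights)

# Toy round: 9 honest clients return the unchanged global model, while
# one attacker submits a scaled update; the average becomes the backdoor.
g = np.zeros(4)                      # current global model
b = np.ones(4)                       # attacker's backdoored model
n = 10
models = [g.copy() for _ in range(n - 1)] + [model_replacement_update(g, b, n)]
new_global = np.mean(models, axis=0)  # equals b up to floating-point error
```

This is exactly the failure mode that the norm-bounding and robust-aggregation defenses in the list try to close off.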

Federated Learning

  • FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning.
    [pdf]
    [code]

    • Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, and Xiangyu Zhang. ICLR, 2023.
  • On the Vulnerability of Backdoor Defenses for Federated Learning.
    [link]
    [code]

    • Pei Fang and Jinghui Chen. AAAI, 2023.
  • Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning.
    [link]

    • Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Bin Wang, Jiqiang Liu, and Xiangliang Zhang. AAAI, 2023.
  • Neurotoxin: Durable Backdoors in Federated Learning.
    [pdf]

    • Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, and Prateek Mittal. ICML, 2022.
  • FLAME: Taming Backdoors in Federated Learning.
    [pdf]

    • Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, and Thomas Schneider. USENIX Security, 2022.
  • DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection.
    [pdf]

    • Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, and Ahmad-Reza Sadeghi. NDSS, 2022.
  • Defending Label Inference and Backdoor Attacks in Vertical Federated Learning.
    [pdf]

    • Yang Liu, Zhihao Yi, Yan Kang, Yuanqin He, Wenhan Liu, Tianyuan Zou, and Qiang Yang. AAAI, 2022.
  • An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning.
    [link]

    • Mary Roszel, Robert Norvill, and Radu State. MDAI, 2022.
  • Against Backdoor Attacks In Federated Learning With Differential Privacy.
    [link]

    • Lu Miao, Wei Yang, Rong Hu, Lu Li, and Liusheng Huang. ICASSP, 2022.
  • Secure Partial Aggregation: Making Federated Learning More Robust for Industry 4.0 Applications.
    [link]

    • Jiqiang Gao, Baolei Zhang, Xiaojie Guo, Thar Baker, Min Li, and Zheli Liu. IEEE Transactions on Industrial Informatics, 2022.
  • Backdoor Attacks-resilient Aggregation based on Robust Filtering of Outliers in Federated Learning for Image Classification.
    [link]

    • Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. Victoria Luzón, and Francisco Herrera. Knowledge-Based Systems, 2022.
  • Defense against Backdoor Attack in Federated Learning.
    [link]
    [code]

    • Shiwei Lu, Ruihu Li, Wenbin Liu, and Xuan Chen. Computers & Security, 2022.
  • Privacy-Enhanced Federated Learning against Poisoning Adversaries.
    [link]

    • Xiaoyuan Liu, Hongwei Li, Guowen Xu, Zongqi Chen, Xiaoming Huang, and Rongxing Lu. IEEE Transactions on Information Forensics and Security, 2021.
  • Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers.
    [link]

    • Xueluan Gong, Yanjiao Chen, Huayang Huang, Yuqing Liao, Shuai Wang, and Qian Wang. IEEE Network, 2022.
  • CRFL: Certifiably Robust Federated Learning against Backdoor Attacks.
    [pdf]

    • Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li. ICML, 2021.
  • Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning.
    [pdf]

    • Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, and Feng Yan. AAAI, 2021.
  • Defending Against Backdoors in Federated Learning with Robust Learning Rate.
    [pdf]

    • Mustafa Safa Ozdayi, Murat Kantarcioglu, and Yulia R. Gel. AAAI, 2021.
  • BaFFLe: Backdoor Detection via Feedback-based Federated Learning.
    [pdf]

    • Sebastien Andreina, Giorgia Azzurra Marson, Helen Möllering, and Ghassan Karame. ICDCS, 2021.
  • PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion.
    [pdf]

    • Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, and Lizhen Cui. WSDM, 2021.
  • Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications.
    [link]

    • Boyu Hou, Jiqiang Gao, Xiaojie Guo, Thar Baker, Ying Zhang, Yanlong Wen, and Zheli Liu. IEEE Transactions on Industrial Informatics, 2021.
  • Stability-Based Analysis and Defense against Backdoor Attacks on Edge Computing Services.
    [link]

    • Yi Zhao, Ke Xu, Haiyang Wang, Bo Li, and Ruoxi Jia. IEEE Network, 2021.
  • Attack of the Tails: Yes, You Really Can Backdoor Federated Learning.
    [pdf]

    • Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. NeurIPS, 2020.
  • DBA: Distributed Backdoor Attacks against Federated Learning.
    [pdf]

    • Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. ICLR, 2020.
  • The Limitations of Federated Learning in Sybil Settings.
    [pdf]
    [extension]
    [code]

    • Clement Fung, Chris J.M. Yoon, and Ivan Beschastnikh. RAID, 2020 (arXiv, 2018).
  • How to Backdoor Federated Learning.
    [pdf]

    • Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. AISTATS, 2020 (arXiv, 2018).
  • BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning.
    [pdf]

    • Arup Mondal, Harpreet Virk, and Debayan Gupta. AAAI Workshop, 2022.
  • Backdoor Attacks and Defenses in Feature-partitioned Collaborative Learning.
    [pdf]

    • Yang Liu, Zhihao Yi, and Tianjian Chen. ICML Workshop, 2020.
  • Can You Really Backdoor Federated Learning?
    [pdf]

    • Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H. Brendan McMahan. NeurIPS Workshop, 2019.
  • Invariant Aggregator for Defending Federated Backdoor Attacks.
    [pdf]

    • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, and Shruti Tople. arXiv, 2022.
  • Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints.
    [pdf]

    • Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, and Leo Yu Zhang. arXiv, 2022.
  • Federated Zero-Shot Learning for Visual Recognition.
    [pdf]

    • Zhi Chen, Yadan Luo, Sen Wang, Jingjing Li, and Zi Huang. arXiv, 2022.
  • Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment.
    [pdf]

    • Tian Liu, Xueyang Hu, and Tao Shu. arXiv, 2022.
  • FL-Defender: Combating Targeted Attacks in Federated Learning.
    [pdf]

    • Najeeb Jebreel and Josep Domingo-Ferrer. arXiv, 2022.
  • Backdoor Attack is A Devil in Federated GAN-based Medical Image Synthesis.
    [pdf]

    • Ruinan Jin and Xiaoxiao Li. arXiv, 2022.
  • SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning.
    [pdf]
    [code]

    • Harsh Chaudhari, Matthew Jagielski, and Alina Oprea. arXiv, 2022.
  • PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations.
    [pdf]
    [code]

    • Manaar Alam, Esha Sarkar, and Michail Maniatakos. arXiv, 2022.
  • Towards a Defense against Backdoor Attacks in Continual Federated Learning.
    [pdf]

    • Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, and Sewoong Oh. arXiv, 2022.
  • Client-Wise Targeted Backdoor in Federated Learning.
    [pdf]

    • Gorka Abad, Servio Paguada, Stjepan Picek, Víctor Julio Ramírez-Durán, and Aitor Urbieta. arXiv, 2022.
  • Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection.
    [pdf]

    • Yein Kim, Huili Chen, and Farinaz Koushanfar. arXiv, 2022.
  • ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning.
    [pdf]

    • Yuxi Mi, Jihong Guan, and Shuigeng Zhou. arXiv, 2022.
  • More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks.
    [pdf]

    • Jing Xu, Rui Wang, Kaitai Liang, and Stjepan Picek. arXiv, 2022.
  • Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks.
    [pdf]

    • Siddhartha Datta and Nigel Shadbolt. arXiv, 2022.
  • Backdoors Stuck at The Frontdoor: Multi-Agent Backdoor Attacks That Backfire.
    [pdf]

    • Siddhartha Datta and Nigel Shadbolt. arXiv, 2022.
  • Federated Unlearning with Knowledge Distillation.
    [pdf]

    • Chen Wu, Sencun Zhu, and Prasenjit Mitra. arXiv, 2022.
  • Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning.
    [pdf]

    • Phung Lai, NhatHai Phan, Abdallah Khreishah, Issa Khalil, and Xintao Wu. arXiv, 2022.
  • Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis.
    [pdf]

    • Zihang Zou, Boqing Gong, and Liqiang Wang. arXiv, 2021.
  • On Provable Backdoor Defense in Collaborative Learning.
    [pdf]

    • Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, and Hai Li. arXiv, 2021.
  • SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification.
    [pdf]

    • Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, and Prateek Mittal. arXiv, 2021.
  • Robust Federated Learning with Attack-Adaptive Aggregation.
    [pdf]
    [code]

    • Ching Pui Wan and Qifeng Chen. arXiv, 2021.
  • Meta Federated Learning.
    [pdf]

    • Omid Aramoon, Pin-Yu Chen, Gang Qu, and Yuan Tian. arXiv, 2021.
  • FLGUARD: Secure and Private Federated Learning.
    [pdf]

    • Thien Duc Nguyen, Phillip Rieger, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Ahmad-Reza Sadeghi, Thomas Schneider, and Shaza Zeitouni. arXiv, 2021.
  • Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy.
    [pdf]

    • Mohammad Naseri, Jamie Hayes, and Emiliano De Cristofaro. arXiv, 2020.
  • Backdoor Attacks on Federated Meta-Learning.
    [pdf]

    • Chien-Lun Chen, Leana Golubchik, and Marco Paolieri. arXiv, 2020.
  • Dynamic Backdoor Attacks against Federated Learning.
    [pdf]

    • Anbu Huang. arXiv, 2020.
  • Federated Learning in Adversarial Settings.
    [pdf]

    • Raouf Kerkouche, Gergely Ács, and Claude Castelluccia. arXiv, 2020.
  • BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture.
    [pdf]

    • Harsh Bimal Desai, Mustafa Safa Ozdayi, and Murat Kantarcioglu. arXiv, 2020.
  • Mitigating Backdoor Attacks in Federated Learning.
    [pdf]

    • Chen Wu, Xian Yang, Sencun Zhu, and Prasenjit Mitra. arXiv, 2020.
  • Learning to Detect Malicious Clients for Robust Federated Learning.
    [pdf]

    • Suyi Li, Yong Cheng, Wei Wang, Yang Liu, and Tianjian Chen. arXiv, 2020.
  • Attack-Resistant Federated Learning with Residual-based Reweighting.
    [pdf]
    [code]

    • Shuhao Fu, Chulin Xie, Bo Li, and Qifeng Chen. arXiv, 2019.
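On the defense side, several entries above (e.g. "Can You Really Backdoor Federated Learning?") bound the norm of each client update before aggregation, which directly caps how far any single scaled update can move the global model. An illustrative norm-clipping aggregator in NumPy (a generic sketch, not any specific paper's implementation; the `max_norm` threshold is a hypothetical parameter):

```python
import numpy as np

def clip_update(client_model, global_weights, max_norm=1.0):
    """Project a client's model back into an L2 ball around the global model."""
    delta = client_model - global_weights
    norm = np.linalg.norm(delta)
    if norm > max_norm:
        delta *= max_norm / norm   # rescale oversized (possibly poisoned) deltas
    return global_weights + delta

def clipped_fedavg(global_weights, client_models, max_norm=1.0):
    """FedAvg over norm-clipped client models."""
    clipped = [clip_update(m, global_weights, max_norm) for m in client_models]
    return np.mean(clipped, axis=0)

# A model-replacement-style scaled update can no longer dominate the average:
g = np.zeros(4)
malicious = g + 100.0 * np.ones(4)   # heavily scaled poisoned model
agg = clipped_fedavg(g, [g, g, malicious], max_norm=1.0)
```

After clipping, the aggregate can drift from the global model by at most `max_norm` per round regardless of how aggressively the attacker scales, though papers in the list (e.g. Neurotoxin, "Attack of the Tails") show that stealthier low-norm attacks can still slip through.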

Source: https://miku2024.top/posts/联邦后门攻击/
Author: KB
Published: June 29, 2024