- 2023-08-30【五期邹昱夫】CCF-A(TIFS'23)SAFELearning: Secure Aggregation in Federated Learning with Backdoor Detectability
"Zhang,Zhuosheng,etal."SAFELearning:SecureAggregationinFederatedLearningwithBackdoorDetectability."IEEETransactionsonInformationForensicsandSecurity(2023)." 本文提出了一种在联邦学习场景下可以保护隐私并防御后门攻击的聚合方法。作者认
- 2023-06-27【五期邹昱夫】CCF-B(IEEE Access'19)Badnets: Evaluating backdooring attacks on deep neural networks
"Gu,Tianyu,etal."Badnets:Evaluatingbackdooringattacksondeepneuralnetworks."IEEEAccess7(2019):47230-47244." 本文提出了外包机器学习时选择值得信赖的提供商的重要性,以及确保神经网络模型安全地托管和从在线存储库下载的重要性。并展示了迁移学习场
- 2023-06-27【五期邹昱夫】CCF-B(RAID'18)Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
"Liu,Kang,BrendanDolan-Gavitt,andSiddharthGarg."Fine-pruning:Defendingagainstbackdooringattacksondeepneuralnetworks."ResearchinAttacks,Intrusions,andDefenses:21stInternationalSymposium,RAID2018,Heraklion,Crete,
- 2023-05-08【五期邹昱夫】CCF-A(NeurIPS'20)Inverting gradients - how easy is it to break privacy in federated learning?
"GeipingJ,BauermeisterH,DrögeH,etal.Invertinggradients-howeasyisittobreakprivacyinfederatedlearning?[J].AdvancesinNeuralInformationProcessingSystems,2020,33:16937-16947." 本文发现梯度的方向比其范数幅值携带了更加重要的信息,以
- 2023-04-19【五期邹昱夫】arXiv('20)iDLG: Improved Deep Leakage from Gradients
"ZhaoB,MopuriKR,BilenH.idlg:Improveddeepleakagefromgradients[J].arXivpreprintarXiv:2001.02610,2020." 本文发现共享梯度肯定会泄露数据真实标签。我们提出了一种简单但可靠的方法来从梯度中提取准确的数据。与DLG相比,可以提取基本真实标签,因此将其
- 2023-04-18【五期邹昱夫】CCF-A(NeurIPS'19)Deep leakage from gradients.
"Zhu,Ligeng,ZhijianLiu,andSongHan."Deepleakagefromgradients."Advancesinneuralinformationprocessingsystems32(2019)." 本文从公开共享的梯度中获得私有训练数据。首先随机生成一对“伪”输入和标签,然后执行正常的向前和向后操作。在从伪数据导出
- 2023-02-24【五期邹昱夫】CCF-A(CCS'22)Membership Inference Attacks by Exploiting Loss Trajectory
"Liu,Yiyong,etal."Membershipinferenceattacksbyexploitinglosstrajectory."Proceedingsofthe2022ACMSIGSACConferenceonComputerandCommunicatio
- 2023-02-23【五期邹昱夫】CCF-A(CVPR'21)On the Difficulty of Membership Inference Attacks
"Rezaei,Shahbaz,andXinLiu."Onthedifficultyofmembershipinferenceattacks."ProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRec
- 2023-02-11【五期邹昱夫】CCF-A(S&P'21)Adversary instantiation: Lower bounds for differentially private machine learning
"NasrM,SongiS,ThakurtaA,etal.Adversaryinstantiation:Lowerboundsfordifferentiallyprivatemachinelearning[C]//2021IEEESymposiumonsecurityan
- 2023-02-10【五期邹昱夫】CCF-A(USENIX'19)Evaluating Differentially Private Machine Learning in Practice
"JayaramanB,EvansD.Evaluatingdifferentiallyprivatemachinelearninginpractice[C]//USENIXSecuritySymposium.2019." 本文对机器学习不同隐私机制
- 2023-02-03【五期邹昱夫】(EuroS&P'22)Dynamic Backdoor Attacks Against Machine Learning Models
"SalemA,WenR,BackesM,etal.Dynamicbackdoorattacksagainstmachinelearningmodels[C]//2022IEEE7thEuropeanSymposiumonSecurityandPrivacy(Euro
- 2023-02-03【五期邹昱夫】CCF-A(KDD'19)Auditing data provenance in text-generation models.
"SongC,ShmatikovV.Auditingdataprovenanceintext-generationmodels[C]//Proceedingsofthe25thACMSIGKDDInternationalConferenceonKnowledgeDiscove
- 2022-12-16【五期邹昱夫】CCF-B(EMNLP'17)Tensor Fusion Network for Multimodal Sentiment Analysis
EMNLP 2017. This paper argues that multimodal sentiment analysis faces two challenges. Inter-modality dynamics: audio, text, and images interact with and influence one another. Intra-modality dynamics: sentiment analysis of colloquial spoken-language text is … A sketch of the tensor-fusion operation named in the title follows.
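As a hedged sketch of the tensor-fusion operation (not the paper's code): each modality embedding is extended with a constant 1 and the three vectors are combined by a 3-way outer product, so the fused tensor contains unimodal, bimodal, and trimodal interaction terms. Dimensions and names below are placeholders, assuming NumPy.

```python
# Tensor-fusion layer, minimal sketch: outer product of [z; 1] vectors.
import numpy as np

def tensor_fusion(z_t, z_a, z_v):
    z_t = np.append(z_t, 1.0)   # the appended 1 preserves lower-order terms
    z_a = np.append(z_a, 1.0)
    z_v = np.append(z_v, 1.0)
    return np.einsum('i,j,k->ijk', z_t, z_a, z_v)   # 3-way outer product

rng = np.random.default_rng(0)
fused = tensor_fusion(rng.normal(size=4), rng.normal(size=3), rng.normal(size=3))
print(fused.shape)            # (5, 4, 4)
# fused[:-1, -1, -1] recovers the unimodal text embedding;
# fused[:-1, :-1, -1] is the text-audio bimodal interaction block, etc.
```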