
Faire: Repairing Fairness of Neural Networks via Neuron Condition Synthesis

  • Tianlin Li
  • Xiaofei Xie*
  • Jian Wang
  • Qing Guo*
  • Aishan Liu
  • Lei Ma
  • Yang Liu
  • *Corresponding author for this work
  • Nanyang Technological University
  • Singapore Management University
  • Agency for Science, Technology and Research, Singapore
  • University of Alberta
  • The University of Tokyo
  • Zhejiang Sci-Tech University

Research output: Contribution to journal › Article › peer-review

Abstract

Deep Neural Networks (DNNs) have achieved tremendous success in many applications, but they have also been shown to exhibit undesirable behaviors with respect to robustness, privacy, and other trustworthiness concerns. Among these, fairness (i.e., non-discrimination) is an important property, especially when DNNs are applied in sensitive domains (e.g., finance and employment). When the training data is imbalanced, DNNs easily learn spurious correlations between protected attributes (e.g., age, gender, race) and the classification task, and consequently develop discriminatory behaviors. Such discriminatory decisions in sensitive applications can have severe social impacts. To expose potential discrimination problems in DNNs before putting them to use, testing techniques have been proposed to identify discriminatory instances (i.e., instances that exhibit the defined discrimination). However, how to repair DNNs …
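The abstract refers to identifying "discriminatory instances" via fairness testing. Under the common individual-fairness definition, an instance is discriminatory if changing only a protected attribute flips the model's prediction. The sketch below is an illustrative check of that definition, not the paper's actual method; the model functions and the feature layout (including `PROTECTED_IDX`) are hypothetical.

```python
# Hedged sketch: detecting an individual discriminatory instance.
# An instance x is discriminatory if varying only its protected
# attribute (e.g., gender, race) changes the model's predicted label.

PROTECTED_IDX = 2  # hypothetical position of the protected attribute


def is_discriminatory(model, x, protected_values):
    """Return True if flipping the protected attribute changes the label."""
    base = model(x)
    for v in protected_values:
        if v == x[PROTECTED_IDX]:
            continue
        x_variant = list(x)
        x_variant[PROTECTED_IDX] = v
        if model(x_variant) != base:
            return True
    return False


# Toy models for illustration: one whose decision leaks the protected
# attribute, and one that ignores it entirely.
biased = lambda x: int(x[0] + x[1] > 1.0 or x[PROTECTED_IDX] == 1)
fair = lambda x: int(x[0] + x[1] > 1.0)

sample = [0.2, 0.3, 0]
print(is_discriminatory(biased, sample, protected_values=[0, 1]))  # True
print(is_discriminatory(fair, sample, protected_values=[0, 1]))    # False
```

Testing techniques such as those mentioned in the abstract search the input space for instances where this check succeeds; repair then aims to remove the model's dependence on the protected attribute.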

Original language: English
Article number: 21
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 33
Issue: 1
DOI
Publication status: Published - 23 Nov 2023
