TY - JOUR
T1 - Faire: Repairing Fairness of Neural Networks via Neuron Condition Synthesis
T2 - ACM Transactions on Software Engineering and Methodology
AU - Li, Tianlin
AU - Xie, Xiaofei
AU - Wang, Jian
AU - Guo, Qing
AU - Liu, Aishan
AU - Ma, Lei
AU - Liu, Yang
N1 - Publisher Copyright:
© 2023 Copyright held by the owner/author(s).
PY - 2023/11/23
Y1 - 2023/11/23
AB - Deep Neural Networks (DNNs) have achieved tremendous success in many applications, yet it has been demonstrated that they can exhibit undesirable behaviors with respect to robustness, privacy, and other trustworthiness concerns. Among these, fairness (i.e., non-discrimination) is an important property, especially when DNNs are applied in sensitive domains (e.g., finance and employment). DNNs easily learn spurious correlations between protected attributes (e.g., age, gender, race) and the classification task, and they develop discriminatory behaviors if the training data is imbalanced. Such discriminatory decisions in sensitive applications can have severe social impacts. To expose potential discrimination problems in DNNs before putting them into use, testing techniques have been proposed to identify discriminatory instances (i.e., instances that exhibit the defined discrimination). However, how to repair DNNs so as to mitigate such discrimination remains an open problem.
KW - Deep learning repair
KW - fairness
KW - individual discrimination
KW - model interpretation
UR - https://www.scopus.com/pages/publications/85183695933
U2 - 10.1145/3617168
DO - 10.1145/3617168
M3 - Article
AN - SCOPUS:85183695933
SN - 1049-331X
VL - 33
JO - ACM Transactions on Software Engineering and Methodology
JF - ACM Transactions on Software Engineering and Methodology
IS - 1
M1 - 21
ER -