Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Machine learning (ML) systems have achieved remarkable performance across a wide range of applications. However, they frequently exhibit unfair behaviors in sensitive application domains (e.g., employment and loans), raising severe fairness concerns. To evaluate and test fairness, engineers often generate individual discriminatory instances to expose unfair behaviors before model deployment. However, existing baselines ignore the naturalness of generation and produce instances that deviate from the real data distribution, which may fail to reveal the actual model fairness, since such unnatural discriminatory instances are unlikely to appear in practice. To address this problem, this paper proposes a framework named Latent Imitator (LIMI) to generate more natural individual discriminatory instances with the help of a generative adversarial network (GAN), in which we imitate the decision boundary of the target model in the semantic latent space of the GAN and then sample latent instances on it. Specifically, we first derive a surrogate linear boundary to coarsely approximate the decision boundary of the target model, which reflects the nature of the original data distribution. Subsequently, to obtain more natural instances, we move random latent vectors onto the surrogate boundary with a one-step movement and perform vector calculations to probe two potential discriminatory candidates that may lie closer to the real decision boundary. Extensive experiments on various datasets demonstrate that LIMI substantially outperforms other baselines in effectiveness (×9.42 more instances), efficiency (×8.71 speedup), and naturalness (+19.65%) on average. In addition, we empirically demonstrate that retraining on test samples generated by our approach improves both individual fairness (45.67% on IFr and 32.81% on IFo) and group fairness (9.86% on SPD and 28.38% on AOD). Our code can be found on our website.
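The abstract's sampling step can be illustrated with a minimal sketch: project a random latent vector onto a surrogate linear boundary w·z + b = 0 in the GAN latent space, probe two candidates on either side of it, and check whether the decoded instance is individually discriminatory. This is not the authors' released implementation; the surrogate boundary (w, b), the generator G, the target model, and the sensitive-attribute handling below are simplified placeholders assumed purely for illustration.

```python
import numpy as np

def probe_candidates(z, w, b, eps=0.1):
    """Project latent vector z onto the surrogate boundary w.z + b = 0 with a
    one-step movement, then return two nearby candidates along the normal."""
    z, w = np.asarray(z, dtype=float), np.asarray(w, dtype=float)
    dist = (np.dot(w, z) + b) / np.dot(w, w)   # signed offset to the hyperplane
    z_proj = z - dist * w                      # one-step projection onto the boundary
    n = w / np.linalg.norm(w)                  # unit normal of the surrogate boundary
    return z_proj, z_proj + eps * n, z_proj - eps * n

def is_discriminatory(x, target_model, sensitive_idx, sensitive_values):
    """Check whether changing only the sensitive attribute flips the prediction."""
    base = target_model(x)
    for v in sensitive_values:
        x_alt = x.copy()
        x_alt[sensitive_idx] = v
        if target_model(x_alt) != base:
            return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    w, b = rng.normal(size=dim), 0.5           # illustrative surrogate boundary
    G = lambda z: np.tanh(z)                   # stand-in for the GAN generator
    model = lambda x: int(x.sum() > 0)         # stand-in for the black-box target model
    z = rng.normal(size=dim)
    for cand in probe_candidates(z, w, b):
        x = G(cand)
        print(is_discriminatory(x, model, sensitive_idx=0, sensitive_values=[-1.0, 1.0]))
```

In this toy setup the candidates straddle the surrogate boundary, so decoding them and flipping only the sensitive attribute is how one would flag individual discriminatory instances; the real method operates on trained GANs and tabular fairness benchmarks rather than these stand-ins.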

Original language: English
Title of host publication: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
Editors: Rene Just, Gordon Fraser
Publisher: Association for Computing Machinery, Inc
Pages: 829-841
Number of pages: 13
ISBN (Electronic): 9798400702211
DOIs
State: Published - 12 Jul 2023
Event: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023 - Seattle, United States
Duration: 17 Jul 2023 → 21 Jul 2023

Publication series

Name: ISSTA 2023 - Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis

Conference

Conference: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023
Country/Territory: United States
City: Seattle
Period: 17/07/23 → 21/07/23

Keywords

  • Fairness Testing
  • Individual Discrimination
  • Latent Space
  • Natural Individual Discriminatory Instances
