Enhancing New Intent Discovery via Robust Neighbor-based Contrastive Learning

  • Zhenhe Wu
  • Xiaoguang Yu
  • Meng Chen*
  • Liangqing Wu
  • Jiahao Ji
  • Zhoujun Li*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

New intent discovery (NID) has become a hot topic in dialogue systems; it aims to discover out-of-domain intents in a conversation corpus and classify the corresponding utterances correctly. Existing methods usually focus on learning compact representations of utterances and leverage clustering algorithms to generate new intents. Inspired by recent progress in contrastive learning, in this work we propose a novel neighbor-based contrastive learning (NCL) model to obtain clustering-friendly representations for utterances. Specifically, to enhance the robustness of NCL, on the one hand, we pick out diverse samples as positive pairs by considering both the anchor neighborhood and nearby neighborhoods. On the other hand, we devise a boundary distance constraint to avoid introducing noisy samples when extending the positives via neighbors. Extensive experiments are conducted on three public NID datasets, and the results demonstrate the competitiveness and effectiveness of our proposed approach.
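To make the abstract's idea concrete, the following is a minimal sketch of a neighbor-based contrastive objective: each anchor treats its nearest neighbors in embedding space as positives, and a similarity threshold (standing in for the paper's boundary distance constraint) filters out neighbors that are too far from the anchor. This is an illustrative reconstruction, not the authors' implementation; the function name, the use of cosine similarity, and the `boundary` parameter are assumptions.

```python
import numpy as np

def neighbor_contrastive_loss(z, k=2, tau=0.5, boundary=None):
    """Illustrative neighbor-based contrastive loss (not the paper's code).

    z        : (n, d) array of utterance embeddings
    k        : number of nearest neighbors used as positives per anchor
    tau      : softmax temperature
    boundary : optional cosine-similarity threshold; neighbors below it are
               skipped, approximating a boundary distance constraint that
               keeps noisy samples out of the positive set
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T                                     # pairwise cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    logits = sim / tau
    # log of the softmax denominator over all other samples (exp(-inf) = 0)
    log_den = np.log(np.exp(logits).sum(axis=1))
    loss, count = 0.0, 0
    for i in range(len(z)):
        nbrs = np.argsort(sim[i])[::-1][:k]           # top-k neighbors of anchor i
        for j in nbrs:
            if boundary is not None and sim[i, j] < boundary:
                continue                              # drop too-distant neighbors
            loss += -(logits[i, j] - log_den[i])      # InfoNCE term for pair (i, j)
            count += 1
    return loss / max(count, 1)
```

With a tight threshold, every candidate neighbor can be rejected and the positive set collapses, which is why such a constraint trades recall of positives for robustness to noise.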

Original language: English
Pages (from-to): 740-744
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2023-August
State: Published - 2023
Event: 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023

Keywords

  • clustering
  • contrastive learning
  • new intent discovery
