Abstract
New intent discovery (NID) has become a hot topic for dialogue systems; it aims to discover out-of-domain intents in a conversation corpus and classify the corresponding utterances correctly. Existing methods usually focus on learning compact utterance representations and then leverage a clustering algorithm to generate new intents. Inspired by recent progress in contrastive learning, in this work we propose a novel neighbor-based contrastive learning (NCL) model to obtain clustering-friendly representations for utterances. Specifically, to enhance the robustness of NCL, on the one hand we select diverse samples as positive pairs by considering both the anchor neighborhood and nearby neighborhoods; on the other hand, we devise a boundary distance constraint to avoid introducing noisy samples when extending the positives via neighbors. Extensive experiments on three public NID datasets demonstrate the competitiveness and effectiveness of the proposed approach.
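The neighbor-based positive selection with a boundary distance constraint described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the choice of Euclidean distance, and the `k` and `boundary` parameters are all assumptions made for the example.

```python
import numpy as np

def select_positive_pairs(embeddings, k=5, boundary=1.0):
    """Hypothetical sketch of neighbor-based positive selection.

    For each anchor utterance embedding, take its k nearest neighbors
    as candidate positives (the anchor neighborhood), then apply a
    boundary distance constraint: drop candidates farther from the
    anchor than `boundary`, to avoid introducing noisy positives.
    """
    # Pairwise Euclidean distances between all utterance embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude the anchor itself

    positives = []
    for i in range(len(embeddings)):
        # k nearest neighbors of anchor i.
        nn = np.argsort(dist[i])[:k]
        # Boundary distance constraint: keep only close-enough neighbors.
        positives.append([j for j in nn if dist[i, j] <= boundary])
    return positives
```

In a contrastive objective, the surviving neighbors would serve as extra positives for the anchor, while all other utterances in the batch act as negatives; the boundary threshold trades positive diversity against label noise.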
| Original language | English |
|---|---|
| Pages (from-to) | 740-744 |
| Number of pages | 5 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Volume | 2023-August |
| DOIs | |
| State | Published - 2023 |
| Event | 24th Annual Conference of the International Speech Communication Association, Interspeech 2023, Dublin, Ireland, 20–24 Aug 2023 |
Keywords
- clustering
- contrastive learning
- new intent discovery
Title: Enhancing New Intent Discovery via Robust Neighbor-based Contrastive Learning