
Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In recent years, pretrained language models (PLMs) have achieved success on several downstream tasks, demonstrating their power in modeling language. To better understand and leverage what PLMs have learned, several techniques have emerged to explore the syntactic structures they entail. However, few efforts have been made to explore the grounding capabilities of PLMs, which are equally essential. In this paper, we highlight the ability of PLMs to discover which token should be grounded to which concept when combined with our proposed erasing-then-awakening approach. Empirical studies on four datasets demonstrate that our approach can awaken latent grounding that is understandable to human experts, even though it is not exposed to such labels during training. More importantly, our approach shows great potential to benefit downstream semantic parsing models. Taking text-to-SQL as a case study, we successfully couple our approach with two off-the-shelf parsers, obtaining an absolute improvement of up to 9.8%.

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: ACL-IJCNLP 2021
Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Publisher: Association for Computational Linguistics (ACL)
Pages: 1174-1189
Number of pages: 16
ISBN (electronic): 9781954085541
DOI
Publication status: Published - 2021
Event: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 - Virtual, Online
Duration: 1 Aug 2021 - 6 Aug 2021

Publication series

Name: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Conference

Conference: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Location: Virtual, Online
Period: 1/08/21 - 6/08/21

