
Zero-Shot Stance Detection

Zero-shot stance detection (ZSSD) aims to detect the stance toward unseen targets by learning stance features from known targets (Allaway and McKeown, 2020).

To deal with zero-shot stance detection, Allaway and McKeown (2020) created a new dataset covering a wide range of topics across broad themes, called Varied Stance Topics (VAST). Based on this dataset, they proposed a topic-grouped attention model that implicitly captures relationships between targets through generalized topic representations. Allaway et al. (2021) adopted a target-specific stance detection dataset (Mohammad et al., 2016) and deployed adversarial learning to extract target-invariant representations for ZSSD. More recently, to exploit both the structural-level and semantic-level information of relational knowledge, Liu et al. (2021) proposed a commonsense-knowledge-enhanced graph model based on BERT (Devlin et al., 2019) to tackle ZSSD.
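The zero-shot setup these works share can be illustrated with a toy, pure-Python sketch (illustrative data and method, not any of the cited models): the training targets are disjoint from the test target, so the only signal that transfers is target-independent stance wording, which is exactly what the models above try to learn in a more principled way.

```python
from collections import Counter

# Toy corpus of (text, target, stance) triples. Targets seen in TRAIN
# never appear in TEST, mimicking the zero-shot stance detection setup.
# Data and target names are made up for illustration, not from VAST.
TRAIN = [
    ("we should support this policy it is great", "gun control", "pro"),
    ("this policy is harmful and we must oppose it", "gun control", "con"),
    ("i strongly support the ban a great idea", "plastic ban", "pro"),
    ("the ban is a bad and harmful idea oppose it", "plastic ban", "con"),
]
TEST = [
    ("a great plan we should support it", "carbon tax"),  # unseen target
]

def train(examples):
    # Count how often each word co-occurs with each stance label;
    # the words (not the targets) carry the transferable stance signal.
    counts = {"pro": Counter(), "con": Counter()}
    for text, _target, stance in examples:
        counts[stance].update(text.split())
    return counts

def predict(counts, text):
    # Score each stance by summed word-stance co-occurrence counts
    # and pick the higher-scoring label.
    scores = {s: sum(c[w] for w in text.split()) for s, c in counts.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
for text, target in TEST:
    print(target, "->", predict(model, text))  # carbon tax -> pro
```

The toy model succeeds on "carbon tax" only because cue words like "support" and "great" were learned from other targets; the cited approaches replace these raw co-occurrence counts with generalized topic representations, adversarially trained target-invariant features, or commonsense knowledge graphs.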

References

  • Emily Allaway and Kathleen McKeown. 2020. Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913–8931, Online. Association for Computational Linguistics.
  • Emily Allaway, Malavika Srikanth, and Kathleen McKeown. 2021. Adversarial learning for zero-shot stance detection on social media. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4756–4767, Online. Association for Computational Linguistics.
  • Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31–41, San Diego, California. Association for Computational Linguistics.
  • Rui Liu, Zheng Lin, Yutong Tan, and Weiping Wang. 2021. Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3152–3157, Online. Association for Computational Linguistics.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.