Contrastive Learning in NLP

Contrastive learning in the latent space has recently shown great promise. It aims to make the representation of a given anchor similar to the representations of its positive pairs and dissimilar to those of its negative pairs.
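To make this objective concrete, here is a minimal sketch of the widely used InfoNCE-style contrastive loss in PyTorch. The function name info_nce, the temperature value, and the in-batch negative scheme are illustrative assumptions, not details taken from any of the papers cited below.

import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.07):
    """In-batch InfoNCE: row i of `positives` is the positive pair for
    row i of `anchors`; every other row in the batch acts as a negative."""
    a = F.normalize(anchors, dim=1)             # (N, d) anchor embeddings
    p = F.normalize(positives, dim=1)           # (N, d) positive embeddings
    logits = a @ p.t() / temperature            # (N, N) scaled cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)     # diagonal entries are the positives

# Toy usage: 8 pairs of 128-dimensional embeddings.
anchors = torch.randn(8, 128)
positives = anchors + 0.1 * torch.randn(8, 128)  # noisy views of the anchors
print(info_nce(anchors, positives))

Minimizing this loss pulls each anchor toward its matched positive while pushing it away from the other examples in the batch, which is the behavior described above.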

Various contrastive learning approaches have been developed for natural language processing tasks, including unsupervised text representation learning (Giorgi et al., 2021), text classification (Qiu et al., 2021), and text clustering (Zhang et al., 2021).

More recently, Li et al. (2021) presented prototypical contrastive learning with a ProtoNCE loss that encourages representations to be closer to their assigned prototypes. However, this method only models the relationship between an anchor instance and its nearest prototype. On the other hand, You et al. (2020) proposed a graph contrastive learning framework based on graph data augmentations, which improves graph representations for better generalizability and robustness. However, their approach ignores the relationships among edges in the graph structure.
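For illustration, below are minimal PyTorch sketches of the two ideas just described: a simplified, single-granularity ProtoNCE-style term, and a GraphCL-style edge-dropping augmentation. The function names proto_nce and drop_edges, the temperature, and the drop probability are illustrative assumptions; the original papers include further details (e.g., per-cluster concentration estimates in ProtoNCE, and several other augmentation types in GraphCL) that are omitted here.

import torch
import torch.nn.functional as F

def proto_nce(embeddings, prototypes, assignments, temperature=0.1):
    """Simplified ProtoNCE-style term: pull each embedding toward its
    assigned cluster prototype and push it away from the other prototypes."""
    z = F.normalize(embeddings, dim=1)           # (N, d) instance embeddings
    c = F.normalize(prototypes, dim=1)           # (K, d) cluster prototypes
    logits = z @ c.t() / temperature             # (N, K) similarities to prototypes
    return F.cross_entropy(logits, assignments)  # assignments: (N,) cluster ids

def drop_edges(edge_index, drop_prob=0.2):
    """GraphCL-style edge perturbation: randomly remove a fraction of edges
    to produce an augmented view of the graph for contrastive training."""
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]                   # edge_index: (2, E) COO edge list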

References

  • John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 879–895, Online. Association for Computational Linguistics.
  • Yao Qiu, Jinchao Zhang, and Jie Zhou. 2021. Improving gradient-based adversarial training for text classification by contrastive learning and auto-encoder. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1698–1707, Online. Association for Computational Linguistics.
  • Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Supporting clustering with contrastive learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5419–5430, Online. Association for Computational Linguistics.
  • Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. 2021. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations.
  • Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33:5812–5823.
