The pseudo-label method is a simple but effective semi-supervised learning technique for improving text classification.
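A minimal sketch of the idea, assuming a generic PyTorch classifier `model` and a confidence threshold (both hypothetical names): predictions on unlabeled data above the threshold are treated as if they were gold labels for further training.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_x, threshold=0.9):
    """Train on confident model predictions as if they were gold labels.

    model and threshold are illustrative assumptions; unlabeled_x is a
    batch of unlabeled inputs.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        keep = confidence >= threshold      # only trust confident predictions
    model.train()
    if not keep.any():
        return None                         # nothing confident enough this step
    logits = model(unlabeled_x[keep])
    return F.cross_entropy(logits, pseudo_labels[keep])
```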
When building an AI model, we often use an auxiliary task to improve the performance of the main task. In this article, we will introduce three basic models.
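As a rough illustration of the general pattern (the article's three models may differ), a shared encoder can feed both a main head and an auxiliary head, with the total loss a weighted sum of the two task losses; all names here are hypothetical.

```python
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with a main head and an auxiliary head (sketch)."""
    def __init__(self, input_dim=128, hidden=64, num_main=5, num_aux=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, num_main)
        self.aux_head = nn.Linear(hidden, num_aux)

    def forward(self, x):
        h = self.encoder(x)
        return self.main_head(h), self.aux_head(h)

# Typical training objective:
# total_loss = main_loss + aux_weight * aux_loss   (e.g. aux_weight = 0.3)
```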
In this article, we will introduce a method that uses a word or sentence similarity matrix for classification.
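One simple instance of this idea (assumed here for illustration, not necessarily the article's exact method) is nearest-neighbor classification: embed the query and the labeled examples, build a cosine similarity matrix, and take the label of the most similar labeled example.

```python
import torch
import torch.nn.functional as F

def similarity_classify(query_emb, support_emb, support_labels):
    """Nearest-neighbor classification via a cosine similarity matrix."""
    q = F.normalize(query_emb, dim=-1)           # (Q, D)
    s = F.normalize(support_emb, dim=-1)         # (S, D)
    sim = q @ s.t()                              # (Q, S) similarity matrix
    return support_labels[sim.argmax(dim=-1)]    # label of most similar example
```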
Contrastive learning is a self-supervised representation learning method and a good way to improve text clustering. In this tutorial, we will introduce what it is.
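A minimal sketch of the standard InfoNCE objective commonly used in contrastive learning (assumed here, since the tutorial's exact loss isn't shown): two augmented views of the same sentence are pulled together, while other sentences in the batch are pushed apart.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    """InfoNCE loss for paired views z1, z2, each of shape (B, D)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```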
Data augmentation can boost the performance of an intent classifier. In this post, we will introduce one such augmentation method.
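One common lightweight augmentation for intent data is random token deletion, in the spirit of EDA (Wei and Zou, 2019); a sketch follows, though the post's specific method may differ.

```python
import random

def random_deletion(text, p=0.1):
    """Randomly drop each token with probability p, keeping at least one."""
    tokens = text.split()
    kept = [t for t in tokens if random.random() > p]
    return " ".join(kept) if kept else random.choice(tokens)

print(random_deletion("book a flight from london to paris"))
```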
In NLP, we usually do not mask any input embeddings for BERT in a text classification task. However, the paper Spelling Error Correction with Soft-Masked BERT proposes a soft-masking method.
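The core idea of soft masking is to blend each token embedding with the [MASK] embedding, weighted by a per-token error probability produced by a detector network; a sketch of that mixing step (tensor names are assumptions):

```python
import torch

def soft_mask(token_emb, mask_emb, error_prob):
    """Soft-Masked BERT style mixing: p * e_mask + (1 - p) * e_token.

    token_emb:  (B, L, D) input token embeddings
    mask_emb:   (D,) embedding of the [MASK] token
    error_prob: (B, L) detector probability that each token is misspelled
    """
    p = error_prob.unsqueeze(-1)                 # (B, L, 1)
    return p * mask_emb + (1.0 - p) * token_emb
```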
Conditional Layer Normalization allows us to normalize a representation conditioned on different targets or features.
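A minimal PyTorch sketch of one common formulation (assumed here): the layer-norm gain and bias are predicted from a condition vector instead of being fixed learned parameters.

```python
import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    """LayerNorm whose scale and shift are generated from a condition vector."""
    def __init__(self, hidden_dim, cond_dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.to_gamma = nn.Linear(cond_dim, hidden_dim)
        self.to_beta = nn.Linear(cond_dim, hidden_dim)

    def forward(self, x, cond):
        # x: (B, L, H) token representations, cond: (B, C) condition vector
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x_norm = (x - mean) / torch.sqrt(var + self.eps)
        gamma = self.to_gamma(cond).unsqueeze(1)   # (B, 1, H)
        beta = self.to_beta(cond).unsqueeze(1)     # (B, 1, H)
        return gamma * x_norm + beta
```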
In this article, we will introduce BERTScore, a sentence similarity metric that often outperforms plain cosine similarity over pooled sentence embeddings.
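The key computation in BERTScore is greedy matching of contextual token embeddings between two sentences; here is a simplified sketch of precision, recall, and F1 over a cosine similarity matrix (embeddings assumed precomputed, IDF weighting omitted).

```python
import torch
import torch.nn.functional as F

def bert_score_f1(cand_emb, ref_emb):
    """Simplified BERTScore: greedy token matching over cosine similarities.

    cand_emb: (Lc, D) candidate token embeddings
    ref_emb:  (Lr, D) reference token embeddings
    """
    c = F.normalize(cand_emb, dim=-1)
    r = F.normalize(ref_emb, dim=-1)
    sim = c @ r.t()                           # (Lc, Lr)
    precision = sim.max(dim=1).values.mean()  # best match per candidate token
    recall = sim.max(dim=0).values.mean()     # best match per reference token
    return 2 * precision * recall / (precision + recall)
```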
In this article, we will introduce how to compute self-attention with relative position representations in deep learning.
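A condensed sketch of the key-side relative position term from Shaw et al. (2018), single head, with the value-side term omitted for brevity: the attention logits get an extra q_i · a_ij term, where a_ij is a learned embedding of the clipped offset j - i.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def relative_attention(q, k, v, rel_emb, max_dist=4):
    """Self-attention with relative position representations (key side only).

    q, k, v: (L, D) single-head projections
    rel_emb: nn.Embedding(2 * max_dist + 1, D) over clipped relative offsets
    """
    L, D = q.shape
    idx = torch.arange(L)
    offsets = (idx[None, :] - idx[:, None]).clamp(-max_dist, max_dist) + max_dist
    a = rel_emb(offsets)                             # (L, L, D)
    logits = (q @ k.t() + torch.einsum("id,ijd->ij", q, a)) / D ** 0.5
    return F.softmax(logits, dim=-1) @ v

rel = nn.Embedding(9, 16)
out = relative_attention(torch.randn(5, 16), torch.randn(5, 16),
                         torch.randn(5, 16), rel)
```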
A parameterization strategy can make optimization more stable and more efficient when fine-tuning a model.
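One concrete example of such a strategy is weight normalization, which reparameterizes each weight matrix as a direction times a magnitude so the two can be optimized separately; whether the article uses this exact strategy is an assumption, but PyTorch ships it directly.

```python
import torch.nn as nn

linear = nn.Linear(64, 32)
linear = nn.utils.weight_norm(linear)  # weight = g * v / ||v||
# 'weight_g' (magnitude) and 'weight_v' (direction) are now the trainable
# parameters; decoupling them often makes optimization smoother.
print([name for name, _ in linear.named_parameters()])
```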