What is Laplace Smoothing? What is Additive Smoothing? Why do we need smoothing in IDF?
Laplace smoothing addresses a problem that arises in NLP tasks such as text classification with Naive Bayes: a word that appears at test time but was never seen in training for a given class gets a likelihood estimate of 0, which drives the entire class score to 0 when the per-word likelihoods are multiplied. Laplace smoothing fixes this by adding 1 to each word count in the numerator and the vocabulary size to the denominator of the class-conditional likelihood P(word | class), so no estimate is exactly zero. Additive (Lidstone) smoothing is the generalization that adds a pseudo-count α (0 < α ≤ 1) instead of 1; Laplace smoothing is the special case α = 1. Smoothing is needed in IDF for a similar reason: adding 1 to the document-frequency term (e.g. scikit-learn's smooth_idf variant, log((1 + N) / (1 + df)) + 1) prevents division by zero for terms absent from the corpus and keeps the IDF weight from vanishing for terms that appear in every document.
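The idea can be sketched in a few lines. The snippet below is a minimal illustration (not a full Naive Bayes classifier): it estimates smoothed likelihoods P(word | class) for one class, using the formula (count + α) / (total + α·|V|). The function name, the toy vocabulary, and the example documents are invented for illustration.

```python
from collections import Counter

def laplace_smoothed_likelihoods(docs_in_class, vocabulary, alpha=1.0):
    """Estimate P(word | class) with additive smoothing.

    docs_in_class: list of token lists belonging to one class.
    alpha=1.0 gives classic Laplace smoothing; 0 < alpha < 1 is Lidstone.
    """
    counts = Counter(token for doc in docs_in_class for token in doc)
    total = sum(counts.values())
    denom = total + alpha * len(vocabulary)
    # Every vocabulary word gets a nonzero probability, even if unseen.
    return {w: (counts[w] + alpha) / denom for w in vocabulary}

vocab = {"free", "win", "meeting", "project"}
spam_docs = [["free", "win", "free"], ["win", "free"]]
probs = laplace_smoothed_likelihoods(spam_docs, vocab)
# "meeting" never occurs in the spam documents, yet probs["meeting"] > 0,
# and the smoothed estimates still sum to 1 over the vocabulary.
```

Without the `alpha` terms, `probs["meeting"]` would be 0 and any test document containing "meeting" would get a spam score of exactly 0, regardless of the other words it contains.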