What is the difference between QDA and Gaussian Mixture Models (GMM)?

Both QDA (Quadratic Discriminant Analysis) and GMMs rest on the premise that the data are generated from two or more Gaussian distributions, but QDA is used in the supervised setting, while GMMs are used in the unsupervised setting. QDA assumes the observations in each labeled class are drawn from a class-specific Gaussian with its own covariance matrix; a GMM assumes the data come from a mixture of some (typically unknown) number of Gaussian components whose memberships are unobserved and must be inferred, usually via expectation-maximization. In other words, the generative model is essentially the same; the difference is whether the component assignments are observed as class labels or treated as latent variables.
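To make the contrast concrete, here is a minimal scikit-learn sketch on a made-up two-Gaussian toy dataset (the means, covariances, and sample sizes are arbitrary, chosen only for illustration). QDA is fit with the labels, while the GMM never sees them and must recover the grouping on its own.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two Gaussian classes with different covariance structures
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=200)
X1 = rng.multivariate_normal([3, 3], [[0.5, -0.2], [-0.2, 1.2]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# QDA: supervised -- the labels tell it which points belong to which
# Gaussian, and it fits a separate covariance matrix per class.
qda = QuadraticDiscriminantAnalysis(store_covariance=True)
qda.fit(X, y)

# GMM: unsupervised -- it never sees y; it infers the component
# memberships and per-component covariances via EM.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)

print(qda.predict(X[:5]))  # class labels learned from y
print(gmm.predict(X[:5]))  # cluster labels inferred without y
```

Note that the GMM's cluster indices are arbitrary (component 0 may correspond to class 1), since without labels there is nothing to anchor the numbering; this label-switching is a hallmark of the unsupervised setting.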