Explain how SVM can be used in regression problems

While SVM is used most often in classification scenarios, it can be extended to regression through Support Vector Regression (SVR). Rather than minimizing squared error, as in least squares, SVR finds the hyperplane in the feature space that keeps the coefficient values as small as possible, subject to every prediction lying within a maximum allowed error of its observation; this maximum error, usually denoted ε, is a hyperparameter that can be tuned. Because some points, such as outliers, will inevitably fall outside any reasonable margin of error, slack variables measuring each observation's deviation beyond the margin are added to the problem, and a second hyperparameter (a penalty weight, often denoted C) controls how heavily those deviations count against the objective function during training.
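In standard notation, this is the ε-insensitive formulation (a sketch of the usual primal problem; w and b are the hyperplane coefficients, ξᵢ and ξᵢ* the slack variables for deviations above and below the margin):

```latex
\min_{w,\,b,\,\xi,\,\xi^*} \quad \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \left(\xi_i + \xi_i^*\right)
\qquad \text{subject to} \qquad
\begin{cases}
y_i - (w \cdot x_i + b) \le \varepsilon + \xi_i \\
(w \cdot x_i + b) - y_i \le \varepsilon + \xi_i^* \\
\xi_i,\ \xi_i^* \ge 0
\end{cases}
```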
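As a concrete illustration, here is a minimal sketch using scikit-learn's SVR (the library, toy data, kernel choice, and parameter values are illustrative assumptions, not part of the original text); `epsilon` corresponds to the margin of error described above and `C` to the slack penalty:

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression data: a noisy sine curve (illustrative only)
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(80)

# epsilon: the maximum error tolerated without any penalty (the margin)
# C: how heavily deviations beyond the margin (the slack) are penalized
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)

print(model.predict(X[:5]))
```

Intuitively, a larger `epsilon` widens the error tube and yields a flatter, sparser model, while a larger `C` penalizes points outside the tube more heavily and fits them more closely.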