What are common choices to use for kernels in SVM?

When using a kernelized SVM, the kernel function must be specified. Common choices include:

  • Linear: The linear kernel, K(x, x′) = x·x′, is the simplest choice and works best when the classes are linearly separable (or nearly so). 
  • Polynomial: The polynomial kernel, K(x, x′) = (γ x·x′ + r)^d, is a possible choice when the data is not linearly separable; it implicitly maps the data into a higher-dimensional space of polynomial combinations of the original features. The degree d of the polynomial must be specified. 
  • Radial Basis Function (RBF): The RBF kernel, K(x, x′) = exp(−γ ‖x − x′‖²), implicitly projects the data into an infinite-dimensional space and is a common choice for non-linear decision boundaries. Its gamma parameter controls how far the influence of a single training observation reaches: a small gamma gives each observation a far-reaching influence, a large gamma a very local one. It is often the default choice when there is no prior knowledge about the shape of the decision boundary. 
  • Sigmoid: The sigmoid kernel, K(x, x′) = tanh(γ x·x′ + r), makes the SVM behave like a two-layer perceptron with a tanh activation function; for complex non-linear decision boundaries, however, the RBF kernel is usually preferred. A brief code comparison of all four kernels follows this list. 
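
As a rough illustration, the sketch below fits all four kernels on a synthetic non-linearly-separable dataset using scikit-learn's SVC. The dataset (make_moons) and the hyperparameter values are illustrative assumptions, not recommendations; in practice they should be tuned, for example with cross-validation.

    # A minimal sketch comparing the four kernels with scikit-learn's SVC.
    # The toy dataset and hyperparameter values are illustrative assumptions.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Two interleaving half-moons: not linearly separable
    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    kernels = {
        "linear": SVC(kernel="linear"),
        "poly": SVC(kernel="poly", degree=3),    # degree d must be specified
        "rbf": SVC(kernel="rbf", gamma="scale"),  # gamma sets influence radius
        "sigmoid": SVC(kernel="sigmoid"),
    }

    for name, clf in kernels.items():
        # Standardize first: RBF and sigmoid kernels depend on distances and
        # dot products, so unscaled features can dominate the kernel values.
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X_train, y_train)
        print(f"{name:8s} test accuracy: {model.score(X_test, y_test):.3f}")

On data like this, the linear kernel will typically underperform the RBF and polynomial kernels, reflecting the guidance above.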
