Abstract
Support vector machines (SVMs) are a class of supervised learning methods that can be used for both classification and regression. (A linear SVM is parametric; with a kernel, the model is effectively non-parametric, since it is expressed in terms of the training points.) An SVM can be seen as an extension of the perceptron, using a modified loss function to find a decision boundary that maximizes the margin between classes. SVMs remain among the most effective off-the-shelf learning algorithms and are still competitive with current methods on many tasks.
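To make the idea concrete before diving in, here is a minimal sketch (assuming scikit-learn is available; the toy data is made up for illustration) of fitting a linear SVM to two separable clusters. The fitted model keeps only the margin-defining training points as its support vectors.

```python
# Minimal linear-SVM sketch on a toy 2D dataset (illustrative only).
from sklearn.svm import SVC

# Two linearly separable clusters.
X = [[0, 0], [1, 1], [1, 0], [3, 3], [4, 4], [4, 3]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# New points are classified by which side of the max-margin boundary they fall on.
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))

# Only the points that pin down the margin are stored as support vectors.
print(clf.support_vectors_)
```

Later parts of the series unpack what "maximizing the margin" means and why only the support vectors matter.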
This is a multi-part series in which I plan to cover the following:
- What is an SVM
- History of SVM
- Classic SVM
- Key Idea: maximizing the margin
- Hard Margin for linearly separable data
- Soft Margin for non-linearly separable data
- Lagrange Multiplier and Dual Formulation
- Why Support Vectors