| Major advantages of support vector machines | Major disadvantages of support vector machines |
|---|---|
| SVMs are robust to observations that are far away from the hyperplane, and estimation is efficient because the solution depends only on the support vectors, the observations lying on or near the margin. | SVMs can be computationally intensive and require a large amount of memory to perform the estimation, especially when the data sets are large [@1396]. |
| SVMs perform well in the "big p, small n" scenario – that is, they can generate successful classifications in the presence of a large number of predictors even when the number of cases in the data set is small. | For nonlinear applications the user must select a kernel for the SVM. The kernel and any hyperparameters it requires must be chosen carefully; a poor choice of kernel, especially, can degrade the performance of the SVM [@1396]. |
| SVMs can adapt to nonlinear decision/classification boundaries through the various kernel functions and provide solutions even when the data are not linearly separable. | SVMs can seem like black boxes in that no final functional form or table of coefficients for the predictors is provided as part of the estimation. |
| Because the underlying optimization problem is convex, SVMs provide a unique (global) solution, unlike machine learning methods such as neural networks that can become trapped in local minima. | |
| Because SVMs are constructed using only the support vectors, they may have better classification performance when applied to data that are unbalanced with respect to the binary outcome [@1388]. | |
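The kernel-sensitivity point in the table can be made concrete with a minimal sketch of the radial basis function (RBF) kernel, one of the most common nonlinear kernels. The `gamma` hyperparameter and the toy points below are illustrative choices, not values from the source; the sketch only shows how strongly the hyperparameter controls the similarity the SVM "sees" between two observations.

```python
import math

def rbf_kernel(x, z, gamma):
    """Gaussian (RBF) kernel: K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# Two toy 2-D observations with squared distance ||a - b||^2 = 2.
a, b = (0.0, 0.0), (1.0, 1.0)

# The same pair of points looks "similar" or "dissimilar" depending on gamma,
# which is why kernel hyperparameters must be tuned (e.g. by cross-validation).
for gamma in (0.1, 1.0, 10.0):
    print(f"gamma={gamma:5.1f}  K(a, b) = {rbf_kernel(a, b, gamma):.6f}")
```

With a small `gamma` the kernel treats distant points as similar (a nearly linear, smooth boundary); with a large `gamma` similarity decays sharply and the SVM can overfit, which illustrates why an incorrect kernel or hyperparameter setting harms performance.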