Major advantages of neural nets

- By combining multiple layers, rather than relying on a single logistic regression function, neural networks can learn non-linear separations between the different categories of prediction, and can capture very complex concepts and patterns (e.g. image recognition) that are often too difficult for other machine learning approaches.
- Neural networks are quite robust to noisy data.
- Neural networks are nonparametric methods and do not require distributional assumptions or model forms to be specified prior to their construction.
- Neural networks are extensible: they can be stacked together to learn more complex abstractions that aid prediction, as described above with respect to "deep learning".

Major disadvantages of neural nets

- Neural networks are relatively opaque "black boxes". Because they are built from a large number of weights learned between neurons, spread across multiple layers, it can be incredibly difficult for a human to interpret how any given prediction was made.
- Relatedly, because neural networks rely on successive learning across many layers, it is also difficult to determine how much importance each input variable has for the eventual prediction. A significant predictor in the input layer may be down-weighted in a subsequent layer, for example.
- Depending on the complexity of the predictive task, a neural network can require extensive training. In some cases this means larger amounts of data are required to apply it.
- Neural networks may require greater computational resources and time compared to other machine learning methods.
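The first advantage, learning non-linear separations, can be illustrated with the classic XOR problem: no single logistic regression can separate its classes, but a network with one hidden layer can. The sketch below is illustrative only; the hidden-layer size, learning rate, and iteration count are arbitrary choices, not taken from the text.

```python
import numpy as np

# XOR is not linearly separable: no single logistic regression can fit it,
# but one hidden layer of non-linear units is enough.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# One hidden layer of 8 tanh units feeding a sigmoid output unit
# (sizes chosen for illustration).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass through both layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean cross-entropy loss.
    d_out = (p - y) / len(X)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # tanh derivative
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (p > 0.5).astype(int).ravel()
print(preds)  # once trained, matches the XOR labels 0, 1, 1, 0
```

A single logistic regression trained the same way would plateau at 50% accuracy on these four points, which is exactly the limitation the layered architecture removes.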