Understanding Different Loss Functions for Neural Networks
The loss function is one of the most important components of a neural network. The loss is simply the prediction error of the network, and the method used to calculate it is called the loss function.
In simple terms, the loss is used to compute the gradients, and the gradients are used to update the weights of the network. This is how a neural network is trained.
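As a rough sketch of that idea, one training step in TensorFlow computes the loss, takes its gradients, and applies them to the weights (the layer sizes and dummy data below are placeholders, not part of any particular model):

```python
import tensorflow as tf

# A minimal sketch of one training step: compute the loss, take its
# gradients with respect to the weights, and let the optimizer update them.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

x = tf.random.normal((8, 3))   # a small batch of dummy inputs
y = tf.random.normal((8, 1))   # matching dummy targets

with tf.GradientTape() as tape:
    predictions = model(x, training=True)
    loss = loss_fn(y, predictions)          # prediction error for the batch

gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```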
Keras and TensorFlow provide various built-in loss functions for different objectives. In this guide, I will cover the following essential loss functions, which can be used for most objectives (a minimal example of attaching a loss to a model follows the list).
- Mean Squared Error (MSE)
- Binary Crossentropy (BCE)
- Categorical Crossentropy (CC)
- Sparse Categorical Crossentropy (SCC)
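In Keras, a built-in loss is typically chosen when compiling the model, either by name or by passing a loss object. The model below is only a placeholder to illustrate the API:

```python
import tensorflow as tf

# Placeholder model; only the compile() call matters here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(1),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.MeanSquaredError(),   # or simply loss="mse"
)
```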
Mean Squared Error
MSE loss is used for regression tasks. As the name suggests, it is calculated as the mean of the squared differences between the actual (target) values and the predicted values.
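Written out, for n samples with targets y_i and predictions ŷ_i, this is:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$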
Example
For example, consider a neural network that takes house data and predicts the house price. In this case, you can use the MSE loss. Basically, whenever the output is a real number, this is the loss function to reach for.
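As a small sketch of this scenario (the feature count, layer sizes, and data below are hypothetical placeholders, not a real dataset), a house-price regressor trained with MSE in Keras might look like this:

```python
import numpy as np
import tensorflow as tf

# Hypothetical house data: 4 features (e.g. area, bedrooms, age, location score)
# and a real-valued price target. The values are random placeholders.
x_train = np.random.rand(100, 4).astype("float32")
y_train = (np.random.rand(100, 1) * 500_000).astype("float32")

# A small regression network with a single linear output for the price.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),            # real-valued output, no activation
])

model.compile(optimizer="adam", loss="mse")   # Mean Squared Error
model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)
```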