Role of Bias in Neural Networks

I know about gradient descent and the back-propagation algorithm. What I don't get is: when and how should I use a bias?


For example, when training on the AND function with 2 inputs and 1 output, the network does not learn the correct weights; however, with 3 inputs (one of which is a bias input fixed to 1), it does.
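A minimal sketch of that experiment, using the classic perceptron learning rule (the step activation, learning rate, and epoch count below are arbitrary illustrative choices):

import numpy as np

# The AND truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

def train(X, y, use_bias, lr=0.1, epochs=50):
    if use_bias:
        # Third input fixed to 1: its weight acts as the bias.
        X = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w > 0 else 0       # step activation
            w += lr * (target - pred) * xi      # perceptron update
    return w

print(train(X, y, use_bias=False))  # never settles: a boundary through (0, 0)
                                    # cannot separate AND
print(train(X, y, use_bias=True))   # converges; the third weight is the bias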

Answered by Yash raj

Bias is just like the intercept added in a linear equation. It is an additional parameter in the neural network that is used to adjust the output along with the weighted sum of the inputs to the neuron; that is why it is called the neural network bias. The bias also allows you to shift the activation function to the left or right.

output = sum(weights * inputs) + bias
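Written out, that formula is just a dot product plus a scalar. A quick sketch (all numbers below are made up for illustration):

import numpy as np

inputs  = np.array([0.5, -1.2, 0.3])   # made-up inputs
weights = np.array([0.8,  0.4, -0.5])  # made-up weights
bias    = 0.1

pre_activation = np.sum(weights * inputs) + bias
print(pre_activation)   # -0.23 + 0.1 = -0.13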

The output is calculated by multiplying each input by its weight, summing, adding the bias, and passing the result through an activation function such as the sigmoid. The bias acts like a constant that helps the model fit the given data, while the steepness of the sigmoid depends on the weights of the inputs.
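Both effects are easy to check numerically: a larger weight makes the sigmoid steeper, and a nonzero bias slides its midpoint. A small sketch (the values of w and b are arbitrary):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-4.0, 4.0, 5)

# Larger weight -> steeper, more step-like transition.
print(np.round(sigmoid(1.0 * x), 3))
print(np.round(sigmoid(5.0 * x), 3))

# The sigmoid crosses 0.5 where w*x + b = 0, i.e. at x = -b/w,
# so the bias shifts the whole curve left or right.
w, b = 1.0, 2.0
print(-b / w)   # midpoint moves from x = 0 to x = -2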

A simpler way to understand the bias is through the constant c of a linear function:

y = mx + c

The constant c allows you to move the line up and down to fit the data better. If c is absent, the line must pass through the origin (0, 0), and you will often get a poorer fit.
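You can check this directly with an ordinary least-squares fit on made-up data that does not pass through the origin; numpy's lstsq also returns the sum of squared residuals:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.1, 3.0, 3.9, 5.2])   # made-up data, roughly y = x + 2

A_with    = np.column_stack([x, np.ones_like(x)])  # fit y = m*x + c
A_without = x[:, None]                             # fit y = m*x, forced through (0, 0)

coef_c, res_c, *_ = np.linalg.lstsq(A_with, y, rcond=None)
coef_0, res_0, *_ = np.linalg.lstsq(A_without, y, rcond=None)

print(coef_c, res_c)   # slope ~1, intercept ~2, near-zero residual
print(coef_0, res_0)   # the slope alone cannot absorb the offset: large residual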


