What is a neural network?
Artificial neural networks are loosely inspired by the biological brain, translated to the computer. It is not a close comparison: there are neurons, activations, and a loss, but the underlying process is quite different.
A single neuron alone can't do much, but when it is combined with other neurons (usually thousands, millions, or more), the result can outperform many other machine learning algorithms.
These neural networks contain trainable parameters, and through these parameters the model learns to adjust itself in order to produce the desired outputs.
Neural networks are often considered a black box: we can see how they reach a result, but it is difficult to know the "why".
The Dense Layer:
is a deeply connected neural network layer, which means each neuron in the dense layer receives input from all neurons of the previous layer. The dense layer is the most commonly used layer type in models.
In the background, the dense layer performs a matrix-vector multiplication. The values in the matrix are parameters that can be trained and updated with the help of backpropagation.
The output generated by the dense layer is an 'm'-dimensional vector, so a dense layer is often used to change the dimensionality of a vector. Dense layers can also apply operations like rotation, scaling, and translation to the vector.
Each neuron of a given layer is connected to every neuron of the next layer, which means the output values of one layer become the input values of the next. Each connection between neurons has a trainable weight, and each neuron has a trainable bias. Each input value is multiplied by its weight; once all the weighted inputs flow into the neuron, they are summed and the bias is added to them. The bias offsets the output, which helps the network map more real-world, dynamic data.
Examples of how weights and biases work:
We can see how weights and biases affect the output, and this will make more sense when we come to explain activation functions later.
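As a minimal sketch (the numbers here are illustrative, not from the book), we can pass a single input through one neuron and watch how the weight scales the output while the bias shifts it:

```python
# One input through one neuron: output = input * weight + bias
x = 2.0

# Increasing the weight scales the output
print(x * 1.0 + 0.0)  # 2.0
print(x * 2.0 + 0.0)  # 4.0

# The bias offsets the output up or down
print(x * 1.0 + 3.0)  # 5.0
print(x * 1.0 - 3.0)  # -1.0
```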
Now let's implement the first neural network building block, with weights and a bias, in Python:
1- output = sum(inputs * weights) + bias
2- output = activation(output)
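Step 1 can be sketched for a single neuron with three inputs (the input, weight, and bias values below are illustrative; the activation of step 2 comes later):

```python
# Step 1: weighted sum of the inputs plus the bias
inputs = [1.0, 2.0, 3.0]
weights = [0.2, 0.8, -0.5]
bias = 2.0

output = sum(i * w for i, w in zip(inputs, weights)) + bias
print(output)  # 0.2 + 1.6 - 1.5 + 2.0 = 2.3
```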
Coding a Layer
The dot product:
The implementation of a simple dot product:
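A simple dot product, written as a plain Python loop, is just the sum of the element-wise products of two equal-length vectors:

```python
def dot(a, b):
    """Dot product of two equal-length vectors: sum of element-wise products."""
    assert len(a) == len(b), "vectors must be the same length"
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [0.2, 0.8, -0.5]))  # 0.2 + 1.6 - 1.5 = 0.3
```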
Now the layer with dot product:
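Using that dot product, a layer is just one such computation per neuron: each neuron in the layer has its own weight vector and its own bias (the specific values below are illustrative):

```python
def dot(a, b):
    # Sum of element-wise products of two equal-length vectors
    return sum(x * y for x, y in zip(a, b))

inputs = [1.0, 2.0, 3.0, 2.5]

# One weight vector and one bias per neuron: this layer has 3 neurons
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]

layer_outputs = [dot(inputs, w) + b for w, b in zip(weights, biases)]
print([round(v, 3) for v in layer_outputs])  # [4.8, 1.21, 2.385]
```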
Now, to make things easier, we will use NumPy
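With NumPy, the per-neuron loop collapses into a single matrix-vector multiplication via `np.dot` (same illustrative values as before):

```python
import numpy as np

inputs = np.array([1.0, 2.0, 3.0, 2.5])
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])
biases = np.array([2.0, 3.0, 0.5])

# np.dot(matrix, vector) computes one dot product per row (per neuron)
layer_outputs = np.dot(weights, inputs) + biases
print(layer_outputs)  # approximately [4.8, 1.21, 2.385]
```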
Here we have learned how to implement a very basic neural network layer. Of course, this layer will not learn or predict anything on its own, because it is only one step and one small part of the whole neural network architecture.
We will go through all of the remaining steps in the coming days.
** Most of the code and explanations are from the amazing book Neural Networks from Scratch by Harrison Kinsley and Daniel Kukiela
Neural Networks from Scratch (nnfs.io)