Aayaan Sahu
According to Wikipedia, a neural network is a computing system inspired by the biological neural networks that constitute animal brains. These systems are made up of neurons, also called nodes, which are organized in layers. Together these layers form a network, called the model, that performs computations that help the model learn.
A node is the smallest functioning part of a neural network. A node has an input, an output, and a set of weights that connect it to other nodes. Some nodes may also have biases. Weights and biases are the tunable parameters of a model. A layer is a collection of nodes. Values pass through the layers of the network until they reach the output layer, the final layer, whose values are the model's prediction.
We have a simple neural network with three layers: I, H, and O. I is the input layer, which receives the input data. H is the hidden layer, which receives the output of I. O is the output layer, which receives the output of H.
Now we can declare the weights within this system. The weights are the hidden, tunable parameters of the system that factor into the output of the system. Let's name each weight after the two nodes it connects. For instance, the weight between node i1 in the input layer and node h1 in the hidden layer is called w_i1h1.
Values propagate through the layers of this neural network, transformed by operations at individual nodes, until they converge at the output node.
Let's find out how this works. Since layer I is an input layer, it doesn't actually perform any computations; it just passes values to the next layer. Let's figure out the value of the node h1. The value of h1 is the sum of the inputs multiplied by their corresponding weights. If this seems confusing, a written-out example will help. Say the values passed to the input layer are i1, i2, and i3, and the weights connecting them to h1 are w_i1h1, w_i2h1, and w_i3h1. Then h1 = w_i1h1 * i1 + w_i2h1 * i2 + w_i3h1 * i3. It's important to realize that the weights connect from all the nodes (i1, i2, i3) in the input layer to one node in the hidden layer (h1).
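The weighted sum above can be sketched in a few lines of Python. The input and weight values here are made up purely for illustration; any numbers would do.

```python
# Weighted sum at a single hidden node h1.
inputs = [1.0, 2.0, 3.0]       # i1, i2, i3 (illustrative values)
weights = [0.5, -0.25, 0.5]    # w_i1h1, w_i2h1, w_i3h1 (illustrative values)

# h1 is each input multiplied by its corresponding weight, summed together.
h1 = sum(w * i for w, i in zip(weights, inputs))
print(h1)  # 0.5*1.0 + (-0.25)*2.0 + 0.5*3.0 = 1.5
```

Each hidden node has its own set of weights, so this same computation is repeated once per node in the hidden layer.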
If you've done any research on neural networks before, you might know that the nodes within the model also have what are called biases. Biases are also tunable parameters of the model.
The input layer doesn't have biases, since it just serves to pass inputs to the next layer. To see how biases come into play, let's expand the example above. Say the bias for h1 is b_h1. The bias is simply added onto the weighted sum, so h1 = w_i1h1 * i1 + w_i2h1 * i2 + w_i3h1 * i3 + b_h1.
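Extending the earlier sketch, the bias is just one extra addition at the end. Again, the bias value here is an illustrative assumption, not anything a real model would be initialized with.

```python
# Weighted sum at h1, now with a bias term.
inputs = [1.0, 2.0, 3.0]       # i1, i2, i3 (illustrative values)
weights = [0.5, -0.25, 0.5]    # w_i1h1, w_i2h1, w_i3h1 (illustrative values)
bias = 0.25                    # b_h1 (illustrative value)

# The bias shifts the weighted sum up or down, independent of the inputs.
h1 = sum(w * i for w, i in zip(weights, inputs)) + bias
print(h1)  # 1.5 + 0.25 = 1.75
```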
All of the nodes in the hidden layers and the output layer follow the same process. The final values at the output layer are what the model predicted.
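Putting the pieces together, a full forward pass is just this weighted-sum-plus-bias step applied layer by layer. The sketch below assumes a tiny network with 3 inputs, 2 hidden nodes, and 1 output node, with all weights and biases chosen arbitrarily; a real model would learn these values.

```python
def node_value(inputs, weights, bias):
    """One node's value: the weighted sum of its inputs plus its bias."""
    return sum(w * i for w, i in zip(weights, inputs)) + bias

inputs = [1.0, 2.0, 3.0]  # values received by the input layer

# Hidden layer: one weight list and one bias per hidden node (illustrative).
hidden_weights = [[0.5, -0.25, 0.5], [0.25, 0.25, -0.25]]
hidden_biases = [0.25, 0.0]
hidden = [node_value(inputs, w, b)
          for w, b in zip(hidden_weights, hidden_biases)]

# Output layer: consumes the hidden layer's values (illustrative parameters).
output_weights = [[1.0, -1.0]]
output_biases = [0.5]
outputs = [node_value(hidden, w, b)
           for w, b in zip(output_weights, output_biases)]

print(hidden)   # [1.75, 0.0]
print(outputs)  # [2.25]
```

The output list is the model's prediction; every value in it was produced by the same node computation repeated across the layers.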
There is much more to a neural network, such as activation functions, loss functions, and optimizers. We will cover these in the sections to come.