Artificial Neural Network: A beginner’s guide
ANN is often a confusing topic for beginners in model building and machine learning.
An Artificial Neural Network (ANN) is a machine learning algorithm that is loosely modelled on what is currently known about how the human brain functions.
A neural network models the relationship between a set of input signals and an output, much as a biological brain responds to stimuli from sensory inputs. The brain uses a network of interconnected cells called neurons to provide its learning capability; an ANN uses a network of artificial neurons, or nodes, to solve challenging learning problems.
Importance of learning neural networks
1. Ability to learn
· Neural networks figure out how to perform their function on their own.
· They determine their function based only on sample inputs.
2. Ability to generalize
· They can produce outputs for inputs they have not been taught how to deal with.
· They can easily be retrained to adapt to changing environmental conditions.
Neural network architecture
A neural network is made of layers with many interconnected nodes (neurons). There are three main layers: the input layer, the hidden layer and the output layer. There can be more than one hidden layer, and as the number of hidden layers increases, the complexity of the neural network also increases.
Single Layered Network
In a single-layer network, the input nodes process the incoming data exactly as received. The network has only one set of connection weights (w1, w2 and w3), which is why it is termed a single-layer network.
A multilayer neural network adds one or more hidden layers that process the signals from the input nodes before they reach the output node.
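The single-layer case can be sketched in a few lines of Python; the inputs and weights below are made-up values for illustration:

```python
def single_layer_output(inputs, weights):
    # One set of connection weights: each input is multiplied by its
    # weight and the products are summed to form the output node's value.
    return sum(w * x for w, x in zip(weights, inputs))

# Three inputs and three weights (w1, w2, w3), as in the text.
print(single_layer_output([1.0, 2.0, 3.0], [0.5, -0.2, 0.1]))
```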
Representation of a neuron
· w — weights
· n — number of inputs
· xi — input
· f(x) — activation function
· y — output
A neuron is an information-processing unit that is fundamental to the operation of a neural network. The three basic elements of the neuron model are the synaptic weights, a combination (summation) function and an activation function. An external bias input is added to increase or lower the net input to the activation function. The neuron output can be expressed as y = f(w₁x₁ + w₂x₂ + … + wₙxₙ + bias), where f is some activation function. The activation function is sometimes also called a transfer function. The output of the activation function is passed on to the neurons in the next layer, and so on until the final output layer.
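The weighted-sum-plus-bias computation of a single neuron can be written directly in Python. Sigmoid is used here as the activation function f, and all the numbers are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(x, w, bias):
    # Weighted sum of inputs plus bias, passed through the activation function.
    z = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return sigmoid(z)

# With zero weights and zero bias, z = 0 and sigmoid(0) = 0.5.
print(neuron_output([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```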
An activation function is the mechanism by which the artificial neuron processes incoming information and passes it on through the network. A threshold activation function is called so because it produces an output signal only once a specified input threshold has been reached. Different types of activation functions are the unit step activation function, the sigmoid activation function, the hyperbolic tangent activation function and the rectified linear unit (ReLU) activation function.
a. Unit step activation function
With this function, the neuron fires when the sum of the input signals is at least zero. Because its shape resembles a stair step, it is sometimes called the unit step activation function.
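A minimal sketch of the unit step function described above:

```python
def unit_step(z):
    # Fires (outputs 1) only once the summed input reaches the threshold of zero.
    return 1 if z >= 0 else 0

print(unit_step(-0.5), unit_step(0.0), unit_step(3.2))  # 0 1 1
```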
b. Sigmoid activation function
This is the most commonly used activation function. Unlike the step activation function, the output signal is no longer binary; the output values can fall anywhere in the range from zero to one. The sigmoid is differentiable, which means it is possible to calculate its derivative across the entire range of inputs. This feature is crucial for creating efficient ANN optimization algorithms.
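A sketch of the sigmoid and its derivative; the differentiability mentioned above is what lets gradient-based optimizers work:

```python
import math

def sigmoid(z):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # The derivative has the convenient closed form s * (1 - s).
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```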
c. Hyperbolic Tangent Activation Function
This function, also known as tanh, is a transfer function widely used in deep neural networks, and it commonly serves as the activation function in recurrent neural networks. It is a shifted and scaled version of the sigmoid activation, and its output has a wider range of values (from minus one to one).
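Comparing tanh with the sigmoid shows the wider output range, and the shift-and-scale relationship can be checked numerically (the sample inputs are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# tanh maps inputs to (-1, 1), while sigmoid maps them to (0, 1).
# In fact tanh(z) = 2 * sigmoid(2z) - 1, i.e. a shifted, scaled sigmoid.
for z in (-2.0, 0.0, 2.0):
    print(round(math.tanh(z), 4), round(sigmoid(z), 4))
```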
The synaptic weights of the neurons are determined by the neural network's learning process (learning algorithm). The most common measure of the error (cost function) is the mean squared error, E = (y - d)², where y is the actual output and d is the desired output. The network is trained by iterating this process: provide the network with an input, then update the network's weights to reduce the error.
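This iterate-and-update loop can be sketched with a single linear neuron and the squared-error cost E = (y - d)². The training data and learning rate below are made up for illustration:

```python
# Learn a weight w so that y = w * x matches the desired outputs d = 2 * x.
training_data = [(1.0, 2.0), (2.0, 4.0)]  # (input x, desired output d)
w = 0.0
learning_rate = 0.1

for epoch in range(100):
    for x, d in training_data:
        y = w * x                             # forward pass
        # E = (y - d)^2, so dE/dw = 2 * (y - d) * x
        w -= learning_rate * 2 * (y - d) * x  # update weight to reduce E

print(round(w, 4))  # converges close to 2.0
```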
A cost function is a loss function, a function to be minimized. In an ANN, the cost function returns a number indicating how well the neural network maps training examples to correct outputs.
Examples of cost functions are the quadratic cost function and the cross-entropy cost function. Desirable properties of a cost function for an ANN are:
1. It should tend to zero as the actual output approaches the desired output.
2. It should be globally continuous and differentiable.
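Both example cost functions satisfy the first property for a single training example; a sketch with a binary target d and a predicted probability y (sample values are illustrative):

```python
import math

def quadratic_cost(y, d):
    return (y - d) ** 2

def cross_entropy_cost(y, d):
    # Requires 0 < y < 1; tends to zero as y approaches the binary target d.
    return -(d * math.log(y) + (1 - d) * math.log(1 - y))

# Both costs shrink toward zero as the actual output y nears the target d = 1.
for y in (0.5, 0.9, 0.99):
    print(round(quadratic_cost(y, 1.0), 4), round(cross_entropy_cost(y, 1.0), 4))
```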
The learning rate is a hyperparameter that controls how much the model changes in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging: a small value may result in a long training process that could get stuck, while a large value may result in learning a sub-optimal set of weights too fast, or in an unstable training process.
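The effect of the learning rate can be seen on a toy cost E = (w - 2)², whose gradient is 2(w - 2); the specific rates below are illustrative:

```python
def descend(learning_rate, steps=50):
    # Gradient descent on E = (w - 2)^2 starting from w = 0.
    w = 0.0
    for _ in range(steps):
        w -= learning_rate * 2 * (w - 2)
    return w

print(round(descend(0.1), 4))    # small rate: converges near the minimum at 2
print(round(descend(0.001), 4))  # too small: still far from 2 after 50 steps
print(abs(descend(1.1)))         # too large: overshoots and diverges
```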
- Epoch: one epoch = one forward pass and one backward pass of all the training examples.
- Batch size: batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you’ll need.
- Number of iterations = number of passes, each pass using [batch size] examples. To be clear, one pass = one forward pass + one backward pass (the forward pass and backward pass are not counted as two different passes).
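These three terms combine with simple arithmetic; the dataset and batch sizes below are made up:

```python
num_examples = 1000   # hypothetical training set size
batch_size = 100
epochs = 5

iterations_per_epoch = num_examples // batch_size
total_iterations = iterations_per_epoch * epochs

print(iterations_per_epoch)  # 10 passes to see every example once (one epoch)
print(total_iterations)      # 50 forward/backward passes in total
```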
During my initial data science days, when I was new to ANNs and deep learning, I took the help of the TensorFlow Playground site, where you can play around with neural networks. It is a highly interactive neural network playground where anyone with a basic understanding of neural networks can experiment and grasp the basic terminology in a better way. It helps you visually understand the behavior of neural networks under various selected parameters. I strongly suggest you understand the basic concepts of neural networks before going to this site.