Activation functions are mathematical functions applied to the output of each neuron (node) in a neural network. They determine whether, and how strongly, a neuron is activated, i.e., how much of its signal is passed on to the next layer. Activation functions introduce non-linear transformations to the data, enabling the neural network to approximate complex functions and learn meaningful representations from the data.
Without activation functions, the neural network would be equivalent to a simple linear model, no matter how many layers it has, because a composition of linear transformations is itself a linear transformation. This severely limits what the network can represent and learn.
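As a quick illustration (a minimal NumPy sketch, not part of any standard library), stacking two layers without an activation function collapses into a single linear layer with combined weights:
import numpy as np

# Two "layers" with weights and biases but no activation function
W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)
W2, b2 = np.random.randn(2, 4), np.random.randn(2, 1)
x = np.random.randn(3, 1)

# Passing the input through both linear layers...
y_stacked = np.dot(W2, np.dot(W1, x) + b1) + b2

# ...gives exactly the same result as one combined linear layer
W_combined = np.dot(W2, W1)
b_combined = np.dot(W2, b1) + b2
y_single = np.dot(W_combined, x) + b_combined

print(np.allclose(y_stacked, y_single))  # True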
Common Activation Functions:
Let's explore some of the most commonly used activation functions in deep learning:
Sigmoid Function:
The sigmoid function, also known as the logistic function, is a popular activation function that maps the input to a value between 0 and 1. It is defined as:
sigmoid(x) = 1 / (1 + exp(-x))
The sigmoid function is differentiable, which makes it suitable for gradient-based optimization methods like backpropagation. However, it saturates for inputs of large magnitude, where its gradient approaches zero. These vanishing gradients can slow down the learning process.
Python Implementation:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))
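To see the saturation described above, we can evaluate the sigmoid and its derivative, sigmoid(x) * (1 - sigmoid(x)), at a few points (a small illustrative check using the function just defined):
def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

for value in [-10.0, 0.0, 10.0]:
    print(value, sigmoid(value), sigmoid_derivative(value))
# At x = -10 and x = 10 the derivative is about 4.5e-5, so almost no gradient
# flows through a saturated sigmoid; this is the vanishing gradient problem.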
ReLU (Rectified Linear Unit):
ReLU is one of the most popular activation functions due to its simplicity and efficiency. It replaces all negative values with zero and leaves positive values unchanged. The function can be defined as:
ReLU(x) = max(0, x)
ReLU effectively addresses the vanishing gradient problem experienced by sigmoid, since its gradient is exactly 1 for positive inputs. However, it can cause the "dying ReLU" problem, where a neuron outputs zero (and receives zero gradient) for all inputs and therefore ceases to learn.
Python Implementation:
def relu(x):
    return np.maximum(0, x)
Leaky ReLU:
To address the "dying ReLU" problem, the Leaky ReLU was introduced. It is similar to ReLU but allows a small negative slope for negative inputs. The function can be defined as:
Leaky ReLU(x) = max(alpha * x, x)
Where alpha is a small positive constant (typically 0.01).
Python Implementation:
def leaky_relu(x, alpha=0.01):
    return np.maximum(alpha * x, x)
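The difference is easiest to see on negative inputs, where ReLU outputs zero (and passes no gradient) while Leaky ReLU keeps a small slope of alpha (a quick comparison using the relu and leaky_relu functions defined above):
x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(relu(x))        # [ 0.    0.    0.    1.    3.  ]
print(leaky_relu(x))  # [-0.03 -0.01  0.    1.    3.  ]
# For x < 0 the slope of ReLU is 0, so a "dead" unit receives no gradient,
# while Leaky ReLU still has slope alpha = 0.01 and the neuron can keep learning.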
Tanh (Hyperbolic Tangent):
The tanh function is another sigmoidal activation function that maps input values to the range (-1, 1). It is defined as:
tanh(x) = (2 / (1 + exp(-2x))) - 1
Like sigmoid, tanh can suffer from vanishing gradients for inputs of large magnitude, but its output is zero-centered, which often makes it a better choice than sigmoid for hidden layers.
Python Implementation:
def tanh(x):
    return np.tanh(x)
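The zero-centered property is easy to check numerically: for inputs symmetric around zero, tanh outputs average to zero, whereas sigmoid outputs average to 0.5 (a small check using the functions defined above):
x = np.linspace(-3, 3, 7)
print(np.mean(tanh(x)))     # ~0.0: outputs are centered around zero
print(np.mean(sigmoid(x)))  # 0.5: outputs are always positive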
Softmax:
The softmax function is often used as the activation function for the output layer of a neural network when dealing with multi-class classification problems. It takes a vector of real numbers and converts them into a probability distribution. It is defined as:
softmax(x_i) = exp(x_i) / sum(exp(x_j)) for all j
Where x_i and x_j are elements of the input vector.
Python Implementation:
def softmax(x):
    exp_x = np.exp(x - np.max(x))  # Subtract the max to avoid numerical instability
    return exp_x / exp_x.sum(axis=0)
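Applying softmax to a vector of scores yields non-negative values that sum to 1, i.e., a valid probability distribution (a quick check using the function above):
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # approximately [0.659 0.242 0.099]
print(probs.sum())  # 1.0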
Activation Functions in Code:
Now, let's see how to use these activation functions in a simple neural network implementation using Python and the NumPy library.
import numpy as np

class NeuralNetwork:
    def __init__(self):
        self.weights = None
        self.bias = None

    def initialize_parameters(self, input_size, hidden_size, output_size):
        # Randomly initialize weights; biases start at zero
        self.weights = {
            'hidden': np.random.randn(hidden_size, input_size),
            'output': np.random.randn(output_size, hidden_size)
        }
        self.bias = {
            'hidden': np.zeros((hidden_size, 1)),
            'output': np.zeros((output_size, 1))
        }

    def forward_propagation(self, x):
        # Hidden layer: linear transformation followed by ReLU
        hidden_output = np.dot(self.weights['hidden'], x) + self.bias['hidden']
        hidden_activation = relu(hidden_output)

        # Output layer: linear transformation followed by softmax
        output = np.dot(self.weights['output'], hidden_activation) + self.bias['output']
        output_activation = softmax(output)

        return output_activation
# Example usage:
input_size = 784
hidden_size = 256
output_size = 10

nn = NeuralNetwork()
nn.initialize_parameters(input_size, hidden_size, output_size)

# Assuming 'x' is the input data, shaped as a column vector of size (input_size, 1);
# random values are used here as a stand-in
x = np.random.randn(input_size, 1)
output = nn.forward_propagation(x)
Conclusion:
Activation functions are essential components of neural networks, allowing them to learn complex patterns and make accurate predictions. In this article, we explored some common activation functions, including sigmoid, ReLU, Leaky ReLU, tanh, and softmax, and implemented them using Python and NumPy.
Keep in mind that the choice of activation function depends on the problem at hand, and experimentation is crucial to find the most suitable activation function for your specific neural network architecture and dataset.