Imagine you have a big hill and you want to reach the bottom. Gradient descent is like finding the best way to go down the hill step by step.
Now, let’s talk about the different types of gradient descent:
⭕ Batch Gradient Descent: Imagine you have a group of friends, and you all want to go down the hill together. With batch gradient descent, you look at the entire hill, work out how steep it is in every direction, and then take a step in the steepest downhill direction. This means you move in the direction that takes you down the hill the fastest.
⭕ Stochastic Gradient Descent: Now, let’s say you have a group of friends, but each friend can only see a small part of the hill. With stochastic gradient descent, each friend takes a look at their part of the hill, calculates the slope, and takes a step in the steepest direction. This process is repeated many times, and each friend takes turns updating their position based on their own part of the hill. It’s like everyone is exploring their own small area of the hill and helping each other find the fastest way down.
⭕ Mini-Batch Gradient Descent: Imagine you have a big group of friends, and you divide yourselves into smaller teams. Each team looks at a specific part of the hill and calculates the slope. Then, each team takes a step together in the steepest direction for their part of the hill. This process is repeated, and each team updates their position based on their own part of the hill. It’s like working together in smaller groups to explore and find the fastest way down the hill.
So, to summarize:
Batch gradient descent looks at the entire hill to find the steepest direction.
Stochastic gradient descent looks at a small part of the hill and updates the position individually.
Mini-batch gradient descent divides into teams, with each team looking at a specific part of the hill and updating the position together.
All these methods help us find the best way down the hill and reach the bottom as quickly as possible, just like how they help algorithms find the best solution in mathematical problems or optimize machine learning models.
Let’s dive into a more technical explanation of these concepts.
Batch Gradient Descent:
Batch gradient descent is an optimization algorithm widely used in machine learning and numerical optimization. It works by computing the gradient (the slope) of the objective function with respect to all of the model’s parameters using the entire training dataset. The goal is to minimize the error, or loss, function.
Here’s how it works:
➤ Calculate the gradient of the loss function with respect to each parameter by considering all the training examples.
➤ Update the parameters in the direction of the negative gradient, taking a step whose size is determined by the gradient magnitude and a learning rate.
➤ Repeat the process iteratively until convergence or a maximum number of iterations.
Batch gradient descent provides accurate gradient estimates since it considers the entire training dataset. However, it can be computationally expensive when the dataset is large, as it requires processing all examples for each parameter update.
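To make this concrete, here is a minimal sketch of batch gradient descent in Python, assuming a simple linear model trained with a mean-squared-error loss; the synthetic data, learning rate, and iteration count are illustrative choices rather than anything prescribed above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 training examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)    # model parameters
lr = 0.1           # learning rate
n = len(X)

for epoch in range(200):
    # Gradient of the mean-squared-error loss over the ENTIRE dataset.
    grad = (2.0 / n) * X.T @ (X @ w - y)
    # Step in the direction of the negative gradient.
    w -= lr * grad

print(w)  # should end up close to true_w
```

Note that every single update touches all 100 examples, which is exactly what makes the method accurate but expensive on large datasets.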
Stochastic Gradient Descent:
Stochastic gradient descent (SGD) is another optimization algorithm commonly used in machine learning. Unlike batch gradient descent, it updates the model’s parameters using the loss computed on a single training example at a time.
Here’s how it works:
➤ Randomly shuffle the training dataset.
➤ For each training example in the shuffled dataset:
>> Calculate the gradient of the loss function with respect to the parameters using only that single example.
>> Update the parameters in the direction of the negative gradient with a learning rate.
➤ Repeat the process iteratively for a certain number of epochs or until convergence.
SGD is computationally cheap per update since it processes one example at a time. However, estimating the gradient from a single randomly sampled example introduces noise. The noisy updates can help escape shallow local minima, but they also make convergence less steady near the optimum.
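Here is the same illustrative linear-regression setup trained with stochastic gradient descent; again, the data and hyperparameters are assumptions made for the sketch. The key difference from the previous snippet is that each parameter update uses exactly one example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 training examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)    # model parameters
lr = 0.01          # learning rate

for epoch in range(20):
    # Shuffle the training set at the start of every epoch.
    for i in rng.permutation(len(X)):
        x_i, y_i = X[i], y[i]
        # Gradient of the squared error on a SINGLE example.
        grad = 2.0 * (x_i @ w - y_i) * x_i
        # Noisy but cheap parameter update.
        w -= lr * grad

print(w)  # close to true_w, with some noise from the single-example updates
```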
Mini-Batch Gradient Descent:
Mini-batch gradient descent is a compromise between batch gradient descent and stochastic gradient descent. It updates the parameters using a small batch of training examples rather than the whole dataset or just one example.
Here’s how it works:
➤ Randomly shuffle the training dataset.
➤ Divide the shuffled dataset into multiple equally-sized mini-batches.
➤ For each mini-batch:
>> Calculate the gradient of the loss function with respect to the parameters using the examples in the mini-batch.
>> Update the parameters in the direction of the negative gradient with a learning rate.
➤ Repeat the process iteratively for a certain number of epochs or until convergence.
Mini-batch gradient descent provides a balance between accuracy and computational efficiency. By using a small batch of examples, it approximates the gradient with less noise than stochastic gradient descent, and it can exploit vectorized, parallel hardware, so it typically makes faster progress per unit of computation than batch gradient descent.
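And here is a corresponding mini-batch sketch on the same assumed setup; the batch size of 16 is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 training examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)    # model parameters
lr = 0.05          # learning rate
batch_size = 16    # illustrative choice

for epoch in range(50):
    order = rng.permutation(len(X))        # shuffle, then slice into mini-batches
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        X_b, y_b = X[idx], y[idx]
        # Gradient of the MSE loss over just this mini-batch.
        grad = (2.0 / len(idx)) * X_b.T @ (X_b @ w - y_b)
        w -= lr * grad

print(w)  # close to true_w
```

Each epoch now performs roughly len(X) / batch_size updates, each of which can be vectorized over the whole mini-batch.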
In summary, batch gradient descent considers the entire dataset, stochastic gradient descent processes one example at a time, and mini-batch gradient descent updates the parameters using small batches of examples. These algorithms are essential for optimizing models in machine learning and finding the optimal set of parameters for a given problem.
Here’s a simplified explanation of how the gradient descent algorithm works: we select samples from the training dataset, feed them into the model, and measure the gap between our predictions and the expected outcomes. This “error” is then used to compute the changes to the model weights that will improve the results.
A key decision in this process involves the number of samples used per iteration to feed the model. There are three choices we can make:
- Use a single sample of data.
- Use all available data.
- Use a portion of the data.
When we use a single data sample per iteration, we call it “Stochastic Gradient Descent.” Essentially, the algorithm uses one sample to compute each update.
“Batch Gradient Descent” refers to using the entire dataset in a single iteration: the algorithm processes every sample before computing each update.
Lastly, “Mini-Batch Gradient Descent” involves using a portion of the data, more than a single sample but less than the entire dataset. It operates just like Batch Gradient Descent; the only difference is the number of samples used per update.
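To underline that point, here is a sketch of a single training loop, written for the same illustrative linear-regression setup as above, where the batch_size argument alone selects the variant: 1 gives stochastic gradient descent, len(X) gives batch gradient descent, and anything in between gives mini-batch gradient descent. The data and hyperparameters are, as before, assumptions made for the example.

```python
import numpy as np

def gradient_descent(X, y, batch_size, lr=0.05, epochs=100, seed=0):
    """One training loop; only batch_size distinguishes the three variants."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            X_b, y_b = X[idx], y[idx]
            # MSE gradient over however many samples this update uses.
            grad = (2.0 / len(idx)) * X_b.T @ (X_b @ w - y_b)
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

w_stochastic = gradient_descent(X, y, batch_size=1, lr=0.01)   # one sample per update
w_batch      = gradient_descent(X, y, batch_size=len(X))       # all samples per update
w_mini       = gradient_descent(X, y, batch_size=16)           # a portion per update
```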