# Gradient Descent: Determining Minima Since 1847

A shallow study of an old algorithm used to minimize the error in a function

**Gradient descent** is an iterative algorithm for finding a (local) minimum of a differentiable function. In machine learning, it is commonly used to minimize the cost function in linear regression.

For a uni-variate function f(x), each step updates x by moving against the slope:

x := x − α · f′(x)

The new value of x is computed iteratively until x stabilizes, and α determines the rate of convergence.
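This update loop can be sketched in a few lines of Python. The stopping tolerance, the starting point, and the example function f(x) = (x − 3)² are my own illustrative choices, not from the article:

```python
def gradient_descent(df, x0, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x := x - alpha * f'(x) until the update becomes negligible."""
    x = x0
    for _ in range(max_iter):
        step = alpha * df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = (x - 3)**2, so f'(x) = 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Note that the routine only needs the derivative f′, not f itself: gradient descent never evaluates the function it is minimizing.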

Now let's see how the algorithm finds the minimum of a function and what we gain from it.

Suppose a function y = f(x), represented graphically as a smooth curve. Let's now draw a tangent at any point on the curve.

Take two points (x, y) and (x′, y′) on the curve. As the two points move closer and closer together, the difference between them tends to zero, and the slope of the line through them becomes the derivative:

f′(x) = lim (x′ → x) of (y′ − y) / (x′ − x)

The slope at any point is either negative, positive, or zero, while α, the learning rate, is always positive.
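This limit can be approximated numerically with a small finite difference. A minimal sketch, where the test function f(x) = x³ and the step size h are my own assumptions:

```python
def slope(f, x, h=1e-6):
    """Approximate f'(x) with a symmetric finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x**3 the exact derivative is 3 * x**2, so f'(2) should be ~12.
approx = slope(lambda x: x ** 3, 2.0)
```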

The iterative update,

x := x − α · f′(x)

chooses the new value for x, such that it:

- lies to the left of the old value of x when the slope is positive,
- lies to the right when the slope is negative,
- and remains the same when the slope is zero.

With each iteration, this process moves us gradually toward the minimum of the function, at a pace set by the learning rate, continuously refining the value of x at which y is minimal.
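A short trace makes the direction of each step visible. Here f(x) = x², so f′(x) = 2x (my own example), and we start once on each side of the minimum at 0:

```python
def trace(x, alpha=0.25, steps=5):
    """Record the path taken by x := x - alpha * f'(x) for f(x) = x**2."""
    path = [x]
    for _ in range(steps):
        x = x - alpha * 2 * x   # positive slope -> move left; negative -> move right
        path.append(x)
    return path

left_start = trace(-4.0)    # slope is negative here, so x increases toward 0
right_start = trace(4.0)    # slope is positive here, so x decreases toward 0
```

Both paths shrink toward 0 by the same factor each step, mirroring each other across the minimum.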

Let's see how we can estimate a linear model that predicts future values from scattered data.

For that, suppose a scatter plot of data with any number of variables.

Let the red line be h(x). If h(x) is the function that best fits the data points, then the error it makes when estimating the values of y must be minimal.

The error is the distance between the actual value y and the value estimated by our hypothesis function h(x).

To remove the sign of the error and penalize larger deviations more heavily, we square the error term. The total sum of squared errors over the m data points is:

Σ (h(xᵢ) − yᵢ)²

The cost function can be generalized by averaging over the m training examples:

J = (1 / 2m) · Σ (h(xᵢ) − yᵢ)²
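This cost is straightforward to compute. A sketch assuming a single-feature linear hypothesis h(x) = θ0 + θ1·x and the 1/2m averaging convention (conventions vary; the data below is my own example):

```python
def cost(theta0, theta1, xs, ys):
    """J = (1 / 2m) * sum of squared errors for h(x) = theta0 + theta1 * x."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
perfect = cost(0.0, 2.0, xs, ys)   # h(x) = 2x fits these points exactly
```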

Now we need to determine the parameter values θ0, θ1, ... that minimize the cost function, and we already know how to do that (by the gradient descent algorithm, of course).

repeat until convergence:

θj := θj − α · ∂J/∂θj (updating all parameters simultaneously)

For the linear hypothesis h(x) = θ0 + θ1·x, the partial derivatives work out to:

θ0 := θ0 − (α/m) · Σ (h(xᵢ) − yᵢ)

θ1 := θ1 − (α/m) · Σ (h(xᵢ) − yᵢ) · xᵢ

We should be careful when selecting the value of α. If α is too large, the algorithm may overshoot the minimum and fail to converge; in some cases it may even diverge. If α is too small, convergence becomes very slow.
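This behavior is easy to demonstrate on f(x) = x² (my own example, with f′(x) = 2x): the update becomes x := x · (1 − 2α), so any α with |1 − 2α| > 1 makes each step overshoot further than the last.

```python
# Effect of the learning rate on f(x) = x**2, where the update is
# x := x - alpha * 2 * x, i.e. x := x * (1 - 2 * alpha).
def run(alpha, x=1.0, steps=50):
    for _ in range(steps):
        x = x - alpha * 2 * x
    return abs(x)

small = run(0.1)    # |1 - 0.2| < 1: shrinks toward the minimum at 0
large = run(1.1)    # |1 - 2.2| > 1: each step overshoots further; diverges
```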