ml-402: Multivariate Linear Regression in Octave
Hi! I hope you have been following the recent posts in the ML series, where we studied the various aspects of the linear regression ML algorithm in detail and also built some intuition about how it works. We have also formulated all the mathematical formulae, but we have not yet seen how to implement them in Octave, so that is what we are going to do today. Let's start with a small recap (of only the functions) of the multivariate linear regression ML algorithm:
The hypothesis function is defined as: h(x) = Θ0 + Θ1x1 + Θ2x2 + … + Θnxn, or equivalently in vector form, h(x) = Θᵀx.
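As a preview of the implementation, here is a minimal, vectorized Octave sketch of these functions. The variable names and the toy numbers are illustrative only, not taken from the original post:

% Toy data: 4 training examples, 2 features; the leading column of ones
% in X corresponds to the intercept parameter Θ0.
X = [1 2104 3; 1 1600 3; 1 2400 3; 1 1416 2];  % m x (n+1) design matrix
y = [400; 330; 369; 232];                      % m x 1 vector of targets
theta = zeros(3, 1);                           % (n+1) x 1 parameter vector
alpha = 0.01;                                  % learning rate
m = length(y);                                 % number of training examples

h = X * theta;                                 % hypothesis h(x) = Θᵀx for all m examples at once
J = (1 / (2 * m)) * sum((h - y) .^ 2);         % cost function J(Θ)
theta = theta - (alpha / m) * (X' * (h - y));  % one simultaneous gradient descent update

Note how X * theta evaluates the hypothesis for every training example in a single matrix multiplication; this kind of vectorization is exactly what makes Octave convenient for implementing these algorithms.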
Continue reading →

ml-401: Multivariate Linear Regression
Hi! In the last post, we completed learning our very first machine learning algorithm, viz. "linear regression". As you might remember from here, this is a type of "supervised learning" ML algorithm. However, during our study we had only 1 feature (or input variable), so what we learnt was really "univariate linear regression". Today we will take a look at how to handle a more realistic situation, that is, when we have more than one feature. Such an algorithm is called the "multivariate linear regression" ML algorithm.
To recap, for univariate linear regression, the steps were:
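To preview how the multivariate case generalizes those steps, the hypothesis function changes as follows (written in the notation used elsewhere in this series):

h(x) = Θ0 + Θ1x                             (univariate: a single feature x)
h(x) = Θ0 + Θ1x1 + Θ2x2 + … + Θnxn = Θᵀx    (multivariate: n features x1 … xn)

The cost function and the gradient descent update keep the same shape; they simply range over more parameters.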
Continue reading →

ml-305: Univariate Linear Regression
In the last posts here and here, we looked in depth at the "gradient descent" minimization algorithm. Today we will complete the puzzle by fitting that last piece into the jigsaw of the "univariate linear regression" ML algorithm that we have been studying in this 30x series.
To do that, let's recap what we have learnt so far:
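For reference, here are the pieces assembled so far, written in the series' notation; x(i) and y(i) denote the i-th training example, m the number of examples, and J is the standard squared-error cost for linear regression:

hypothesis:       h(x) = Θ0 + Θ1x
cost function:    J(Θ0, Θ1) = (1/2m) Σ(i=1..m) (h(x(i)) - y(i))²
gradient descent: repeat until convergence { Θj := Θj - α ∂/∂Θj J(Θ0, Θ1) (for j = 0, 1) }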
Continue reading →

ml-304: Gradient Descent and Learning Rate
Hi. In the last post, we learnt about the "gradient descent" algorithm that is used to minimize our cost function "J(Θ0, Θ1)". The gradient descent algorithm was defined as:
repeat until convergence {
Θj := Θj - α ∂/∂Θj J(Θ0, Θ1) (simultaneously, for j = 0, 1)
}
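For the squared-error cost function, carrying out these partial derivatives (a standard derivation, shown here for reference) gives the concrete update rules:

Θ0 := Θ0 - α (1/m) Σ(i=1..m) (h(x(i)) - y(i))
Θ1 := Θ1 - α (1/m) Σ(i=1..m) (h(x(i)) - y(i)) · x(i)

The learning rate α controls the step size: too small and convergence is slow; too large and the updates can overshoot the minimum and even diverge.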
Continue reading →
ml-303: Gradient Descent
Hi, welcome back! I hope you have been following our machine learning series over the last few posts. If you have, then you know that so far we have developed the basic framework for a particular algorithm called "univariate linear regression", and the only step pending is to minimize our cost function "J(Θ0, Θ1)" so that we can find the optimal values of Θ0 and Θ1. Once we have these values, we can substitute them into the definition of our hypothesis function "h(x)", which will let us predict values for our target variable "y".
I hope things are clear up to this point. If they are, then let's move on to the last and final step of the algorithm, which is to minimize our cost function "J(Θ0, Θ1)". The general outline of the steps we will follow to achieve this is:
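In symbols, and assuming the standard squared-error definition of J used for linear regression, the objective these steps work toward is:

minimize over (Θ0, Θ1):  J(Θ0, Θ1) = (1/2m) Σ(i=1..m) (h(x(i)) - y(i))²,  where h(x) = Θ0 + Θ1x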
Continue reading →