ml-406: Normal Equation -- part - 2
Hello there! Welcome back. In the last post we looked at another algorithm, viz. the "normal equation", to solve the linear regression ML problem. However, that discussion was perhaps too theoretical in nature. Today we will take a concrete example and see how to implement the solution for it in Octave. Are you ready? If so, let's move on…
The concrete example for today is as follows (including the pivot feature x0):
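The excerpt cuts off before the post's actual data table, but the implementation it builds up to can be sketched in Octave as follows. The numbers below are hypothetical stand-ins (not the post's data); `X` is the design matrix whose first column is the pivot feature x0 = 1, and `y` is the target vector:

```octave
% Hypothetical data (NOT the post's actual table): m = 4 training
% examples with one feature, plus the pivot feature x0 = 1 in column 1.
X = [1 2104; 1 1416; 1 1534; 1 852];   % design matrix including x0
y = [460; 232; 315; 178];              % target values

% Normal equation: theta = (X' * X)^-1 * X' * y
% pinv is used instead of inv so the computation still works when
% X' * X happens to be non-invertible (e.g. redundant features).
theta = pinv(X' * X) * X' * y;
```

Note the absence of any loop or learning rate: unlike gradient descent, this produces the parameters in a single computation.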
Continue reading →

ml-405: Normal Equation -- part - 1
Hello! Today we will learn an amazing ML algorithm that allows us to find the hypothesis function for a regression problem in just one step. Sounds exciting? Then let's move forward with full force!
As we already know from here, the steps to solve a regression problem are:
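The steps themselves are behind the link, but the "one step" the post is referring to is the closed-form normal equation. As a sketch, with `X` the design matrix (including the pivot feature x0) and `y` the target vector:

```octave
theta = pinv(X' * X) * X' * y;   % closed-form solution, no iterations
```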
Continue reading →

ml-404: Feature Choice and Polynomial Regression
Hi! It's good to see you once more. Today we will learn a particular technique for choosing features that can improve the performance (in terms of prediction accuracy) of the regression algorithm, and then look at the special type of regression that results from it.
In one of the initial posts when we started learning the linear regression ML algorithm, we had used the following data
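The post's original data is behind the link, but the core idea — deriving new polynomial features from an existing one — can be sketched in Octave. The feature values below are hypothetical:

```octave
% Hypothetical single-feature data (not the post's actual table).
% From one feature x we derive the polynomial features x^2 and x^3,
% turning a univariate problem into a multivariate one.
x = [1; 2; 3; 4; 5];                  % original feature
m = length(x);
X = [ones(m, 1), x, x.^2, x.^3];      % pivot x0, x, x squared, x cubed
```

The resulting matrix `X` can then be fed to the same multivariate linear regression machinery from the earlier posts.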
Continue reading →

ml-403: Feature Scaling
Hello. Welcome back! I hope that you are enjoying this series on machine learning, especially the last two posts here and here, where we implemented the "multivariate linear regression" ML algorithm and even improved its performance. In this post we will look at one more technique that should be applied to the multivariate version of the linear regression ML algorithm to improve its performance, viz. "feature scaling". The reason I say this technique should be used for the "multivariate" version is that it solves a certain problem from which the "univariate" version does not suffer.
Let's take a concrete example to throw more light on what I am trying to say. For this, let's consider this data, a small snippet of which is shown below:
Continue reading →

ml-203: Vectorization in Octave
I hope you have noticed that we have jumped back to the 20x series from the last post, which was in the 30x series, and wondered why we are moving backwards ;). Well, that is because in the last 20x post I had said that there is still one thing left about Octave that we have not studied yet, but which we would study after completing one ML algorithm. So let's get onto that.
But before we move ahead, I urge you to take a look at our final program for the "multivariate linear regression" ML algorithm implementation in the previous post, because today's discussion is all about how we can improve its performance by making use of "vectorization" in Octave.
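As a taste of what vectorization means, here is a sketch contrasting the two styles for computing the hypothesis on all training examples. The names are assumptions: `X` is an m-by-(n+1) design matrix and `theta` an (n+1)-by-1 parameter vector:

```octave
% Unvectorized: compute the prediction one training example at a time.
h = zeros(m, 1);
for i = 1:m
  h(i) = X(i, :) * theta;   % inner product for example i
end

% Vectorized: one matrix-vector product replaces the whole loop.
h = X * theta;
```

Both produce the same vector of predictions, but the vectorized form hands the work to Octave's optimized linear-algebra routines instead of an interpreted loop.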
Continue reading →