
Polynomial regression is essentially a linear regression algorithm that models the relationship between the independent variable (x) and the dependent variable (y) as an nth-degree polynomial, fitting a curve through the data points.

The polynomial equation is as follows:

y = b0 + b1x + b2x^2 + b3x^3 + … + bnx^n

From the equation, it is clear that adding higher-degree terms converts a simple linear equation into a polynomial equation. Note that the model is still linear in its coefficients b0 … bn, which is why a linear model can be used to fit it.
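As a quick sketch, the equation above can be evaluated term by term in plain NumPy-free Python; the coefficients here are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical coefficients b0..b3 for y = b0 + b1*x + b2*x^2 + b3*x^3
b = [2.0, 1.0, 0.5, 0.1]
x = 3.0

# Sum each term bi * x^i of the polynomial
y = sum(bi * x**i for i, bi in enumerate(b))
print(y)  # 2.0 + 3.0 + 4.5 + 2.7 = 12.2
```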

The dataset used to train a polynomial regression algorithm is non-linear. In this regression algorithm, the original features of the dataset are converted into polynomial features of the required degree, and the result is then modeled with a linear model. Polynomial regression is mainly needed whenever the data points are arranged in a non-linear fashion.
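The feature-conversion step can be seen directly with scikit-learn's `PolynomialFeatures`: each original value x is expanded into the columns [1, x, x^2, x^3] (the sample values below are illustrative):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two sample values of a single feature x (illustrative data)
X = np.array([[2.0], [3.0]])

# Expand x into the polynomial features [1, x, x^2, x^3]
poly = PolynomialFeatures(degree=3)
X_poly = poly.fit_transform(X)
print(X_poly)
# [[ 1.  2.  4.  8.]
#  [ 1.  3.  9. 27.]]
```

A plain linear model fitted on these expanded columns is exactly a polynomial regression in the original x.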

If we plot such a non-linearly arranged dataset and fit a linear regression model to it, the straight line hardly covers any data points. A curve, on the other hand, is suitable to cover most of the data points, and that is what the polynomial model produces. Hence, we can say: *if the dataset is arranged in a non-linear pattern, then we should apply the polynomial model instead of a linear model.*
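This comparison can be checked numerically on synthetic non-linear data (a noisy quadratic, used here purely for illustration): the straight line underfits, while a degree-2 polynomial follows the curve, which shows up in the R² scores:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic non-linear (quadratic) data — illustrative only
rng = np.random.RandomState(0)
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 + rng.normal(scale=0.2, size=50)

# Straight line: underfits the curved data
lin = LinearRegression().fit(X, y)

# Polynomial features + linear model: follows the curve
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
poly_lin = LinearRegression().fit(X_poly, y)

print(lin.score(X, y))            # low R^2 — line misses the curve
print(poly_lin.score(X_poly, y))  # high R^2 — curve fits well
```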

Steps for performing Polynomial Regression:

1. Performing the Data Pre-processing

2. Building a Linear Regression model and fitting it to the dataset

3. Building a Polynomial Regression model and fitting it to the dataset

4. Visualizing the results of the Linear Regression and Polynomial Regression models.

5. Predicting the output.

Python Code for performing Polynomial Regression on the dataset:

```python
# importing all the required libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# importing the dataset
df = pd.read_csv('data.csv')
df.head()
```

```python
df.tail()
```

```python
X = df.iloc[:, 1:2].values
y = df.iloc[:, 2].values

# Fitting linear regression to the dataset
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X, y)

# Fitting polynomial regression to the dataset
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)
lin2 = LinearRegression()
lin2.fit(X_poly, y)

# Visualising the Linear Regression
plt.scatter(X, y, color='blue')
plt.plot(X, lr.predict(X), color='red')
plt.title('Linear Regression')
plt.xlabel('Emp_ID')
plt.ylabel('Salary_Per_Day')
plt.show()
```

```python
# Visualising the Polynomial Regression
plt.scatter(X, y, color='blue')
plt.plot(X, lin2.predict(poly.transform(X)), color='red')
plt.title('Polynomial Regression')
plt.xlabel('Emp_ID')
plt.ylabel('Salary_Per_Day')
plt.show()
```

```python
# Predicting a new result through Polynomial Regression
pred2 = 105
pred2array = np.array([[pred2]])
lin2.predict(poly.transform(pred2array))
```
