Three Ways to Solve Linear Regression, Compared

Python implementations of three ways of solving linear regression, with the results compared against sklearn and Machine Learning in Action.

Linear Regression

Linear regression is the simplest model, which makes it a good first comparison. Below, three implementations are run on the same data: the least-squares solution from Machine Learning in Action, sklearn's LinearRegression, and my own iterative solution based on stochastic gradient descent.

"""
import numpy sklearn and matplotlib
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model

1. Machine Learning in Action: least squares in matrix form
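
For reference, the `standRegres` function below implements the closed-form (normal equation) solution, which minimizes the squared error $\lVert Xw - y \rVert^2$:

$$
\hat{w} = (X^\top X)^{-1} X^\top y
$$

valid whenever $X^\top X$ is invertible, which the code checks through its determinant.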

```python
'''
Created on Jan 8, 2011
@author: Peter
'''
from numpy import *

def loadDataSet(fileName):
    # each row: tab-separated feature values followed by the label
    numFeat = len(open(fileName).readline().split('\t')) - 1
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = []
        curLine = line.strip().split('\t')
        for i in range(numFeat):
            lineArr.append(float(curLine[i]))
        dataMat.append(lineArr)
        labelMat.append(float(curLine[-1]))
    return dataMat, labelMat

def standRegres(xArr, yArr):
    # closed-form least squares: ws = (X^T X)^-1 X^T y
    xMat = mat(xArr); yMat = mat(yArr).T
    xTx = xMat.T * xMat
    if linalg.det(xTx) == 0.0:
        print("This matrix is singular, cannot do inverse")
        return
    ws = xTx.I * (xMat.T * yMat)
    return ws

xArr, yArr = loadDataSet('./machinelearninginaction/Ch08/ex0.txt')
ws = standRegres(xArr, yArr)
ws = ws.reshape(2, 1)
x_axis = np.array(xArr)[:, 1]
y_true_value = np.array(yArr)
y_pre = (np.array(xArr) @ np.asarray(ws)).flatten()  # fitted values X @ ws
plt.scatter(x_axis, y_true_value, color='black')
plt.plot(x_axis, y_pre, color='blue', linewidth=3)
plt.show()
```

*(figure: scatter of the training data with the least-squares regression line)*

2. Calling sklearn's LinearRegression, which gives the same result as Machine Learning in Action

```python
regr = linear_model.LinearRegression()
regr.fit(xArr, yArr)
print(regr.coef_)         # fitted coefficients
print(regr.get_params())  # model hyperparameters
x_axis = np.array(xArr)[:, 1]
y_true_value = np.array(yArr)
y_pre = regr.predict(xArr)
plt.scatter(x_axis, y_true_value, color='black')
plt.plot(x_axis, y_pre, color='blue', linewidth=3)
plt.show()
```

*(figure: the same data with sklearn's fitted regression line)*
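
As a quick numerical cross-check (a minimal sketch, not part of the original post; it assumes `ws` from section 1 and `regr` from above are still in scope), the two sets of weights can be compared directly. Since the first column of `xArr` is the constant 1.0, sklearn's intercept plus the coefficient on that column corresponds to `ws[0]`:

```python
# Sketch: compare the closed-form weights with sklearn's fit.
w_sklearn = np.array([regr.intercept_ + regr.coef_[0], regr.coef_[1]])
print("normal equation:", np.asarray(ws).flatten())
print("sklearn        :", w_sklearn)
print("match:", np.allclose(np.asarray(ws).flatten(), w_sklearn))
```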

3. Solving linear regression iteratively (stochastic gradient descent)

$$
\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}
$$
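
This update comes from the squared-error loss on a single sample; writing $h_\theta(x^{(i)}) = \theta^\top x^{(i)}$, the per-sample loss and its gradient are

$$
J^{(i)}(\theta) = \frac{1}{2}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2, \qquad \frac{\partial J^{(i)}}{\partial \theta_j} = \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}
$$

and stepping against this gradient with learning rate $\alpha$ gives exactly the rule above.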

```python
def linear_regression(feature_arr, label_arr):
    """
    use SGD to optimize linear regression
    :param feature_arr: the train features
    :param label_arr: the train labels
    """
    feature_num = len(feature_arr[0])
    feature_arr = np.array(feature_arr)
    para = np.random.rand(feature_num, 1)  # random initial weights
    learning_rate = 0.01
    loss = 0
    iteration = 0
    threshold = float('inf')  # change in total loss between epochs
    # stop after 1000 epochs, or once the loss change drops below 1e-8
    while iteration < 1000 and threshold > 1e-8:
        last_loss = loss
        iteration += 1
        loss = 0
        for x, y in zip(feature_arr, label_arr):
            x = x.reshape(1, len(x))
            loss += ((np.dot(x, para) - y) ** 2).item()
            # SGD update: theta += alpha * (y - h(x)) * x
            para += learning_rate * (y - np.dot(x, para)) * x.reshape(feature_num, 1)
        threshold = abs(loss - last_loss)
    return para

def predict(feature_arr, para_vector):
    # fitted value for each sample: dot product of its features and the weights
    predict_value = np.sum(np.array(feature_arr) * para_vector, axis=1)
    return predict_value

weight = linear_regression(xArr, yArr)
weight = weight.reshape(2,)
x_axis = np.array(xArr)[:, 1]
y_true_value = np.array(yArr)
y_pre = predict(xArr, weight)
plt.scatter(x_axis, y_true_value, color='black')
plt.plot(x_axis, y_pre, color='blue', linewidth=3)
plt.show()
```
*(figure: the same data with the SGD regression line)*
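
Because the weights are initialized randomly, each run ends at slightly different parameters. A small usage sketch (not in the original code; the seed value is an arbitrary, hypothetical choice) makes the run reproducible and compares it against the closed-form solution:

```python
# Sketch: fix the seed for reproducibility, then compare against
# the normal-equation weights ws from section 1.
np.random.seed(42)  # arbitrary seed
weight_sgd = linear_regression(xArr, yArr).reshape(2,)
print("SGD weights    :", weight_sgd)
print("normal equation:", np.asarray(ws).flatten())
```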

As the example above shows, stochastic gradient descent does not yield the single, stable solution that least squares does: the result depends on the random weight initialization. Its overall trend, however, agrees with the closed-form fit.
