[Systematic Data Science] Machine Learning Algorithms # Watermelon Book Study Notes [12] Ensemble Learning in Practice

This post walks through the program listings of Machine Learning in Action, Chapter 7: improving classification performance with the AdaBoost meta-algorithm. All code is Python 3.


AdaBoost
Pros: low generalization error, easy to code, works with most classifiers, no parameters to tune.
Cons: sensitive to outliers.
Works with: numeric values and nominal values.

Boosting comes in many variants; here we focus only on the most popular one, AdaBoost.


When building the AdaBoost code, we first use a simple dataset to make sure the implementation works end to end. The dataset is as follows:

from numpy import *

def loadSimpData():
    # a tiny 2-D dataset: five points, two classes (+1 / -1)
    datMat = matrix([[ 1. ,  2.1],
        [ 2. ,  1.1],
        [ 1.3,  1. ],
        [ 1. ,  1. ],
        [ 2. ,  1. ]])
    classLabels = [1.0, 1.0, -1.0, -1.0, 1.0]
    return datMat,classLabels

At the Python prompt, load the dataset:

>>> import adaboost
>>> datMat, classLabels=adaboost.loadSimpData()
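Before building anything, it helps to look at the data. Below is a minimal plotting sketch (my own addition, assuming matplotlib is installed); notice that no single horizontal or vertical threshold separates the two classes, which is why we will need to combine several stumps:

import matplotlib.pyplot as plt
from numpy import array

arr = array(datMat); labels = array(classLabels)
# plus markers for class +1, circles for class -1
plt.scatter(arr[labels == 1.0, 0], arr[labels == 1.0, 1], marker='+', s=80)
plt.scatter(arr[labels == -1.0, 0], arr[labels == -1.0, 1], marker='o', s=80)
plt.xlabel('x1'); plt.ylabel('x2')
plt.show()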

We first give the pseudocode for buildStump():

    Set the minimum error minError to +∞
    For every feature in the dataset:
        For every step:
            For each inequality:
                Build a decision stump and test it on the weighted dataset
                If the error is lower than minError: keep this stump as the best stump
    Return the best stump

Listing 7-1 Decision stump generating function

'''
Created on Sep 20, 2018

@author: yufei
Adaboost is short for Adaptive Boosting
'''

"""
测试是否有某个值小于或大于我们正在测试的阈值
"""
def stumpClassify(dataMatrix,dimen,threshVal,threshIneq):#just classify the data
    retArray = ones((shape(dataMatrix)[0],1))
    if threshIneq == 'lt':
        retArray[dataMatrix[:,dimen] <= threshVal] = -1.0
    else:
        retArray[dataMatrix[:,dimen] > threshVal] = -1.0
    return retArray
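As a quick sanity check (my own example, not from the book), you can call stumpClassify() directly. With threshold 1.5 and the 'lt' rule, every value at or below 1.5 in column 0 is labeled -1:

>>> from numpy import *
>>> adaboost.stumpClassify(mat([[1.0], [1.5], [2.0]]), 0, 1.5, 'lt')
array([[-1.],
       [-1.],
       [ 1.]])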


"""
在一个加权数据集中循环
buildStump()将会遍历stumpClassify()函数所有的可能输入值
并找到具有最低错误率的单层决策树
"""
def buildStump(dataArr,classLabels,D):
    dataMatrix = mat(dataArr); labelMat = mat(classLabels).T
    m,n = shape(dataMatrix)
    # numSteps controls how finely we sweep over each feature's range of values
    numSteps = 10.0
    # empty dict for the best decision stump found under the weight vector D
    bestStump = {}; bestClasEst = mat(zeros((m,1)))
    # initialize to +infinity; used to track the lowest error seen so far
    minError = inf

    # outer loop: iterate over every feature of the dataset
    for i in range(n):#loop over all dimensions
        rangeMin = dataMatrix[:,i].min(); rangeMax = dataMatrix[:,i].max()
        # compute the step size for this feature
        stepSize = (rangeMax-rangeMin)/numSteps
        # middle loop: sweep the threshold across the feature's range
        for j in range(-1,int(numSteps)+1):#loop over all range in current dimension
            # inner loop: toggle the inequality between less-than and greater-than
            for inequal in ['lt', 'gt']: #go over less than and greater than
                threshVal = (rangeMin + float(j) * stepSize)
                # call stumpClassify() to get the predicted classes
                predictedVals = stumpClassify(dataMatrix,i,threshVal,inequal)#call stump classify with i, j, lessThan
                errArr = mat(ones((m,1)))
                errArr[predictedVals == labelMat] = 0
                weightedError = D.T*errArr  #calc total error multiplied by D
                # print("split: dim %d, thresh %.2f, thresh ineqal: %s, the weighted error is %.3f" % (i, threshVal, inequal, weightedError))

                # compare the current error against the lowest error so far
                if weightedError < minError:
                    minError = weightedError
                    bestClasEst = predictedVals.copy()
                    bestStump['dim'] = i
                    bestStump['thresh'] = threshVal
                    bestStump['ineq'] = inequal
    return bestStump,minError,bestClasEst
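What buildStump() minimizes is the weighted error: since the weights in D sum to 1, it is the D-weighted fraction of misclassified points,

$$\varepsilon = \sum_{i=1}^{m} D_i \,\mathbb{1}\bigl[h(x_i) \neq y_i\bigr]$$

which is exactly what the line weightedError = D.T*errArr computes.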

To see it run in practice, execute the following at the Python prompt:

>>> D=mat(ones((5,1))/5)
>>> adaboost.buildStump(datMat, classLabels, D)
split: dim 0, thresh 0.90, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 0.90, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.00, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.00, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.10, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.10, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.20, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.20, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.30, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.30, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.40, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.40, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.50, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.50, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.60, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.60, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.70, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.70, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.80, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.80, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.90, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.90, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 2.00, thresh ineqal: lt, the weighted error is 0.600
split: dim 0, thresh 2.00, thresh ineqal: gt, the weighted error is 0.400
split: dim 1, thresh 0.89, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 0.89, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.00, thresh ineqal: lt, the weighted error is 0.200
split: dim 1, thresh 1.00, thresh ineqal: gt, the weighted error is 0.800
split: dim 1, thresh 1.11, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.11, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.22, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.22, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.33, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.33, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.44, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.44, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.55, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.55, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.66, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.66, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.77, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.77, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.88, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.88, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.99, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.99, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 2.10, thresh ineqal: lt, the weighted error is 0.600
split: dim 1, thresh 2.10, thresh ineqal: gt, the weighted error is 0.400
({'dim': 0, 'thresh': 1.3, 'ineq': 'lt'}, matrix([[0.2]]), array([[-1.],
       [ 1.],
       [-1.],
       [-1.],
       [ 1.]]))

The print statement in buildStump() (commented out in the listing above) produced this split-by-split trace; it can be left commented out once you understand how the function runs.

After comparing the current error against the lowest error found so far, if the current value is smaller, the stump is saved in the dictionary bestStump. The dictionary, the error, and the class estimates are all returned to the AdaBoost algorithm.

With the decision stump built, we now have a weak learner. Next, we combine multiple weak classifiers into the full AdaBoost implementation.


First, the pseudocode for the whole implementation:

    For each iteration:
        Find the best decision stump with buildStump()
        Add the best stump to the stump array
        Calculate alpha
        Calculate the new weight vector D
        Update the aggregate class estimate
        If the error rate equals 0.0, break out of the for loop
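Two standard AdaBoost formulas drive this loop, and the code below implements both. The vote weight of each stump is computed from its weighted error:

$$\alpha = \frac{1}{2}\ln\!\left(\frac{1-\varepsilon}{\varepsilon}\right)$$

and each example's weight is scaled up if the stump got it wrong, down if it got it right, then renormalized:

$$D_i^{(t+1)} = \frac{D_i^{(t)}\, e^{-\alpha\, y_i h(x_i)}}{\sum_j D_j^{(t)}\, e^{-\alpha\, y_j h(x_j)}}$$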

Listing 7-2 AdaBoost training with decision stumps

'''
Input parameters: dataset, class labels, number of iterations (specified by the user)
'''
def adaBoostTrainDS(dataArr,classLabels,numIt=40):
    weakClassArr = []
    m = shape(dataArr)[0]
    # D holds the weight of every data point, initialized to 1/m
    D = mat(ones((m,1))/m)   #init D to all equal
    # running sum of the class estimates for every data point
    aggClassEst = mat(zeros((m,1)))
    for i in range(numIt):
        # call buildStump() to build a decision stump for the current D
        bestStump,error,classEst = buildStump(dataArr,classLabels,D)#build Stump

        print ("D:",D.T)

        # alpha weights this stump's vote in the final classifier;
        # max(error, 1e-16) guards against division by zero when error is 0
        alpha = float(0.5*log((1.0-error)/max(error,1e-16)))#calc alpha, throw in max(error,eps) to account for error=0
        bestStump['alpha'] = alpha
        weakClassArr.append(bestStump)                  #store Stump Params in Array

        print("classEst: ",classEst.T)

        # compute D for the next iteration
        expon = multiply(-1*alpha*mat(classLabels).T,classEst) #exponent for D calc, getting messy
        D = multiply(D,exp(expon))                              #Calc New D for next iteration
        D = D/D.sum()
        #calc training error of all classifiers, if this is 0 quit for loop early (use break)
        # accumulate the alpha-weighted class estimates
        aggClassEst += alpha*classEst
        print("aggClassEst: ",aggClassEst.T)
        # apply sign() to turn the running estimate into a binary class
        aggErrors = multiply(sign(aggClassEst) != mat(classLabels).T,ones((m,1)))
        errorRate = aggErrors.sum()/m
        print ("total error: ",errorRate)
        # if the total error rate is 0, break out of the for loop
        if errorRate == 0.0: break
    return weakClassArr,aggClassEst

At the Python prompt, run the training and observe the output:

>>> classifierArray = adaboost.adaBoostTrainDS(datMat, classLabels, 9)
D: [[0.2 0.2 0.2 0.2 0.2]]
classEst:  [[-1.  1. -1. -1.  1.]]
aggClassEst:  [[-0.69314718  0.69314718 -0.69314718 -0.69314718  0.69314718]]
total error:  0.2
D: [[0.5   0.125 0.125 0.125 0.125]]
classEst:  [[ 1.  1. -1. -1. -1.]]
aggClassEst:  [[ 0.27980789  1.66610226 -1.66610226 -1.66610226 -0.27980789]]
total error:  0.2
D: [[0.28571429 0.07142857 0.07142857 0.07142857 0.5       ]]
classEst:  [[1. 1. 1. 1. 1.]]
aggClassEst:  [[ 1.17568763  2.56198199 -0.77022252 -0.77022252  0.61607184]]
total error:  0.0
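The jump of D from all 0.2 to [0.5, 0.125, ...] can be checked by hand (my own verification, not from the book): round one misclassifies exactly one point with error 0.2, so that point's weight is multiplied by e^alpha, the correct points' weights by e^(-alpha), and the result is renormalized:

>>> from numpy import exp, log
>>> alpha = 0.5 * log((1 - 0.2) / 0.2)   # ~0.693, matching the first aggClassEst entries
>>> wrong = 0.2 * exp(alpha)             # the one misclassified point
>>> right = 0.2 * exp(-alpha)            # each of the four correct points
>>> print(round(wrong/(wrong + 4*right), 3), round(right/(wrong + 4*right), 3))
0.5 0.125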

Finally, let's apply the trained classifier to new data points and observe the results.


Listing 7-3 AdaBoost classification function

'''
Take the weak classifiers produced during training and apply them to a concrete instance.

datToClass: one or more examples to classify
classifierArr: the array of trained weak classifiers

Returns the sign of aggClassEst: +1 if it is greater than 0, -1 if it is less than 0
'''
def adaClassify(datToClass,classifierArr):
    dataMatrix = mat(datToClass)#do stuff similar to last aggClassEst in adaBoostTrainDS
    m = shape(dataMatrix)[0]
    aggClassEst = mat(zeros((m,1)))
    # accumulate the alpha-weighted vote of every weak classifier
    for i in range(len(classifierArr)):
        classEst = stumpClassify(dataMatrix, classifierArr[i]['dim'],
                                 classifierArr[i]['thresh'],
                                 classifierArr[i]['ineq'])
        aggClassEst += classifierArr[i]['alpha']*classEst
        print (aggClassEst)
    return sign(aggClassEst)

At the Python prompt, train the classifier array:

>>> datArr, labelArr = adaboost.loadSimpData()
>>> classifierArr, aggClassEst = adaboost.adaBoostTrainDS(datArr, labelArr, 30)
D: [[0.2 0.2 0.2 0.2 0.2]]
classEst:  [[-1.  1. -1. -1.  1.]]
aggClassEst:  [[-0.69314718  0.69314718 -0.69314718 -0.69314718  0.69314718]]
total error:  0.2
D: [[0.5   0.125 0.125 0.125 0.125]]
classEst:  [[ 1.  1. -1. -1. -1.]]
aggClassEst:  [[ 0.27980789  1.66610226 -1.66610226 -1.66610226 -0.27980789]]
total error:  0.2
D: [[0.28571429 0.07142857 0.07142857 0.07142857 0.5       ]]
classEst:  [[1. 1. 1. 1. 1.]]
aggClassEst:  [[ 1.17568763  2.56198199 -0.77022252 -0.77022252  0.61607184]]
total error:  0.0

Now enter the following command to classify a point:

>>> adaboost.adaClassify([0,0], classifierArr)
[[-0.69314718]]
[[-1.66610226]]
matrix([[-1.]])

As the iterations proceed, the answer for the data point [0,0] gets stronger and stronger (the magnitude of its aggregate estimate grows). We can also classify several points at once:

>>> adaboost.adaClassify([[5,5],[0,0]], classifierArr)
[[ 0.69314718]
 [-0.69314718]]
[[ 1.66610226]
 [-1.66610226]]
matrix([[ 1.],
        [-1.]])

The answers for these two points likewise grow stronger as the iterations proceed.




Comments and corrections are welcome.
