【Objectives】
- Understand the principles of decision tree algorithms and master the overall algorithmic framework;
- Understand feature selection, tree generation, and tree pruning in decision tree learning;
- Be able to choose an appropriate decision tree algorithm for different data types;
- Be able to apply decision tree algorithms to solve practical problems for a given application scenario and dataset.
【Experiment Content】
- Design and implement functions for entropy, empirical conditional entropy, and information gain.
- Implement the ID3 algorithm on the given loan-application dataset (see the table in Appendix 1).
- Get familiar with the decision tree algorithms in the sklearn library.
- Apply sklearn's decision tree classifier to the iris dataset for class prediction.
【Report Requirements】
- Following the experiment content, write up the procedure, algorithms, and test results;
- Follow coding conventions: naming rules and comments;
- Survey the literature and discuss the application scenarios of the ID3 and C4.5 algorithms;
- Survey the literature and analyze decision tree pruning strategies.
【Appendix 1】

Loan-application dataset (values kept in Chinese to match the code: 青年/中年/老年 = young / middle-aged / elderly; 是/否 = yes / no; 一般/好/非常好 = fair / good / excellent):

| # | 年龄 (Age) | 有工作 (Has job) | 有自己的房子 (Owns house) | 信贷情况 (Credit rating) | 类别 (Class) |
|---|---|---|---|---|---|
| 0 | 青年 | 否 | 否 | 一般 | 否 |
| 1 | 青年 | 否 | 否 | 好 | 否 |
| 2 | 青年 | 是 | 否 | 好 | 是 |
| 3 | 青年 | 是 | 是 | 一般 | 是 |
| 4 | 青年 | 否 | 否 | 一般 | 否 |
| 5 | 中年 | 否 | 否 | 一般 | 否 |
| 6 | 中年 | 否 | 否 | 好 | 否 |
| 7 | 中年 | 是 | 是 | 好 | 是 |
| 8 | 中年 | 否 | 是 | 非常好 | 是 |
| 9 | 中年 | 否 | 是 | 非常好 | 是 |
| 10 | 老年 | 否 | 是 | 非常好 | 是 |
| 11 | 老年 | 否 | 是 | 好 | 是 |
| 12 | 老年 | 是 | 否 | 好 | 是 |
| 13 | 老年 | 是 | 否 | 非常好 | 是 |
| 14 | 老年 | 否 | 否 | 一般 | 否 |
Experiment Process and Results
Experiment Code and Screenshots
1.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter
import math
from math import log
import pprint
```
2.
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn import preprocessing
import numpy as np
import pandas as pd
from sklearn import tree
import graphviz

features = ["年龄", "有工作", "有自己的房子", "信贷情况"]
X_train = pd.DataFrame([
    ["青年", "否", "否", "一般"],
    ["青年", "否", "否", "好"],
    ["青年", "是", "否", "好"],
    ["青年", "是", "是", "一般"],
    ["青年", "否", "否", "一般"],
    ["中年", "否", "否", "一般"],
    ["中年", "否", "否", "好"],
    ["中年", "是", "是", "好"],
    ["中年", "否", "是", "非常好"],
    ["中年", "否", "是", "非常好"],
    ["老年", "否", "是", "非常好"],
    ["老年", "否", "是", "好"],
    ["老年", "是", "否", "好"],
    ["老年", "是", "否", "非常好"],
    ["老年", "否", "否", "一般"]
])
y_train = pd.DataFrame(["否", "否", "是", "是", "否",
                        "否", "否", "是", "是", "是",
                        "是", "是", "是", "是", "否"])
# Preprocessing: encode the categorical strings as integers
le_x = preprocessing.LabelEncoder()
le_x.fit(np.unique(X_train))
X_train = X_train.apply(le_x.transform)
le_y = preprocessing.LabelEncoder()
le_y.fit(np.unique(y_train))
y_train = y_train.apply(le_y.transform)
# Fit a sklearn decision tree
model_tree = DecisionTreeClassifier()
model_tree.fit(X_train, y_train)
# Visualization
dot_data = tree.export_graphviz(model_tree, out_file=None,
                                feature_names=features,
                                class_names=[str(k) for k in np.unique(y_train)],
                                filled=True, rounded=True,
                                special_characters=True)
graph = graphviz.Source(dot_data)
graph
```
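A note on the preprocessing above: fitting a single LabelEncoder on np.unique(X_train) pools all four columns into one shared integer vocabulary (so 是/否 receive the same codes in every column), which works for this small dataset; a per-column sklearn.preprocessing.OrdinalEncoder would be the more idiomatic choice. Also, DecisionTreeClassifier treats the integer codes as ordered values, so its splits are thresholds over the codes rather than true categorical splits.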
3.
```python
def create_data():
    datasets = [['青年', '否', '否', '一般', '否'],
                ['青年', '否', '否', '好', '否'],
                ['青年', '是', '否', '好', '是'],
                ['青年', '是', '是', '一般', '是'],
                ['青年', '否', '否', '一般', '否'],
                ['中年', '否', '否', '一般', '否'],
                ['中年', '否', '否', '好', '否'],
                ['中年', '是', '是', '好', '是'],
                ['中年', '否', '是', '非常好', '是'],
                ['中年', '否', '是', '非常好', '是'],
                ['老年', '否', '是', '非常好', '是'],
                ['老年', '否', '是', '好', '是'],
                ['老年', '是', '否', '好', '是'],
                ['老年', '是', '否', '非常好', '是'],
                ['老年', '否', '否', '一般', '否'],
                ]
    labels = [u'年龄', u'有工作', u'有自己的房子', u'信贷情况', u'类别']
    # Return the dataset and the name of each column
    return datasets, labels
```
4.
```python
datasets, labels = create_data()
```
5.
```python
train_data = pd.DataFrame(datasets, columns=labels)
```
6.
```python
train_data
```
7.
```python
# Empirical entropy H(D)
def calc_ent(datasets):
    data_length = len(datasets)
    label_count = {}
    for i in range(data_length):
        label = datasets[i][-1]
        if label not in label_count:
            label_count[label] = 0
        label_count[label] += 1
    ent = -sum([(p / data_length) * log(p / data_length, 2)
                for p in label_count.values()])
    return ent

# def entropy(y):
#     """
#     Entropy of a label sequence
#     """
#     hist = np.bincount(y)
#     ps = hist / np.sum(hist)
#     return -np.sum([p * np.log2(p) for p in ps if p > 0])

# Empirical conditional entropy H(D|A)
def cond_ent(datasets, axis=0):
    data_length = len(datasets)
    feature_sets = {}
    for i in range(data_length):
        feature = datasets[i][axis]
        if feature not in feature_sets:
            feature_sets[feature] = []
        feature_sets[feature].append(datasets[i])
    cond_ent = sum(
        [(len(p) / data_length) * calc_ent(p) for p in feature_sets.values()])
    return cond_ent

# Information gain g(D, A) = H(D) - H(D|A)
def info_gain(ent, cond_ent):
    return ent - cond_ent

def info_gain_train(datasets):
    count = len(datasets[0]) - 1
    ent = calc_ent(datasets)
    # ent = entropy(datasets)
    best_feature = []
    for c in range(count):
        c_info_gain = info_gain(ent, cond_ent(datasets, axis=c))
        best_feature.append((c, c_info_gain))
        print('feature ({}) - info_gain - {:.3f}'.format(labels[c], c_info_gain))
    # Pick the feature with the largest information gain
    best_ = max(best_feature, key=lambda x: x[-1])
    return 'feature ({}) has the largest information gain; choose it as the root feature'.format(labels[best_[0]])
```
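For reference, these functions implement the standard definitions from Li Hang's *Statistical Learning Methods* (Ch. 5): the empirical entropy, the empirical conditional entropy, and the information gain:

$$H(D) = -\sum_{k=1}^{K}\frac{|C_k|}{|D|}\log_2\frac{|C_k|}{|D|},\qquad H(D\mid A)=\sum_{i=1}^{n}\frac{|D_i|}{|D|}H(D_i),\qquad g(D,A)=H(D)-H(D\mid A)$$

where the $C_k$ are the classes appearing in $D$ and the $D_i$ are the subsets of $D$ induced by the $n$ values of feature $A$.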
8.
```python
info_gain_train(np.array(datasets))
```
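For this dataset the printed gains should be approximately 0.083 (年龄), 0.324 (有工作), 0.420 (有自己的房子), and 0.363 (信贷情况), matching Example 5.2 in Li Hang's book, so 有自己的房子 (owns a house) is selected as the root feature.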
9.
```python
# ID3 decision tree, generated recursively (Li Hang, Algorithm 5.2)
class Node:
    def __init__(self, root=True, label=None, feature_name=None, feature=None):
        # root=True marks a leaf (single-node tree) in this implementation
        self.root = root
        self.label = label
        self.feature_name = feature_name
        self.feature = feature
        self.tree = {}
        self.result = {
            'label': self.label,
            'feature': self.feature,
            'tree': self.tree
        }

    def __repr__(self):
        return '{}'.format(self.result)

    def add_node(self, val, node):
        self.tree[val] = node

    def predict(self, features):
        if self.root is True:
            return self.label
        # Recurse into the subtree selected by this node's feature value.
        # Note: self.feature is the column index within the sub-table used at
        # this level of training; for this dataset it coincides with the index
        # in the full feature list.
        return self.tree[features[self.feature]].predict(features)

class DTree:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self._tree = {}

    # Empirical entropy
    @staticmethod
    def calc_ent(datasets):
        data_length = len(datasets)
        label_count = {}
        for i in range(data_length):
            label = datasets[i][-1]
            if label not in label_count:
                label_count[label] = 0
            label_count[label] += 1
        ent = -sum([(p / data_length) * log(p / data_length, 2)
                    for p in label_count.values()])
        return ent

    # Empirical conditional entropy
    def cond_ent(self, datasets, axis=0):
        data_length = len(datasets)
        feature_sets = {}
        for i in range(data_length):
            feature = datasets[i][axis]
            if feature not in feature_sets:
                feature_sets[feature] = []
            feature_sets[feature].append(datasets[i])
        cond_ent = sum([(len(p) / data_length) * self.calc_ent(p)
                        for p in feature_sets.values()])
        return cond_ent

    # Information gain
    @staticmethod
    def info_gain(ent, cond_ent):
        return ent - cond_ent

    def info_gain_train(self, datasets):
        count = len(datasets[0]) - 1
        ent = self.calc_ent(datasets)
        best_feature = []
        for c in range(count):
            c_info_gain = self.info_gain(ent, self.cond_ent(datasets, axis=c))
            best_feature.append((c, c_info_gain))
        # Pick the feature with the largest information gain
        best_ = max(best_feature, key=lambda x: x[-1])
        return best_

    def train(self, train_data):
        """
        input: dataset D (DataFrame), feature set A, threshold eta
        output: decision tree T
        """
        _, y_train, features = train_data.iloc[:, :-1], \
            train_data.iloc[:, -1], train_data.columns[:-1]
        # 1. If all instances in D belong to one class Ck, T is a single-node
        #    tree labeled Ck; return T
        if len(y_train.value_counts()) == 1:
            return Node(root=True, label=y_train.iloc[0])
        # 2. If A is empty, T is a single-node tree labeled with the majority
        #    class Ck in D; return T
        if len(features) == 0:
            return Node(
                root=True,
                label=y_train.value_counts().sort_values(
                    ascending=False).index[0])
        # 3. Compute the information gains (as in 5.1); Ag is the feature with
        #    the largest gain
        max_feature, max_info_gain = self.info_gain_train(np.array(train_data))
        max_feature_name = features[max_feature]
        # 4. If Ag's gain is below the threshold eta, T is a single-node tree
        #    labeled with the majority class Ck in D; return T
        if max_info_gain < self.epsilon:
            return Node(
                root=True,
                label=y_train.value_counts().sort_values(
                    ascending=False).index[0])
        # 5. Otherwise split D on each value of Ag
        node_tree = Node(
            root=False, feature_name=max_feature_name, feature=max_feature)
        feature_list = train_data[max_feature_name].value_counts().index
        for f in feature_list:
            sub_train_df = train_data.loc[train_data[max_feature_name] ==
                                          f].drop([max_feature_name], axis=1)
            # 6. Recursively build the subtrees
            sub_tree = self.train(sub_train_df)
            node_tree.add_node(f, sub_tree)
        # pprint.pprint(node_tree.tree)
        return node_tree

    def fit(self, train_data):
        self._tree = self.train(train_data)
        return self._tree

    def predict(self, X_test):
        return self._tree.predict(X_test)
```
10.
```python
datasets, labels = create_data()
data_df = pd.DataFrame(datasets, columns=labels)
dt = DTree()
tree = dt.fit(data_df)
```
11.
```python
tree
```
12.
```python
dt.predict(['老年', '否', '否', '一般'])
```
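The expected output is '否': the root node splits on 有自己的房子 (feature index 2), which is '否' for this sample, and the resulting subtree splits on 有工作, also '否', leading to a leaf labeled '否'. This agrees with sample 14 in Appendix 1.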
13.
```python
# Binary-classification subset of iris: the first 100 samples (classes 0 and 1)
# and the first two features (sepal length, sepal width).
# Note: this redefines the create_data from cell 3.
def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = [
        'sepal length', 'sepal width', 'petal length', 'petal width', 'label'
    ]
    data = np.array(df.iloc[:100, [0, 1, -1]])
    # print(data)
    return data[:, :2], data[:, -1]

X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
```
14.
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
import graphviz
```
15.
```python
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
```
16. Output of the cell above:
```
DecisionTreeClassifier()
```
17.
```python
clf.score(X_test, y_test)
```
18.
```python
# export_graphviz writes Graphviz dot source into the file
# (the file is named mytree.pdf, but it contains dot text, not a PDF)
tree_pic = export_graphviz(clf, out_file="mytree.pdf")
with open('mytree.pdf') as f:
    dot_graph = f.read()
```
19.
```python
graphviz.Source(dot_graph)
```
20.
```python
import numpy as np

# Least-squares regression tree (CART, Li Hang Ch. 5.5)
class LeastSqRTree:
    def __init__(self, train_X, y, epsilon):
        # Training features
        self.x = train_X
        # Target values
        self.y = y
        # Number of features
        self.feature_count = train_X.shape[1]
        # Loss threshold
        self.epsilon = epsilon
        # The regression tree
        self.tree = None

    def _fit(self, x, y, feature_count, epsilon):
        # Choose the optimal splitting feature j and split point s
        (j, s, minval, c1, c2) = self._divide(x, y, feature_count)
        # Initialize the (sub)tree
        tree = {"feature": j, "value": x[s, j], "left": None, "right": None}
        if minval < self.epsilon or len(y[np.where(x[:, j] <= x[s, j])]) <= 1:
            tree["left"] = c1
        else:
            tree["left"] = self._fit(x[np.where(x[:, j] <= x[s, j])],
                                     y[np.where(x[:, j] <= x[s, j])],
                                     self.feature_count, self.epsilon)
        if minval < self.epsilon or len(y[np.where(x[:, j] > x[s, j])]) <= 1:
            tree["right"] = c2
        else:
            tree["right"] = self._fit(x[np.where(x[:, j] > x[s, j])],
                                      y[np.where(x[:, j] > x[s, j])],
                                      self.feature_count, self.epsilon)
        return tree

    def fit(self):
        self.tree = self._fit(self.x, self.y, self.feature_count, self.epsilon)

    @staticmethod
    def _divide(x, y, feature_count):
        # Squared-error cost of every candidate (feature, split point) pair
        cost = np.zeros((feature_count, len(x)))
        # Eq. 5.21 in Li Hang's book
        for i in range(feature_count):
            for k in range(len(x)):
                # Candidate split value: row k, column i
                value = x[k, i]
                y1 = y[np.where(x[:, i] <= value)]
                c1 = np.mean(y1)
                y2 = y[np.where(x[:, i] > value)]
                c2 = np.mean(y2)
                y1[:] = y1[:] - c1
                y2[:] = y2[:] - c2
                cost[i, k] = np.sum(y1 * y1) + np.sum(y2 * y2)
        # Locate the minimum-cost split
        cost_index = np.where(cost == np.min(cost))
        # Splitting feature index j
        j = cost_index[0][0]
        # Split-point row index s
        s = cost_index[1][0]
        # Means c1, c2 of the two resulting regions
        c1 = np.mean(y[np.where(x[:, j] <= x[s, j])])
        c2 = np.mean(y[np.where(x[:, j] > x[s, j])])
        return j, s, cost[cost_index], c1, c2
```
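For reference, `_divide` exhaustively minimizes the least-squares splitting criterion (Eq. 5.21 in Li Hang's book):

$$\min_{j,\,s}\Big[\min_{c_1}\sum_{x_i\in R_1(j,s)}(y_i-c_1)^2+\min_{c_2}\sum_{x_i\in R_2(j,s)}(y_i-c_2)^2\Big]$$

where $R_1(j,s)=\{x\mid x^{(j)}\le s\}$ and $R_2(j,s)=\{x\mid x^{(j)}> s\}$, and each inner minimum is attained at the mean of $y$ over the corresponding region, giving the leaf values $c_1$ and $c_2$.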
21.
```python
train_X = np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]).T
y = np.array([4.50, 4.75, 4.91, 5.34, 5.80, 7.05, 7.90, 8.23, 8.70, 9.00])

model_tree = LeastSqRTree(train_X, y, .2)
model_tree.fit()
model_tree.tree
```
Summary
1. Application scenarios of the ID3 and C4.5 algorithms
ID3:
ID3 rests on a clear theoretical foundation, is relatively simple, and has strong learning ability, making it well suited to large-scale learning problems. It is a classic example in data mining and knowledge discovery and laid the theoretical groundwork for the many improved algorithms proposed later. ID3 has seen especially wide use in machine learning, knowledge discovery, and data mining. Note, however, that ID3 handles only discrete (categorical) features, and its information-gain criterion is biased toward features with many distinct values.
C4.5:
C4.5 is clearly structured, handles continuous attributes (by thresholding them), guards against overfitting through pruning, achieves good accuracy, and applies to a wide range of problems, making it a decision tree algorithm of real practical value. (Strictly speaking, C4.5 builds classification trees; for regression problems one would turn to CART instead.) C4.5 has been widely applied in machine learning, knowledge discovery, financial analysis, remote-sensing image classification, manufacturing, molecular biology, and data mining.
2. Analysis of decision tree pruning strategies
(1) Pruning strategies fall into pre-pruning and post-pruning. The dataset is first split into a training set and a validation set: the training set determines which attribute each node splits on during tree generation; the validation set is used in pre-pruning to decide whether a node should be expanded on that attribute at all, and in post-pruning to decide whether a node's subtree should be pruned back.
(2) Pre-pruning prunes during tree generation, stopping the growth of a branch early.
Common pre-pruning criteria include limits on the tree depth, the number of leaf nodes, the number of samples per leaf, and the information gain of a split; each maps onto a sklearn hyperparameter, as shown in the sketch after this list.
Depth limit: capping the maximum depth stops the tree from splitting downward indefinitely.
Leaf-count limit: capping how many leaf nodes the tree may contain likewise prevents unbounded splitting.
Leaf-sample limit: requiring each leaf to contain at least a minimum number of samples rules out the degenerate case where the tree keeps splitting until every leaf holds a single sample.
Information-gain threshold: if the information gain is G1 before a node splits and becomes G2 after splitting, and the difference G1 - G2 is very small, the split brings little benefit and the node need not be split further.
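As referenced above, each pre-pruning strategy corresponds to a constructor parameter of sklearn's DecisionTreeClassifier; a minimal sketch (the particular values are illustrative, not tuned):

```python
from sklearn.tree import DecisionTreeClassifier

# Pre-pruning expressed as hyperparameters (illustrative values)
clf = DecisionTreeClassifier(
    max_depth=3,                 # limit tree depth
    max_leaf_nodes=8,            # limit the number of leaf nodes
    min_samples_leaf=5,          # minimum samples required in each leaf
    min_impurity_decrease=0.01,  # minimum impurity decrease required to split
)
```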
(3) Post-pruning first grows a complete tree on the training set and then examines the non-leaf nodes bottom-up; if replacing a node's subtree with a leaf improves generalization, the subtree is replaced by a leaf. The pruning decision can be framed as minimizing a cost-complexity loss:

$$C_\alpha(T) = \sum_{t=1}^{T_{leaf}} N_t \cdot \mathrm{Gini}(t) + \alpha \cdot T_{leaf}$$

The left-hand side is the final loss, and the smaller it is the better. On the right-hand side, the first term sums each leaf's Gini index (or entropy) weighted by its sample count $N_t$; the parameter α is user-specified, and $T_{leaf}$ is the number of leaf nodes produced once the current node splits. More leaves mean a larger loss. A larger α expresses a stronger wish to avoid overfitting; a smaller α favors good training-set performance with less concern about overfitting.
Take a decision tree as a worked example of how the post-pruning loss is computed (the accompanying figure is not reproduced here).
In that figure, the red node's loss before splitting is 0.32 × 5 + α; the loss after splitting is computed from the Gini indices of the left (yellow) and right (blue) subtrees plus the leaf penalty: 0 × 1 + 0 × 4 + α × 2.
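In sklearn this cost-complexity criterion is exposed through ccp_alpha and cost_complexity_pruning_path. A minimal sketch of post-pruning by held-out score (the flow is illustrative, and X_train/X_test etc. are assumed to come from the iris cells above; ideally the selection would use a separate validation set rather than the final test set):

```python
from sklearn.tree import DecisionTreeClassifier

# Effective alphas at which subtrees would be pruned away
path = DecisionTreeClassifier().cost_complexity_pruning_path(X_train, y_train)

# Fit one tree per alpha and keep the one that scores best on held-out data
best_clf, best_score = None, -1.0
for alpha in path.ccp_alphas:
    clf = DecisionTreeClassifier(ccp_alpha=alpha).fit(X_train, y_train)
    score = clf.score(X_test, y_test)
    if score > best_score:
        best_clf, best_score = clf, score
```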
3. Problems encountered during the experiment
(1) Running graphviz raised the error: failed to execute WindowsPath('dot'), make sure the Graphviz executables are on your systems' PATH.
Solution: Graphviz is standalone software, so installing only the Python graphviz package from within Jupyter cannot render the decision tree. Download Graphviz from its official website and add its executables to the PATH environment variable; then close Jupyter, open a command prompt, and run dot -version to verify the configuration (screenshot omitted). If it succeeds, restart Jupyter.
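An alternative that sidesteps the system Graphviz dependency entirely is sklearn's built-in matplotlib renderer. A minimal sketch, assuming clf is the fitted iris classifier from cell 15:

```python
import matplotlib.pyplot as plt
from sklearn import tree

# Render the fitted tree with matplotlib only (no Graphviz install needed)
fig, ax = plt.subplots(figsize=(10, 6))
tree.plot_tree(clf, feature_names=['sepal length', 'sepal width'],
               filled=True, rounded=True, ax=ax)
plt.show()
```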