Feature Dimensionality Reduction
Dimensionality Reduction
PCA (Principal Component Analysis). Its defining property is that it keeps the components that account for the most variance in the dataset. PCA is extremely sensitive to the scale of the input features, so always standardize the features before applying it; only then does every dimension carry equal weight.
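A minimal sketch of that advice (the toy array and variable names are illustrative, not part of the original text): standardize the columns with StandardScaler before fitting PCA, so that a feature with a large numeric range cannot dominate the components.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# two features on very different scales; without scaling, the second would dominate
X = np.array([[1.0, 1000.0], [2.0, 3000.0], [3.0, 2000.0], [4.0, 4000.0]])
X_std = StandardScaler().fit_transform(X)  # each column: zero mean, unit variance
X_new = PCA(n_components=1).fit_transform(X_std)
print(X_new.shape)  # (4, 1)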
sklearn.decomposition.PCA
class PCA(sklearn.decomposition.base)
"""
Principal component analysis
:param n_components: int, float, None or string
Specifies the dimensionality to keep after PCA. The most common usage is to pass the target number of dimensions directly, in which case n_components is an integer greater than 1. A float between 0 and 1 means: keep enough components to explain that fraction of the variance (this form is used in the case study below).
If n_components is omitted, the default is min(number of samples, number of features).
:param whiten: bool, optional (default False)
Whether to whiten the output. Whitening normalizes each feature of the reduced data (to unit variance). For PCA itself whitening is usually unnecessary, but it can be worth enabling when further processing follows the reduction. The default is False, i.e. no whitening.
:param svd_solver:
Selects the SVD algorithm used for the decomposition; the default is usually good enough.
"""
Let's walk through an example:
>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, iterated_power='auto', n_components=2, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
>>> print(pca.explained_variance_ratio_)
[ 0.99244... 0.00755...]
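As a quick illustration of the whiten parameter (this continuation is a sketch, not taken from the scikit-learn docs), refitting the same X with whiten=True rescales each retained component to unit sample variance:
>>> pca_w = PCA(n_components=2, whiten=True)
>>> X_w = pca_w.fit_transform(X)
>>> np.allclose(np.var(X_w, axis=0, ddof=1), 1.0)
True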
Case Study
# products.csv               product information
# order_products__prior.csv  order-to-product information
# orders.csv                 the users' order information
# aisles.csv                 the aisle (item category) each product belongs to
In [ ]:
# import modules
import pandas as pd
from sklearn.decomposition import PCA
In [ ]:
# read the CSV files
products = pd.read_csv("../data/products.csv")
products.head()
Out[ ]:
| | product_id | product_name | aisle_id | department_id |
---|---|---|---|---|
0 | 1 | Chocolate Sandwich Cookies | 61 | 19 |
1 | 2 | All-Seasons Salt | 104 | 13 |
2 | 3 | Robust Golden Unsweetened Oolong Tea | 94 | 7 |
3 | 4 | Smart Ones Classic Favorites Mini Rigatoni Wit... | 38 | 1 |
4 | 5 | Green Chile Anytime Sauce | 5 | 13 |
In [ ]:
opp = pd.read_csv("../data/order_products__prior.csv")
opp.head()
Out[ ]:
| | order_id | product_id | add_to_cart_order | reordered |
---|---|---|---|---|
0 | 2 | 33120 | 1 | 1 |
1 | 2 | 28985 | 2 | 1 |
2 | 2 | 9327 | 3 | 0 |
3 | 2 | 45918 | 4 | 1 |
4 | 2 | 30035 | 5 | 0 |
In [ ]:
orders = pd.read_csv("../data/orders.csv")
orders.head()
Out[ ]:
| | order_id | user_id | eval_set | order_number | order_dow | order_hour_of_day | days_since_prior_order |
---|---|---|---|---|---|---|---|
0 | 2539329 | 1 | prior | 1 | 2 | 8 | NaN |
1 | 2398795 | 1 | prior | 2 | 3 | 7 | 15.0 |
2 | 473747 | 1 | prior | 3 | 3 | 12 | 21.0 |
3 | 2254736 | 1 | prior | 4 | 4 | 7 | 29.0 |
4 | 431534 | 1 | prior | 5 | 4 | 15 | 28.0 |
In [ ]:
aisles = pd.read_csv("../data/aisles.csv")
aisles.head()
Out[ ]:
| | aisle_id | aisle |
---|---|---|
0 | 1 | prepared soups salads |
1 | 2 | specialty cheeses |
2 | 3 | energy granola bars |
3 | 4 | instant foods |
4 | 5 | marinades meat preparation |
In [ ]:
# merge the tables
data = pd.merge(products, opp, on="product_id")
data = pd.merge(data, orders, on="order_id")
data = pd.merge(data, aisles, on="aisle_id")
data.head()
Out[ ]:
| | product_id | product_name | aisle_id | department_id | order_id | add_to_cart_order | reordered | user_id | eval_set | order_number | order_dow | order_hour_of_day | days_since_prior_order | aisle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | Chocolate Sandwich Cookies | 61 | 19 | 1107 | 7 | 0 | 38259 | prior | 2 | 1 | 11 | 7.0 | cookies cakes |
1 | 1 | Chocolate Sandwich Cookies | 61 | 19 | 5319 | 3 | 1 | 196224 | prior | 65 | 1 | 14 | 1.0 | cookies cakes |
2 | 1 | Chocolate Sandwich Cookies | 61 | 19 | 7540 | 4 | 1 | 138499 | prior | 8 | 0 | 14 | 7.0 | cookies cakes |
3 | 1 | Chocolate Sandwich Cookies | 61 | 19 | 9228 | 2 | 0 | 79603 | prior | 2 | 2 | 10 | 30.0 | cookies cakes |
4 | 1 | Chocolate Sandwich Cookies | 61 | 19 | 9273 | 30 | 0 | 50005 | prior | 1 | 1 | 15 | NaN | cookies cakes |
In [ ]:
# cross-tabulation: rows are users, columns are aisles, values are purchase counts
cross = pd.crosstab(data["user_id"], data["aisle"])
cross
Out[ ]:
aisle | air fresheners candles | asian foods | baby accessories | baby bath body care | baby food formula | bakery desserts | baking ingredients | baking supplies decor | beauty | beers coolers | ... | spreads | tea | tofu meat alternatives | tortillas flat bread | trail mix snack mix | trash bags liners | vitamins supplements | water seltzer sparkling water | white wines | yogurt |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
user_id | |||||||||||||||||||||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2 | 0 | 3 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | ... | 3 | 1 | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 42 |
3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 |
4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 |
5 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
206205 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
206206 | 0 | 4 | 0 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | ... | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
206207 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | ... | 3 | 4 | 0 | 2 | 1 | 0 | 0 | 11 | 0 | 15 |
206208 | 0 | 3 | 0 | 0 | 3 | 0 | 4 | 0 | 0 | 0 | ... | 5 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 33 |
206209 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
206209 rows × 134 columns
In [ ]:
# feature dimensionality reduction
pca = PCA(n_components=0.9)  # keep enough components to explain 90% of the variance
result = pca.fit_transform(cross)
result.shape
Out[ ]:
(206209, 27)
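As a quick sanity check (a sketch added here, not part of the original notebook), the 27 retained components should together explain at least 90% of the variance:
# cumulative explained variance ratio of the retained components
pca.explained_variance_ratio_.sum()  # >= 0.9, by construction of n_components=0.9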
Feature Selection
Feature selection on the data
Dimensionality reduction is essentially a mapping from one feature space to another: the original features are not simply discarded, but their values are transformed by the mapping. For example, suppose the data currently has 1000 features and we want to reduce it to 500 dimensions. Dimensionality reduction means finding a mapping from the 1000-dimensional space to a 500-dimensional space; every sample in the original space corresponds to a point in the new 500-dimensional space, and a feature value that was 9 in the original data may become, say, 3 after the mapping. Feature selection, by contrast, works with the original features directly, and there are several families of methods:
- Filter: VarianceThreshold
- Embedded: regularization, decision trees
- Wrapper
With filter methods, the data values themselves are unchanged after selection; only the number of dimensions shrinks. Embedded methods, by contrast, can change the data values as well as the dimensionality; Embedded selection is learned automatically as part of model training, which will become clearer once we cover the concrete methods later.
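To make the contrast concrete, here is a small sketch (the toy matrix is illustrative, not from the original text): a filter selector returns a subset of the original columns with their values untouched, whereas PCA returns entirely new values.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA
X = np.array([[1.0, 5.0, 0.0],
              [2.0, 5.0, 1.0],
              [3.0, 5.0, 0.0]])
# filter selection: the constant middle column is dropped, the remaining values are unchanged
VarianceThreshold().fit_transform(X)
# dimensionality reduction: every output value is a new coordinate, not an original column
PCA(n_components=2).fit_transform(X)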
Feature selection serves two main purposes:
(1) reduce the number of features (reduce dimensionality), which improves the model's ability to generalize and reduces overfitting;
(2) improve our understanding of the features and their values.
sklearn.feature_selection
Removing features whose values barely vary (removing low-variance features)
VarianceThreshold is a simple baseline approach to feature selection. It removes all features whose variance does not meet a given threshold. By default it removes all zero-variance features, i.e. features that take the same value in every sample.
Suppose we want to remove all features that are either one or zero in more than 80% of the samples. Boolean features are Bernoulli random variables with variance p(1 - p), so the threshold to use is 0.8 * (1 - 0.8) = 0.16:
from sklearn.feature_selection import VarianceThreshold
X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
sel = VarianceThreshold(threshold=(0.8 * (1 - 0.8)))
sel.fit_transform(X)
array([[0, 1],
[1, 0],
[0, 0],
[1, 1],
[1, 0],
[1, 1]])
Filter
In [ ]:
# import VarianceThreshold
from sklearn.feature_selection import VarianceThreshold
# sample data
data = [[0, 2, 0, 3],
        [0, 1, 4, 3],
        [0, 1, 1, 3]]
# instantiate; param threshold: features whose variance is not above this value are filtered out
vt = VarianceThreshold(threshold=0.0)
# filter the features
result = vt.fit_transform(data)
result
Out[ ]:
array([[2, 0],
[1, 4],
[1, 1]])
In [ ]:
# verify: compute the per-column variances
import numpy as np
a = np.array(data)
np.var(a, axis=0)
Out[ ]:
array([0. , 0.22222222, 2.88888889, 0. ])
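Building on that check (a small follow-up sketch, not in the original post), raising the threshold to 1.0 would additionally drop the column with variance ≈ 0.22, leaving only the third column:
vt2 = VarianceThreshold(threshold=1.0)
vt2.fit_transform(data)  # only the column with variance ~2.89 remains: [[0], [4], [1]]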