Exploratory Data Analysis.
Uncovering the characteristics of a dataset is truly a discipline of its own.
I General
After working through the general steps below, the EDA is essentially complete and the distribution of each feature is visible.
1. Import pattern
import os
import numpy as np
import pandas as pd; pd.set_option('display.max_columns', 30)  # show up to 30 columns when printing
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import warnings; warnings.filterwarnings("ignore")  # silence library warnings
from sklearnex import patch_sklearn; patch_sklearn()  # Intel extension that accelerates scikit-learn
2. Inspecting the data
train.info()        # column dtypes and non-null counts
train.head()        # first few rows
train.describe().T  # summary statistics, transposed for readability
3. Adding and dropping columns
train['Status'] = target
train.drop(['id'],axis=1,inplace=True)
A feature column with no predictive value, such as id, can be dropped outright.
Specifying inplace=True in drop applies the change to the original DataFrame rather than to a copy.
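For reference, the non-mutating equivalent reassigns the result instead:
train = train.drop(columns=['id'])  # returns a new DataFrame; the original stays intact until reassigned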
4. Missing values and duplicates
train.isna().sum().sort_values(ascending=False) / train.shape[0] * 100  # percent missing per column, descending
train.drop_duplicates(inplace=True)
Duplicates should not be removed in every case; whether a repeated row is noise or a legitimate observation depends on the data.
5. Density plots
numeric_columns = train.select_dtypes(include=['float', 'int']).columns
sns.set(style="whitegrid")
num_plots = len(numeric_columns)
rows = (num_plots + 1) // 2
cols = 2
_, axes = plt.subplots(nrows=rows, ncols=cols, figsize=(8 * cols, 6 * rows))
axes = np.atleast_2d(axes)  # keep 2-D indexing even when there is only one row of subplots
for i, feature_name in enumerate(numeric_columns):
    row_idx, col_idx = divmod(i, cols)
    sns.histplot(data=train, x=feature_name, kde=True, ax=axes[row_idx, col_idx])
    axes[row_idx, col_idx].set_title(f'Density Plot of {feature_name}')
    axes[row_idx, col_idx].set_xlabel('Feature Value')
    axes[row_idx, col_idx].set_ylabel('Density')
plt.tight_layout()
plt.show()
6. Skewness and kurtosis
pd.DataFrame({'train': train.skew(), 'test': test.skew()})          # per-column skewness
pd.DataFrame({'train': train.kurtosis(), 'test': test.kurtosis()})  # per-column excess kurtosis
Skewness measures asymmetry: right skew (positive) means a long, heavy tail to the right of the mean, while left skew (negative) means a long, heavy tail to the left. Kurtosis measures peakedness: raw kurtosis above 3 indicates a sharper-than-normal peak, below 3 a flatter one. Note that pandas' kurtosis() returns excess kurtosis (raw kurtosis minus 3), so there the comparison point is 0.
For clearly skewed features, a Box-Cox transform can be applied: λ = 0 is the log transform, commonly used for right skew, while λ = 2 is the square transform, commonly used for left skew.
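The two special cases in plain NumPy, as a minimal sketch (x is a hypothetical, strictly positive column):
right_fixed = np.log(train['x'])  # λ = 0: compresses a long right tail
left_fixed = train['x'] ** 2      # λ = 2: spreads out the upper values to counter left skew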
7. Heatmap
corr = train.corr(method='spearman')
plt.figure(figsize=(12, 10))
sns.heatmap(corr, linewidths=0.5, annot=True, cmap="RdBu", vmin=-1, vmax=1)
A heatmap is typically used to spot collinearity among the input variables: when two inputs are highly correlated, drop one of them, and also drop inputs that correlate only weakly with the output variable.
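A minimal sketch of the collinearity rule, assuming a cutoff of 0.9 (the threshold is a judgment call, not from this post):
upper = corr.abs().where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # keep each pair once (upper triangle)
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]     # one column of every highly correlated pair
train = train.drop(columns=to_drop)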
8. Box plots
df_melted = pd.melt(train.select_dtypes(include=['float', 'int']))  # long format: one (variable, value) pair per cell
custom_colors = px.colors.qualitative.Plotly
fig = px.box(df_melted, x='variable', y='value', color='variable', color_discrete_sequence=custom_colors)
fig.update_layout(title='Box Plots')
fig.show()
This shows how many outliers each feature contains.
9. Hierarchical clustering dendrogram
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
corr = train.corr(method="spearman")
link = linkage(squareform(1 - abs(corr)), "complete")  # distance = 1 - |correlation|
plt.figure(figsize=(8, 8), dpi=400)
dendro = dendrogram(link, orientation='right', labels=train.columns)
plt.show()
This reveals how the features cluster. When two clusters are extremely similar, one of them is often dropped.
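A minimal sketch that automates this with scipy's fcluster, assuming train holds only the feature columns that went into corr, and that features merging below a distance of 0.2 count as "extremely similar" (the cutoff is an assumption):
from scipy.cluster.hierarchy import fcluster
cluster_ids = fcluster(link, t=0.2, criterion='distance')    # features merging below distance 0.2 share a cluster
keep = pd.Series(corr.columns).groupby(cluster_ids).first()  # one representative column per cluster
train = train[keep.tolist()]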
II Data Cleaning
Data cleaning targets outliers, missing values, and the like.
1. IQR
This is usually the first step in cleaning out outliers.
threshold = 6  # deliberately loose; the textbook multiplier is 1.5, but a large value removes only the most extreme points
Q1 = train.quantile(0.25)
Q3 = train.quantile(0.75)
IQR = Q3 - Q1
lower_bound = Q1 - threshold * IQR
upper_bound = Q3 + threshold * IQR
train = train[((train >= lower_bound) & (train <= upper_bound)).all(axis=1)]  # keep rows inside the bounds for every column
III Feature Transformation
These transformations change the data distribution significantly.
1. Discretization
For a continuous predictor, define a new categorical feature based on its distribution. This operation is also known as binning.
For example, values on a 1 - 10 scale can be snapped to the grid [1.25, 2.25, 3.15, 4, 5.2, 5.75, 6.25, 7, 8.1, 9.2].
y = train_df['Hardness'].values
unique_target = np.array([1.25, 2.25, 3.15, 4, 5.2, 5.75, 6.25, 7, 8.1, 9.2])
y_label = []
for i in range(len(y)):
    min_dis = np.inf  # was hard-coded to 1, which silently mislabels any value farther than 1 from every grid point
    best_label = 0
    for j in range(len(unique_target)):
        dis = abs(y[i] - unique_target[j])
        if dis < min_dis:
            min_dis = dis
            best_label = j
    y_label.append(best_label)  # index of the nearest grid point
train_df['Hardness_label'] = y_label
train_df.head()
This works extremely well when the quantity being predicted takes only a handful of exact, fixed values.
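The nested loop above can also be written as a single NumPy broadcast; an equivalent one-liner, reusing y and unique_target from the block above:
train_df['Hardness_label'] = np.argmin(np.abs(y[:, None] - unique_target[None, :]), axis=1)  # nearest grid point per sample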
2. Box-Cox transform
Corrects skewed distributions.
from scipy.stats import boxcox
train_ = train.drop(columns=['Hardness'])
train_ += 1e-10  # boxcox requires strictly positive input; this shift only helps when the data are non-negative
test += 1e-10
for col in train_.columns:
    train[col], lambda_ = boxcox(train_[col])  # fit lambda on the training column
    test[col] = boxcox(test[col], lambda_)     # reuse the same lambda on the test column
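If some columns contain zeros or negatives, the shift above is fragile; scikit-learn's PowerTransformer with the Yeo-Johnson variant handles them natively. A minimal sketch (an alternative, not what this post itself uses):
from sklearn.preprocessing import PowerTransformer
pt = PowerTransformer(method='yeo-johnson')  # accepts zero and negative values, unlike Box-Cox
train[train_.columns] = pt.fit_transform(train[train_.columns])
test[train_.columns] = pt.transform(test[train_.columns])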
3. Standard scaling
Standardizes a feature to zero mean and unit standard deviation. This is generally used when a variable's range is too large.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
train[['allelectrons_Total', 'density_Total']] = scaler.fit_transform(train[['allelectrons_Total', 'density_Total']])
test[['allelectrons_Total', 'density_Total']] = scaler.transform(test[['allelectrons_Total', 'density_Total']])  # fit on train only, to avoid leakage
4. Min-max scaling
Also called normalization. Pick either min-max or standard scaling, not both.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_ = train.drop(columns=['Hardness'])
train[train_.columns] = scaler.fit_transform(train[train_.columns])
test[train_.columns] = scaler.transform(test[train_.columns])  # reuse the train-fitted scaler; refitting on test would leak
del train_
IV Encoding
Encoding applies to categorical features.
1. Target encoding
Encodes the categorical features of the training set using statistics of the target variable. In effect, columns that used to hold strings could not take part in arithmetic; after this transformation they become quantitative values that can.
For regression, the target's mean, standard deviation, and similar statistics are typically used; for classification, the frequency of each target class within each level of the categorical feature is typically used.
train_df = total_df[:len(train_df)]
keys = train_df.keys().values
TARGET_NAME = 'Hardness'
# treat as categorical: columns with a modest number of distinct values (3 to 30)
cat_keys = [key for key in keys if ((TARGET_NAME not in key) and (train_df[key].nunique() > 2) and (train_df[key].nunique() <= 30))]
print(f"cat_keys:{cat_keys}")
for key in cat_keys:
    values = np.unique(train_df[key].values)
    print(f"key:{key},values:{values}")
    # per-level statistics of the target; NaN targets on the test rows are ignored by the aggregations
    total_df[key + '_target_mean'] = total_df.groupby([key])[TARGET_NAME].transform('mean')
    total_df[key + '_target_std'] = total_df.groupby([key])[TARGET_NAME].transform('std')
    total_df[key + '_target_skew'] = total_df.groupby([key])[TARGET_NAME].transform('skew')
    # per-level row count, computed on the training split only
    key_target = train_df[TARGET_NAME].groupby(train_df[key]).count()
    key_target = pd.DataFrame({key: key_target.index.values, key + f"_{TARGET_NAME}_count": key_target.values})
    total_df = pd.merge(total_df, key_target, on=key, how="left")
    del key_target
In this way every level of every categorical feature is mapped to a statistic of the target variable. That mapping is then applied to the test set at prediction time.
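Because the encoded columns live in total_df, recovering the two splits is just slicing; a minimal sketch, assuming total_df was built by concatenating the training rows above the test rows:
train_df = total_df[:len(train_df)].copy()
test_df = total_df[len(train_df):].drop(columns=[TARGET_NAME]).copy()  # test rows carry a NaN target, so drop it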
2. One-hot encoding
One-hot encoding represents a category with 0s and 1s; the values are meaningless as numbers, qualitative, and unordered.
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

transformer_cat = make_pipeline(
    SimpleImputer(strategy="constant", fill_value="NA"),  # fill missing categories with the literal string "NA"
    OneHotEncoder(handle_unknown='ignore')                # unseen test categories encode as all zeros
)
3. Label encoding
Assigns each category label a unique integer; suited to ordinal categorical features. The values produced by label encoding are meaningful, quantitative, and orderable.
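A minimal sketch using scikit-learn's OrdinalEncoder (its LabelEncoder is meant for targets, not features); the column 'quality' and its three levels are hypothetical:
from sklearn.preprocessing import OrdinalEncoder
encoder = OrdinalEncoder(categories=[['low', 'medium', 'high']])  # spell out the order so the integers respect it
train[['quality']] = encoder.fit_transform(train[['quality']])
test[['quality']] = encoder.transform(test[['quality']])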
4. Frequency encoding
Replaces each category with its frequency of occurrence, which is useful when a categorical feature has many unique values.
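A minimal sketch with plain pandas; the high-cardinality column 'element' is hypothetical:
freq = train['element'].value_counts(normalize=True)        # level -> relative frequency, learned on train only
train['element_freq'] = train['element'].map(freq)
test['element_freq'] = test['element'].map(freq).fillna(0)  # levels unseen in train fall back to 0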
V Hyperparameter Optimization
The process of fitting many models to pick out the best combination of hyperparameters.
1. optuna
#import optuna
from lightgbm import LGBMClassifier

def accuracy(y_true, y_pred):
    return np.sum(y_true == y_pred) / len(y_true)

# The study below was run once and then commented out; the best trial found is hard-coded here.
# Trial 11 finished with value: 0.6119673617407072 and parameters:
lgbm_params = {'random_state': 1819, 'n_estimators': 309, 'reg_alpha': 0.009043959900513852, 'reg_lambda': 6.932606602460183, 'colsample_bytree': 0.6183243994985523, 'subsample': 0.6595851034943229, 'learning_rate': 0.016870023802940223, 'num_leaves': 50, 'min_child_samples': 27}

# def objective(trial):
#     param = {
#         'random_state': trial.suggest_int('random_state', 42, 2023),
#         'n_estimators': trial.suggest_int('n_estimators', 50, 500),
#         'reg_alpha': trial.suggest_float('reg_alpha', 1e-3, 10.0, log=True),   # suggest_loguniform is deprecated
#         'reg_lambda': trial.suggest_float('reg_lambda', 1e-3, 10.0, log=True),
#         'colsample_bytree': trial.suggest_float('colsample_bytree', 0.5, 1),
#         'subsample': trial.suggest_float('subsample', 0.5, 1),
#         'learning_rate': trial.suggest_float('learning_rate', 1e-4, 0.25, log=True),
#         'num_leaves': trial.suggest_int('num_leaves', 8, 64),
#         'min_child_samples': trial.suggest_int('min_child_samples', 1, 100),
#     }
#     model = LGBMClassifier(**param)
#     # note: lightgbm >= 4 expects early stopping via callbacks rather than fit() keywords
#     model.fit(train_X, train_y[:, 1], eval_set=[(valid_X, valid_y[:, 1])], early_stopping_rounds=100, verbose=False)
#     preds = model.predict(valid_X)
#     acc = accuracy(valid_y[:, 1], preds)
#     return acc
#
# study = optuna.create_study(direction='maximize', study_name='Optimize boosting hyperparameters')
# study.optimize(objective, n_trials=100)
# lgbm_params = study.best_trial.params

print('lgbm_params=', lgbm_params)