
Python study notes 37: pandas


Content based on https://www.runoob.com/pandas/pandas-tutorial.html

Contents

1. Series

1.1 Series characteristics

  1. Index: every Series has an index, which can be integers, strings, dates, etc.
    If no index is specified explicitly, pandas creates a default integer index.
  2. Data types: a Series can hold elements of different types, including integers, floats, and strings.

Constructor:
pandas.Series(data, index, dtype, name, copy)
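The dtype and name parameters of the constructor do not appear in the examples below; a minimal sketch (values made up for illustration):

```python
import pandas as pd

# dtype forces the element type; name labels the Series itself
s = pd.Series([1, 2, 3], index=['a', 'b', 'c'], dtype='float64', name='scores')
print(s.name)   # scores
print(s.dtype)  # float64
print(s['b'])   # 2.0
```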

# Series
>>> import pandas as pd
>>> a = [1,2,3]
>>> myvar = pd.Series(a) # create a Series from a list
>>> print(myvar)         # print the Series
0    1
1    2
2    3
dtype: int64
>>> 
>>> print(myvar[1]) # read a value by index
2

Series: specifying index values

>>> import pandas as pd
>>> a = ['Google', 'Runoob', 'Wiki']
>>> myvar = pd.Series(a, index=['x', 'y', 'z'])
>>> print(myvar)
x    Google
y    Runoob
z      Wiki
dtype: object
>>> print(myvar['y']) # read a value by its index label
Runoob

Series: creating a Series from a dict

>>> sites = {1:"Google", 2:"Runoob", 3:"Wiki"}
>>> myvar = pd.Series(sites)
>>> print(myvar)
1    Google
2    Runoob
3      Wiki
dtype: object

# index selects which entries to keep; all other entries are dropped:
>>> myvar = pd.Series(sites, index=[1,2])
>>> print(myvar)
1    Google
2    Runoob
dtype: object

1.2 More on Series

1.2.1 Basic operations

>>> a = ['Google', 'Runoob', 'Wiki']
>>> b = ['x', 'y', 'z']
>>> 
>>> myvar = pd.Series(a, index=b)
>>> 
>>> # Even with a custom index, positional integer access still works
>>> # (note: integer positions via [] are deprecated in recent pandas; prefer .iloc)
>>> myvar[1]   # positional index
'Runoob'
>>> myvar['y'] # custom index label
'Runoob'
>>> 
>>> # Slicing returns multiple values; out-of-range positions are ignored
>>> myvar[1:4]
y    Runoob
z      Wiki
dtype: object
>>> 
>>> # Iterate over the keys and values of a Series
>>> for k, v in myvar.items():
...     print(f'{k} -> {v}')
...
x -> Google
y -> Runoob
z -> Wiki
>>>

1.2.2 Basic operations and arithmetic

# Arithmetic
>>> myvar = pd.Series([9, 5, 8])
>>> myvar2 = myvar * 2 # multiply every element by 2
>>> myvar # the original Series is unchanged
0    9
1    5
2    8
dtype: int64
>>> myvar2 # the returned Series holds the doubled values.
0    18
1    10
2    16
dtype: int64

# Filtering
>>> myvar3 = myvar[myvar>6] # keep only elements greater than 6
>>> myvar3 # note the original index values are preserved, not renumbered.
0    9
2    8
dtype: int64


# Mathematical functions
>>> import numpy as np
>>> myvar4 = np.sqrt(myvar) # square root of each element; myvar is unchanged, a new Series is returned.
>>> myvar4 # note the dtype is automatically promoted to float64
0    3.000000
1    2.236068
2    2.828427
dtype: float64

1.2.3 Attributes and methods

# Get the index
>>> myvar = pd.Series(['Google', 'Runoob', 'Wiki'])
>>> idx = myvar.index
>>> idx # the default index is a RangeIndex
RangeIndex(start=0, stop=3, step=1)
>>>
>>> myvar = pd.Series(['Google', 'Runoob', 'Wiki'], index=['x', 'y', 'z'])
>>> idx = myvar.index
>>> idx # a custom index is returned as an Index
Index(['x', 'y', 'z'], dtype='object')


>>> myvar.values # get the underlying value array
array(['Google', 'Runoob', 'Wiki'], dtype=object)

2. DataFrame

2.1 DataFrame characteristics

A DataFrame is a two-dimensional table.

Constructor:
pandas.DataFrame(data, index, columns, dtype, copy)

Parameters:
data: the data (ndarray, Series, map, list, dict, etc.).
index: the index values, also called row labels.
columns: the column labels; defaults to RangeIndex(0, 1, 2, ..., n).
dtype: the data type.
copy: whether to copy the data; defaults to False.
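A minimal sketch exercising all of these parameters at once (the values are made up for illustration):

```python
import numpy as np
import pandas as pd

data = np.array([[1, 2], [3, 4]])
# index/columns label the rows/columns; dtype converts all values
df = pd.DataFrame(data, index=['r1', 'r2'], columns=['a', 'b'], dtype='float64')

# copy=True makes the DataFrame independent of the source array
df2 = pd.DataFrame(data, columns=['a', 'b'], copy=True)
data[0, 0] = 99
print(df2.loc[0, 'a'])  # still 1; the copy is unaffected
```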

Create from a list of lists:

>>> import pandas as pd
>>>
>>> data = [['Google', 10], ['Runoob', 12], ['Wiki', 13]]
>>> df = pd.DataFrame(data, columns=['Site', 'Age'])
>>>
>>> print(df)
     Site  Age
0  Google   10
1  Runoob   12
2    Wiki   13
>>>
>>> # Set the data type of each column
>>> df['Site'] = df['Site'].astype(str)
>>> df['Age'] = df['Age'].astype(float)
>>> df # note that the Age column's dtype has changed.
     Site   Age
0  Google  10.0
1  Runoob  12.0
2    Wiki  13.0

Create from a dict of lists:

>>> # The outer structure is a dict whose values are lists.
>>> # The dict keys become column labels; a RangeIndex becomes the row index.
>>> data = {'Site':['Google', 'Runoob', 'Wiki'], 'Age':[10, 12, 13]}
>>> df = pd.DataFrame(data)
>>> df
     Site  Age
0  Google   10
1  Runoob   12
2    Wiki   13
>>>

Create from a list of dicts:

>>> # The outer structure is a list whose elements are dicts.
>>> # The dict keys become column labels,
>>> # and each dict becomes one row of data; a missing key produces NaN.
>>> data = [{'a': 1, 'b': 2},{'a': 5, 'b': 10, 'c': 20}]
>>> df = pd.DataFrame(data)
>>> df
   a   b     c
0  1   2   NaN
1  5  10  20.0
>>>

Use loc to return the data of a given row; note that loc takes square brackets, not parentheses.

>>> import pandas as pd
>>> 
>>> data = {
...   "calories": [420, 380, 390],
...   "duration": [50, 40, 45],
... }
>>> 
>>> df = pd.DataFrame(data)
>>> 
>>> print(df.loc[0]) # row 0, returned as a Series
calories    420
duration     50
Name: 0, dtype: int64
>>> print(df.loc[1]) # row 1, returned as a Series
calories    380
duration     40
Name: 1, dtype: int64
>>>
>>> print(df.loc[[0, 1]]) # [[...]] returns multiple rows, as a DataFrame
   calories  duration
0       420        50
1       380        40
>>>

With custom index values (row labels), use loc[label] to fetch the corresponding row:

>>> import pandas as pd
>>>
>>> data = {
...   "calories": [420, 380, 390],
...   "duration": [50, 40, 45]
... }
>>>
>>> df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
>>>
>>> print(df)
      calories  duration
day1       420        50
day2       380        40
day3       390        45
>>>
>>> print(df.loc["day2"])
calories    380
duration     40
Name: day2, dtype: int64
>>>

2.2 More on DataFrame

2.2.1 Basic operations

# Get a column
name_column = df['Name']

# Get a row
first_row = df.loc[0]

# Select multiple columns
subset = df[['Name', 'Age']]

# Filter rows
filtered_rows = df[df['Age'] > 30]
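These snippets assume a df with Name and Age columns; a self-contained sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Ann', 'Bob', 'Cid'],
                   'Age': [25, 32, 41],
                   'City': ['NY', 'LA', 'SF']})

name_column = df['Name']            # single column -> Series
first_row = df.loc[0]               # single row -> Series
subset = df[['Name', 'Age']]        # multiple columns -> DataFrame
filtered_rows = df[df['Age'] > 30]  # boolean filter -> DataFrame
print(filtered_rows['Name'].tolist())  # ['Bob', 'Cid']
```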

2.2.2 Attributes and methods

>>> df
      calories  duration
day1       420        50
day2       380        40
day3       390        45
>>> 
>>> # Get the column names
>>> columns = df.columns
>>> columns
Index(['calories', 'duration'], dtype='object')
>>> 
>>> # Get the shape (number of rows and columns)
>>> shape = df.shape
>>> shape
(3, 2)
>>> 
>>> # Get the index
>>> index = df.index
>>> index
Index(['day1', 'day2', 'day3'], dtype='object')
>>> 
>>> # Get descriptive statistics
>>> stats = df.describe()
>>> stats
         calories  duration
count    3.000000       3.0
mean   396.666667      45.0
std     20.816660       5.0
min    380.000000      40.0
25%    385.000000      42.5
50%    390.000000      45.0
75%    405.000000      47.5
max    420.000000      50.0
>>> 

2.2.3 Data manipulation

>>> df
      calories  duration
day1       420        50
day2       380        40
day3       390        45
>>> 
>>> # Add a new column
>>> df['Salary'] = [50000, 60000, 70000]
>>> df
      calories  duration  Salary
day1       420        50   50000
day2       380        40   60000
day3       390        45   70000
>>> 
>>> # Drop a column
>>> df.drop('duration', axis=1, inplace=True)
>>> df
      calories  Salary
day1       420   50000
day2       380   60000
day3       390   70000
>>> 
>>> # Sort
>>> df.sort_values(by='calories', ascending=False, inplace=True)
>>> df
      calories  Salary
day1       420   50000
day3       390   70000
day2       380   60000
>>> 
>>> # Rename a column. inplace=True modifies the original df; otherwise a modified copy is returned and the original is unchanged.
>>> df.rename(columns={'Salary': '工资'}, inplace=True)
>>> df
      calories     工资
day1       420  50000
day3       390  70000
day2       380  60000
>>> 
>>> 

2.2.4 Creating a DataFrame from external data sources

df_csv   = pd.read_csv('example.csv')    # create a DataFrame from a CSV file
df_excel = pd.read_excel('example.xlsx') # create a DataFrame from an Excel file

3. Pandas and CSV files

3.1 Basic usage

>>> df = pd.read_csv('nba.csv')
>>> df # or print(df); long files are printed truncated.
              Name            Team  Number  ... Weight            College     Salary
0    Avery Bradley  Boston Celtics     0.0  ...  180.0              Texas  7730337.0
1      Jae Crowder  Boston Celtics    99.0  ...  235.0          Marquette  6796117.0
2     John Holland  Boston Celtics    30.0  ...  205.0  Boston University        NaN
3      R.J. Hunter  Boston Celtics    28.0  ...  185.0      Georgia State  1148640.0
4    Jonas Jerebko  Boston Celtics     8.0  ...  231.0                NaN  5000000.0
..             ...             ...     ...  ...    ...                ...        ...
453   Shelvin Mack       Utah Jazz     8.0  ...  203.0             Butler  2433333.0
454      Raul Neto       Utah Jazz    25.0  ...  179.0                NaN   900000.0
455   Tibor Pleiss       Utah Jazz    21.0  ...  256.0                NaN  2900000.0
456    Jeff Withey       Utah Jazz    24.0  ...  231.0             Kansas   947276.0
457            NaN             NaN     NaN  ...    NaN                NaN        NaN

[458 rows x 9 columns]
>>> # to_string() renders the whole df as a string, so everything gets printed.
>>> print(df.to_string())
                         Name                    Team  Number Position   Age Height  Weight                College      Salary
0               Avery Bradley          Boston Celtics     0.0       PG  25.0    6-2   180.0                  Texas   7730337.0
1                 Jae Crowder          Boston Celtics    99.0       SF  25.0    6-6   235.0              Marquette   6796117.0
2                John Holland          Boston Celtics    30.0       SG  27.0    6-5   205.0      Boston University         NaN
3                 R.J. Hunter          Boston Celtics    28.0       SG  22.0    6-5   185.0          Georgia State   1148640.0
# (remaining rows omitted by hand; the output is long.)
>>>

Use the to_csv() method to save a DataFrame as a CSV file

>>> import pandas as pd
>>>
>>> # three fields: name, site, age
>>> nme = ["Google", "Runoob", "Taobao", "Wiki"]
>>> st = ["www.google.com", "www.runoob.com", "www.taobao.com", "www.wikipedia.org"]
>>> ag = [90, 40, 80, 98]
>>>
>>> # build a dict (avoid naming the variable `dict`, which shadows the built-in)
>>> data = {'name': nme, 'site': st, 'age': ag}
>>>
>>> df = pd.DataFrame(data)
>>>
>>> df.to_csv('site.csv')
>>>

The resulting site.csv contains:

,name,site,age
0,Google,www.google.com,90
1,Runoob,www.runoob.com,40
2,Taobao,www.taobao.com,80
3,Wiki,www.wikipedia.org,98
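The leading empty column header above is the row index, which to_csv writes by default; pass index=False to omit it (a small sketch, using a hypothetical output filename):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Google', 'Runoob'],
                   'site': ['www.google.com', 'www.runoob.com']})
df.to_csv('site_noindex.csv', index=False)  # no index column in the output
print(open('site_noindex.csv').read())
# name,site
# Google,www.google.com
# Runoob,www.runoob.com
```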

3.2 Data processing

head(n)
tail(n)
info()

>>> df = pd.read_csv('nba.csv')
>>> 
>>> # head(n) returns the first n rows; without n it defaults to 5 (or fewer if the data has fewer rows)
>>> df.head()
            Name            Team  Number  ... Weight            College     Salary
0  Avery Bradley  Boston Celtics     0.0  ...  180.0              Texas  7730337.0
1    Jae Crowder  Boston Celtics    99.0  ...  235.0          Marquette  6796117.0
2   John Holland  Boston Celtics    30.0  ...  205.0  Boston University        NaN
3    R.J. Hunter  Boston Celtics    28.0  ...  185.0      Georgia State  1148640.0
4  Jonas Jerebko  Boston Celtics     8.0  ...  231.0                NaN  5000000.0

[5 rows x 9 columns]
>>>
>>> # df.tail(n) returns the last n rows; the default is 5.
>>> 
>>> # info() returns basic information about the table
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 458 entries, 0 to 457
Data columns (total 9 columns):
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   Name      457 non-null    object
 1   Team      457 non-null    object
 2   Number    457 non-null    float64
 3   Position  457 non-null    object
 4   Age       457 non-null    float64
 5   Height    457 non-null    object
 6   Weight    457 non-null    float64
 7   College   373 non-null    object
 8   Salary    446 non-null    float64
dtypes: float64(4), object(5)
memory usage: 32.3+ KB
>>> 
>>> 

4. Pandas and JSON

4.1 Basic usage

>>> import pandas as pd
>>>
>>> df = pd.read_json('sites.json')
>>> df
     id    name             url  likes
0  A001    菜鸟教程  www.runoob.com     61
1  A002  Google  www.google.com    124
2  A003      淘宝  www.taobao.com     45
>>>

4.2 Nested JSON data

json_normalize()

The nested_list.json file mixes depths at the top level: school_name maps to a string, while students maps to a list of dicts.

{
    "school_name": "ABC primary school",
    "class": "Year 1",
    "students": [
    {
        "id": "A001",
        "name": "Tom",
        "math": 60,
        "physics": 66,
        "chemistry": 61
    },
    {
        "id": "A002",
        "name": "James",
        "math": 89,
        "physics": 76,
        "chemistry": 51
    },
    {
        "id": "A003",
        "name": "Jenny",
        "math": 79,
        "physics": 90,
        "chemistry": 78
    }]
}
>>> import pandas as pd
>>> import json
>>> 
>>> # Reading the uneven JSON directly leaves students only partially parsed; the column still holds nested dicts.
>>> df = pd.read_json('nested_list.json')
>>> df
          school_name   class                                           students
0  ABC primary school  Year 1  {'id': 'A001', 'name': 'Tom', 'math': 60, 'phy...
1  ABC primary school  Year 1  {'id': 'A002', 'name': 'James', 'math': 89, 'p...
2  ABC primary school  Year 1  {'id': 'A003', 'name': 'Jenny', 'math': 79, 'p...
>>>
>>> # Parse the students data separately
>>> with open('nested_list.json','r') as f:
...   data = json.loads(f.read())
...
>>> df_nested_list = pd.json_normalize(
...     data,
...     record_path =['students'],    # expand the nested student records
...     meta=['school_name', 'class'] # carry school_name and class along as shared columns
... )
>>> df_nested_list
     id   name  math  physics  chemistry         school_name   class
0  A001    Tom    60       66         61  ABC primary school  Year 1
1  A002  James    89       76         51  ABC primary school  Year 1
2  A003  Jenny    79       90         78  ABC primary school  Year 1
>>>

Parsing more deeply nested JSON data:
nested_mix.json

{
    "school_name": "local primary school",
    "class": "Year 1",
    "info": {
      "president": "John Kasich",
      "address": "ABC road, London, UK",
      "contacts": {
        "email": "[email protected]",
        "tel": "123456789"
      }
    },
    "students": [
    {
        "id": "A001",
        "name": "Tom",
        "math": 60,
        "physics": 66,
        "chemistry": 61
    },
    {
        "id": "A002",
        "name": "James",
        "math": 89,
        "physics": 76,
        "chemistry": 51
    },
    {
        "id": "A003",
        "name": "Jenny",
        "math": 79,
        "physics": 90,
        "chemistry": 78
    }]
}

How to parse it:

>>> import pandas as pd
>>> import json
>>> 
>>> with open('nested_mix.json', 'r') as fid:
...     data = json.loads(fid.read())
...
>>> df = pd.json_normalize(
...     data,
...     record_path = ['students'],
...     meta = ['class', ['info', 'president'], ['info', 'contacts', 'email']],
... )
>>> df
     id   name  math  physics  chemistry   class info.president info.contacts.email
0  A001    Tom    60       66         61  Year 1    John Kasich         [email protected]
1  A002  James    89       76         51  Year 1    John Kasich         [email protected]
2  A003  Jenny    79       90         78  Year 1    John Kasich         [email protected]
>>>

4.3 Reading one field out of nested data

nested_deep.json

{
    "school_name": "local primary school",
    "class": "Year 1",
    "students": [
    {
        "id": "A001",
        "name": "Tom",
        "grade": {
            "math": 60,
            "physics": 66,
            "chemistry": 61
        }
 
    },
    {
        "id": "A002",
        "name": "James",
        "grade": {
            "math": 89,
            "physics": 76,
            "chemistry": 51
        }
       
    },
    {
        "id": "A003",
        "name": "Jenny",
        "grade": {
            "math": 79,
            "physics": 90,
            "chemistry": 78
        }
    }]
}

The glom module makes it easy to reach into nested data.
Install it first: pip3 install glom

>>> import pandas as pd
>>> from glom import glom
>>>
>>> df = pd.read_json('nested_deep.json')
>>> 
>>> # read only the nested math field
>>> data = df['students'].apply(lambda row: glom(row, 'grade.math'))
>>> data
0    60
1    89
2    79
Name: students, dtype: int64
>>>
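glom is convenient but not strictly required here; plain dict access inside the lambda gives the same result. A sketch with the same data inlined instead of read from the file:

```python
import pandas as pd

students = [
    {'id': 'A001', 'name': 'Tom', 'grade': {'math': 60, 'physics': 66}},
    {'id': 'A002', 'name': 'James', 'grade': {'math': 89, 'physics': 76}},
]
df = pd.DataFrame({'students': students})
# equivalent to glom(row, 'grade.math')
data = df['students'].apply(lambda row: row['grade']['math'])
print(data.tolist())  # [60, 89]
```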

5. Pandas data cleaning

Data cleaning is the process of dealing with data that is unusable as-is:

5.1 Cleaning missing values

DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)

Parameters:
axis: 0 (default), drop the whole row when it contains a missing value;
1, drop the whole column instead.
how: 'any' (default), drop a row (or column) as soon as any value in it is NA;
'all', drop a row (or column) only when every value in it is NA.
thresh: keep only rows (or columns) with at least this many non-NA values.
subset: the columns to check; pass a list of column names to check several columns.
inplace: False (default), leave the original df unchanged and return the modified df;
True, overwrite the original data in place and return None.
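A small sketch of thresh (made-up data): a row survives only if it has at least that many non-NA values.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, np.nan, np.nan],
                   'b': [2, 3, np.nan],
                   'c': [4, np.nan, np.nan]})
# row 0 has 3 non-NA values, row 1 has 1, row 2 has 0
kept = df.dropna(thresh=2)  # keep rows with at least 2 non-NA values
print(kept.index.tolist())  # [0]
```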

Checking whether each cell in a column is empty

>>> import pandas as pd
>>> df = pd.read_csv('property-data.csv')
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>> df['NUM_BEDROOMS']
0      3
1      3
2    NaN
3      1
4      3
5    NaN
6      2
7      1
8     na  # <-- together with isnull() below, note that 'na' is NOT treated as a missing value.
Name: NUM_BEDROOMS, dtype: object
>>> df['NUM_BEDROOMS'].isnull()
0    False
1    False
2     True
3    False
4    False
5     True
6    False
7    False
8    False
Name: NUM_BEDROOMS, dtype: bool
>>>

You can specify which strings count as missing values

>>> import pandas as pd
>>>
>>>
>>> missing_values = ["n/a", "na", "--"]
>>> # use na_values to declare extra missing-value markers
>>> df = pd.read_csv('property-data.csv', na_values = missing_values)
>>> # the original n/a, na, and -- entries have all become NaN.
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED  NUM_BEDROOMS NUM_BATH   SQ_FT
0  100001000.0   104.0      PUTNAM            Y           3.0        1  1000.0
1  100002000.0   197.0   LEXINGTON            N           3.0      1.5     NaN
2  100003000.0     NaN   LEXINGTON            N           NaN        1   850.0
3  100004000.0   201.0    BERKELEY           12           1.0      NaN   700.0
4          NaN   203.0    BERKELEY            Y           3.0        2  1600.0
5  100006000.0   207.0    BERKELEY            Y           NaN        1   800.0
6  100007000.0     NaN  WASHINGTON          NaN           2.0   HURLEY   950.0
7  100008000.0   213.0     TREMONT            Y           1.0        1     NaN
8  100009000.0   215.0     TREMONT            Y           NaN        2  1800.0
>>> df['NUM_BEDROOMS']
0    3.0
1    3.0
2    NaN
3    1.0
4    3.0
5    NaN
6    2.0
7    1.0
8    NaN
Name: NUM_BEDROOMS, dtype: float64
>>> df['NUM_BEDROOMS'].isnull()
0    False
1    False
2     True
3    False
4    False
5     True
6    False
7    False
8     True
Name: NUM_BEDROOMS, dtype: bool
>>>

With the missing values marked, we can now clean them out:

>>> import pandas as pd
>>>
>>> df = pd.read_csv('property-data.csv')
>>>
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>>
>>> new_df = df.dropna()
>>> new_df # after cleaning, only the rows at idx 0, 1, and 8 remain.
           PID  ST_NUM    ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0     PUTNAM            Y            3        1  1000
1  100002000.0   197.0  LEXINGTON            N            3      1.5    --
8  100009000.0   215.0    TREMONT            Y           na        2  1800
>>>

In the operation above, a row is removed as soon as any of its columns is empty.
With the subset parameter you can name specific columns: a row is removed only when those columns are empty; missing values in other columns are ignored.

>>> import pandas as pd
>>> df = pd.read_csv('property-data.csv')
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>> df.dropna(subset=['ST_NUM'], inplace = True)
>>> df
           PID  ST_NUM    ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0     PUTNAM            Y            3        1  1000
1  100002000.0   197.0  LEXINGTON            N            3      1.5    --
3  100004000.0   201.0   BERKELEY           12            1      NaN   700
4          NaN   203.0   BERKELEY            Y            3        2  1600
5  100006000.0   207.0   BERKELEY            Y          NaN        1   800
7  100008000.0   213.0    TREMONT            Y            1        1   NaN
8  100009000.0   215.0    TREMONT            Y           na        2  1800
>>>

Use the fillna() method to replace missing fields:

>>> import pandas as pd
>>> df = pd.read_csv('property-data.csv')
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>> df.fillna(12345, inplace = True)
>>> df
           PID   ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH  SQ_FT
0  100001000.0    104.0      PUTNAM            Y            3        1   1000
1  100002000.0    197.0   LEXINGTON            N            3      1.5     --
2  100003000.0  12345.0   LEXINGTON            N        12345        1    850
3  100004000.0    201.0    BERKELEY           12            1    12345    700
4      12345.0    203.0    BERKELEY            Y            3        2   1600
5  100006000.0    207.0    BERKELEY            Y        12345        1    800
6  100007000.0  12345.0  WASHINGTON        12345            2   HURLEY    950
7  100008000.0    213.0     TREMONT            Y            1        1  12345
8  100009000.0    215.0     TREMONT            Y           na        2   1800
>>>

Use fillna() to replace missing fields in a specific column (note: the chained form df['PID'].fillna(..., inplace=True) is discouraged in recent pandas; df.fillna({'PID': 12345}) is the preferred spelling):

>>> import pandas as pd
>>> df = pd.read_csv('property-data.csv')
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>> df['PID'].fillna(12345, inplace = True)
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4      12345.0   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>>

Fill missing fields with the mean / median / mode (mean(), median(), mode()):

>>> import pandas as pd
>>> df = pd.read_csv('property-data.csv')
>>> x = df["ST_NUM"].mean()
>>>
>>> df
           PID  ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0   104.0      PUTNAM            Y            3        1  1000
1  100002000.0   197.0   LEXINGTON            N            3      1.5    --
2  100003000.0     NaN   LEXINGTON            N          NaN        1   850
3  100004000.0   201.0    BERKELEY           12            1      NaN   700
4          NaN   203.0    BERKELEY            Y            3        2  1600
5  100006000.0   207.0    BERKELEY            Y          NaN        1   800
6  100007000.0     NaN  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0   213.0     TREMONT            Y            1        1   NaN
8  100009000.0   215.0     TREMONT            Y           na        2  1800
>>> df["ST_NUM"].fillna(x, inplace = True)
>>> df
           PID      ST_NUM     ST_NAME OWN_OCCUPIED NUM_BEDROOMS NUM_BATH SQ_FT
0  100001000.0  104.000000      PUTNAM            Y            3        1  1000
1  100002000.0  197.000000   LEXINGTON            N            3      1.5    --
2  100003000.0  191.428571   LEXINGTON            N          NaN        1   850
3  100004000.0  201.000000    BERKELEY           12            1      NaN   700
4          NaN  203.000000    BERKELEY            Y            3        2  1600
5  100006000.0  207.000000    BERKELEY            Y          NaN        1   800
6  100007000.0  191.428571  WASHINGTON          NaN            2   HURLEY   950
7  100008000.0  213.000000     TREMONT            Y            1        1   NaN
8  100009000.0  215.000000     TREMONT            Y           na        2  1800
>>>

5.2 Cleaning badly formatted data

>>>
>>> import pandas as pd
>>> # the third date is in a different format
>>> data = {
...   "Date": ['2020/12/01', '2020/12/02' , '20201226'],
...   "duration": [50, 40, 45]
... }
>>> df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
>>> df
            Date  duration
day1  2020/12/01        50
day2  2020/12/02        40
day3    20201226        45
>>> df['Date'] = pd.to_datetime(df['Date']) # convert the oddly formatted date to a normal one
>>> df
           Date  duration
day1 2020-12-01        50
day2 2020-12-02        40
day3 2020-12-26        45
>>>
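When a date cannot be parsed at all, to_datetime raises by default; passing errors='coerce' turns unparseable entries into NaT instead. A small sketch with made-up values:

```python
import pandas as pd

s = pd.Series(['2020/12/01', 'not a date'])
parsed = pd.to_datetime(s, errors='coerce')  # bad entries become NaT, no exception
print(parsed.isna().tolist())  # [False, True]
```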

5.3 Cleaning wrong data

Fixing wrong data directly

>>> import pandas as pd
>>> person = {
...   "name": ['Google', 'Runoob' , 'Taobao'],
...   "age": [50, 40, 12345],
... }
>>> df = pd.DataFrame(person)
>>> df
     name    age
0  Google     50
1  Runoob     40
2  Taobao  12345          # age=12345 is wrong
>>> df.loc[2, 'age'] = 30 # overwrite the value directly.
>>> df
     name  age
0  Google   50
1  Runoob   40
2  Taobao   30
>>>

Fixing via a condition:

>>> import pandas as pd
>>> person = {
...   "name": ['Google', 'Runoob' , 'Taobao'],
...   "age": [50, 200, 12345],
... }
>>> df = pd.DataFrame(person)
>>> df
     name    age
0  Google     50
1  Runoob    200
2  Taobao  12345
>>>
>>> for x in df.index:
...   if df.loc[x, "age"] > 120:
...     df.loc[x, "age"] = 120
...
>>> df
     name  age
0  Google   50
1  Runoob  120
2  Taobao  120
>>>
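The loop above works, but boolean indexing (or clip) does the same thing in one vectorized step:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Google', 'Runoob', 'Taobao'],
                   'age': [50, 200, 12345]})
df.loc[df['age'] > 120, 'age'] = 120   # cap all out-of-range values at once
# equivalently: df['age'] = df['age'].clip(upper=120)
print(df['age'].tolist())  # [50, 120, 120]
```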

Deleting wrong data via a condition:

>>> import pandas as pd
>>> person = {
...   "name": ['Google', 'Runoob' , 'Taobao'],
...   "age": [50, 40, 12345],
... }
>>> df = pd.DataFrame(person)
>>> df
     name    age
0  Google     50
1  Runoob     40
2  Taobao  12345          # age=12345 is wrong
>>>
>>> for x in df.index:
...     if df.loc[x, "age"] > 120:
...         df.drop(x, inplace = True) # drop the bad row
...
>>> df
     name  age
0  Google   50
1  Runoob   40
>>>

5.4 Cleaning duplicate data

Two methods:
duplicated(): returns True for rows that are duplicates.
drop_duplicates(): removes duplicate rows.

>>> import pandas as pd
>>> person = {
...   "name": ['Google', 'Runoob', 'Runoob', 'Taobao'],
...   "age": [50, 40, 40, 23],
... }
>>> df = pd.DataFrame(person)
>>> df
     name  age
0  Google   50
1  Runoob   40
2  Runoob   40  # this row is a duplicate
3  Taobao   23
>>> df.duplicated()
0    False
1    False
2     True  # reported as True
3    False
dtype: bool
>>>
>>> df.drop_duplicates(inplace = True) # remove the duplicate row at index 2.
>>> df
     name  age
0  Google   50
1  Runoob   40
3  Taobao   23
>>>
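drop_duplicates also accepts subset (which columns define "duplicate") and keep ('first', 'last', or False to drop all copies). A sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Google', 'Runoob', 'Runoob'],
                   'age': [50, 40, 99]})
# rows 1 and 2 differ in age but share a name, so only name decides here
deduped = df.drop_duplicates(subset=['name'], keep='first')
print(deduped['age'].tolist())  # [50, 40]
```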

6. Common pandas functions

6.1 Reading data

Function - Description
pd.read_csv(file_name) - read a CSV file
pd.read_excel(file_name) - read an Excel file
pd.read_sql(query, connection_object) - read data from a SQL database
pd.read_json(json_string) - read data from a JSON string
pd.read_html(url) - read tables from an HTML page

import pandas as pd

df = pd.read_csv('data.csv')

df = pd.read_excel('data.xlsx')

import sqlite3
conn = sqlite3.connect('database.db')
df = pd.read_sql('SELECT * FROM table_name', conn)


import io
json_string = '{"name": ["John"], "age": [30], "city": ["New York"]}'
df = pd.read_json(io.StringIO(json_string)) # recent pandas expects a file-like object rather than a raw string

url = 'https://www.runoob.com'
dfs = pd.read_html(url)
df = dfs[0] # take the first table on the page

6.2 Inspecting data

Function - Description
df.head(n) - show the first n rows
df.tail(n) - show the last n rows
df.info() - show information about the data: column names, dtypes, missing values, etc.
df.describe() - show basic statistics: mean, variance, max, min, etc.
df.shape - show the number of rows and columns (an attribute, not a method)

6.3 Data cleaning

Function - Description
df.dropna() - drop rows or columns containing missing values
df.fillna(value) - replace missing values with the given value
df.replace(old_value, new_value) - replace a given value with a new one
df.duplicated() - flag duplicate rows
df.drop_duplicates() - drop duplicate rows

6.4 Data selection and slicing

Function - Description
df[column_name] - select the given column
df.loc[row_index, column_name] - select data by label
df.iloc[row_index, column_index] - select data by position
df.ix[row_index, column_name] - legacy label/position accessor (removed in pandas 1.0; use loc/iloc)
df.filter(items=[col_name1, col_name2]) - select the given columns
df.filter(regex='regex') - select columns whose names match a regular expression
df.sample(n) - select n random rows
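A quick sketch of filter and sample, which do not appear elsewhere in these notes (made-up data):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ann', 'Bob'],
                   'name_upper': ['ANN', 'BOB'],
                   'age': [25, 32]})
print(df.filter(items=['name', 'age']).columns.tolist())  # ['name', 'age']
print(df.filter(regex='^name').columns.tolist())          # ['name', 'name_upper']
print(len(df.sample(n=1)))                                # 1
```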

6.5 Sorting

Function - Description
df.sort_values(column_name) - sort by the given column
df.sort_values([col_name1, col_name2], ascending=[True, False]) - sort by multiple columns
df.sort_index() - sort by the index

6.6 Grouping and aggregation

Function - Description
df.groupby(column_name) - group by the given column
df.aggregate(function_name) - aggregate the grouped data
df.pivot_table(values, index, columns, aggfunc) - build a pivot table
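pivot_table is not demonstrated elsewhere in these notes; a minimal sketch with made-up sales data:

```python
import pandas as pd

df = pd.DataFrame({'city': ['NY', 'NY', 'LA', 'LA'],
                   'year': [2020, 2021, 2020, 2021],
                   'sales': [10, 20, 30, 40]})
# rows become cities, columns become years, cells hold summed sales
table = pd.pivot_table(df, values='sales', index='city',
                       columns='year', aggfunc='sum')
print(table.loc['LA', 2020])  # 30
```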

6.7 Merging

Function - Description
pd.concat([df1, df2]) - concatenate DataFrames along rows or columns
pd.merge(df1, df2, on=column_name) - join DataFrames on the given column
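A small merge sketch (made-up data); by default merge performs an inner join, keeping only keys present in both frames:

```python
import pandas as pd

left = pd.DataFrame({'id': [1, 2, 3], 'name': ['Ann', 'Bob', 'Cid']})
right = pd.DataFrame({'id': [2, 3, 4], 'score': [80, 90, 70]})
merged = pd.merge(left, right, on='id')  # inner join: ids 2 and 3 survive
print(merged['id'].tolist())  # [2, 3]
```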

6.8 Data selection and filtering

Function - Description
df.loc[row_indexer, column_indexer] - select rows or columns by label
df.iloc[row_indexer, column_indexer] - select rows or columns by position
df[df[column_name] > value] - select the rows where the column meets a condition
df.query('column_name > value') - select the rows meeting a condition via a string expression
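query and boolean indexing select the same rows; a quick sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'age': [25, 32, 41]})
print(df.query('age > 30')['age'].tolist())  # [32, 41]
print(df[df['age'] > 30]['age'].tolist())    # same rows via boolean indexing
```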

6.9 Statistics and description

Function - Description
df.describe() - compute basic statistics: mean, standard deviation, min, max, etc.
df.mean() - mean of each column
df.median() - median of each column
df.mode() - mode of each column
df.count() - number of non-missing values in each column

6.10 Example:

>>> import pandas as pd
>>> df = pd.read_json('data.json')
>>> df
      name   age  gender  score
0    Alice  25.0  female   80.0
1      Bob   NaN    male   90.0
2  Charlie  30.0    male    NaN
3    David  35.0    male   70.0
>>>
>>> # drop missing values
>>> df1 = df.dropna()
>>> df1
    name   age  gender  score
0  Alice  25.0  female   80.0
3  David  35.0    male   70.0
>>>
>>> # rename the columns
>>> df2 = df.rename(columns={'name': '姓名', 'age': '年龄', 'gender': '性别', 'score': '成绩'})
>>> df2
        姓名    年龄      性别    成绩
0    Alice  25.0  female  80.0
1      Bob   NaN    male  90.0
2  Charlie  30.0    male   NaN
3    David  35.0    male  70.0
>>>
>>> # sort by score
>>> df3 = df.sort_values(by='score', ascending=False)
>>> df3
      name   age  gender  score
1      Bob   NaN    male   90.0
0    Alice  25.0  female   80.0
3    David  35.0    male   70.0
2  Charlie  30.0    male    NaN
>>>
>>> # group by gender and compute the mean age and score
>>> df4 = df.groupby('gender').agg({'age': 'mean', 'score': 'mean'})
>>> df4
         age  score
gender
female  25.0   80.0
male    32.5   80.0
>>>
>>> # select the rows with score >= 80, keeping only the name and score columns
>>> df5 = df.loc[df['score'] >= 80, ['name', 'score']]
>>> df5
    name  score
0  Alice   80.0
1    Bob   90.0
>>>
>>> # basic statistics per column; only age and score are numeric, so only they appear.
>>> stats = df.describe()
>>> stats
        age  score
count   3.0    3.0
mean   30.0   80.0
std     5.0   10.0
min    25.0   70.0
25%    27.5   75.0
50%    30.0   80.0
75%    32.5   85.0
max    35.0   90.0
>>>
>>> # per-column mean; only the numeric age and score columns are averaged.
>>> # (recent pandas requires df.mean(numeric_only=True) when non-numeric columns are present)
>>> mean = df.mean() 
>>> type(mean)
<class 'pandas.core.series.Series'>
>>> mean
age      30.0
score    80.0
dtype: float64
>>>
>>> # per-column median (the same numeric_only caveat applies)
>>> median = df.median()
>>> median
age      30.0
score    80.0
dtype: float64
>>>
>>> # per-column mode. The result looks odd because shorter result columns are padded with NaN.
>>> mode = df.mode()
>>> type(mode)
<class 'pandas.core.frame.DataFrame'>
>>> mode
      name   age gender  score
0    Alice  25.0   male   70.0
1      Bob  30.0    NaN   80.0
2  Charlie  35.0    NaN   90.0
3    David   NaN    NaN    NaN
>>>
>>> # count of non-missing values per column
>>> count = df.count()
>>> type(count)
<class 'pandas.core.series.Series'>
>>> count
name      4
age       3
gender    4
score     3
dtype: int64
>>>

From: https://www.cnblogs.com/gaiqingfeng/p/18010158
