1. Topic Background
With the growth of the Internet, video sites with danmaku (bullet comments), such as Bilibili and YouTube, have become increasingly popular. Danmaku information is shared and circulated among users through videos, which gives danmaku the character of a propagating medium. Danmaku also carry users' subjective feelings: users embed emotionally colored wording in the text, so danmaku express subjective preferences, appreciation, impressions, and other sentiments. As danmaku propagate, their popularity may explode at a particular point in time or after a particular user joins the discussion. Analyzing danmaku data is therefore of great value to both video creators and video platforms.
2. Design of the Topic-Focused Web Crawler
- This crawler targets Bilibili danmaku data.
- It crawls the danmaku text and posting times from Bilibili, runs sentiment polarity analysis on the text, aggregates the posting dates, and summarizes the resulting patterns as a reference for uploaders (Up主) and platform operations staff.
- Design plan:
First, analyze the structure of Bilibili's web pages to find where the danmaku data is located.
Then persist the crawled data and run the various analyses on it.
3. Structural Analysis of the Target Pages
Although danmaku appear overlaid on the video, in the web page they are actually hidden in the source and loaded as XML. The URL of a danmaku document has the form https://comment.bilibili.com/cid.xml, i.e., a fixed prefix, followed by the video's cid, followed by .xml. Once you have the cid of the video you want, substituting it into this URL lets you crawl all of its danmaku; the cid itself can be found easily by searching for "cid" in the page source. The retrieved danmaku XML file looks like this:
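A representative fragment of such a file (the element values below are invented for illustration; real files also contain a few metadata elements inside <i>):

<?xml version="1.0" encoding="UTF-8"?>
<i>
    <d p="12.826,1,25,16777215,1585552000,0,abc123de,29385672893456384">这一段太好笑了</d>
    <d p="55.302,1,25,16777215,1585556789,0,f9e8d7c6,29385680123456789">前方高能</d>
</i>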
The fields have the following meanings:
stime: time the danmaku appears in the video (s)
mode: danmaku type (< 7 means a normal danmaku)
size: font size
color: text color
date: posting timestamp
pool: danmaku pool ID
author: sender ID
dbid: database record ID (monotonically increasing)
A regular expression is used to filter the key information out of the XML file, which completes the data-acquisition step.
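As a quick illustration of that extraction, each <d> element yields a (p attribute, text) pair (the sample line is invented):

import re

sample = '<d p="12.826,1,25,16777215,1585552000,0,abc123de,29385672893456384">前方高能</d>'
print(re.findall(r'<d p="(.*?)">(.*?)</d>', sample))
# [('12.826,1,25,16777215,1585552000,0,abc123de,29385672893456384', '前方高能')]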
4. Web Crawler Program Design
- Data crawling
Import the requests library (plus re for the regular expressions) and use requests.get to access the danmaku URL:
import re
import requests
To avoid being blocked by anti-crawling measures, add request headers (origin, referer, and a browser user-agent):
headers = {
    'origin': 'https://www.bilibili.com',
    'referer': 'https://www.bilibili.com/video/BV19E41197Kc',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
}
Then send the request.
Combined with the parsing approach described in Section 3, this yields all the danmaku for the video.
Save the result as a CSV file.
The code is as follows:
def get_data():
    # Request headers to avoid being blocked by anti-crawling measures
    headers = {
        'origin': 'https://www.bilibili.com',
        'referer': 'https://www.bilibili.com/video/BV19E41197Kc',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
    }
    # Prompt for a Bilibili video link
    url = input('请输入B站视频链接: ')
    # Find the video's cid in the page source
    res = requests.get(url)
    cid = re.findall(r'"cid":(.*?),', res.text)[-1]
    # Build the danmaku XML URL from the cid
    url = f'https://comment.bilibili.com/{cid}.xml'
    res = requests.get(url, headers=headers)
    xml_content = res.content.decode('utf-8')
    # Each danmaku is a <d> element; group 1 is the p attribute, group 2 the text
    re_pattern = '<d p="(.*?)">(.*?)</d>'
    comments = re.findall(re_pattern, xml_content)
    danmus = []
    for item in comments:
        danmus.append(','.join(item))
    # Column headers (the ' ' column is a placeholder for an extra field in the p attribute)
    col_headers = ['stime', 'mode', 'size', 'color', 'date', 'pool', 'author', 'dbid', ' ', 'text']
    danmus.insert(0, ','.join(col_headers))
    # Save the danmaku data as danmus.csv
    with open('danmus.csv', 'w', encoding='utf_8_sig') as f:
        data = []
        for line in danmus:
            data.append(line + '\n')
        f.writelines(data)
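A caveat: because the danmaku text itself may contain commas, hand-joining fields with ',' can misalign columns (which is exactly why the cleaning step below skips malformed rows). A more robust variant of the saving step, offered here only as a sketch and not part of the original program, would use Python's csv module for proper quoting:

import csv

# Sketch: write the header plus one properly quoted row per danmaku
with open('danmus.csv', 'w', encoding='utf_8_sig', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['stime', 'mode', 'size', 'color', 'date', 'pool', 'author', 'dbid', ' ', 'text'])
    for p_attr, text in comments:  # comments as produced by re.findall above
        writer.writerow(p_attr.split(',') + [text])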
- Data cleaning
Even after the regex filtering, some rows do not match the unified format, so malformed lines are simply skipped when the data is read in, which completes the cleaning step. The code:
df = pd.read_csv('danmus.csv', error_bad_lines=False)
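Note: error_bad_lines was deprecated in pandas 1.3 and later removed; on recent pandas versions the equivalent call is:

df = pd.read_csv('danmus.csv', on_bad_lines='skip')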
- Data visualization
(1) Danmaku word cloud
The implementation code is as follows:
def word_cloud_main():
    # Read the danmaku text (last CSV column)
    with open('danmus.csv', encoding='utf-8') as f:
        lst = []
        for line in f.readlines():
            tmp = line.split(',')[-1]
            lst.append(tmp)
    text = " ".join(lst)
    # Tokenize with jieba and count words of length >= 2
    words = jieba.cut(text)
    _dict = {}
    for word in words:
        if len(word) >= 2:
            _dict[word] = _dict.get(word, 0) + 1
    items = list(_dict.items())
    items.sort(key=lambda x: x[1], reverse=True)
    # Configure fonts so Chinese characters render correctly
    plt.rcParams['font.family'] = ['sans-serif']
    plt.rcParams['font.size'] = '8'
    plt.rcParams['font.sans-serif'] = ['SimHei']
    print(items)  # print word frequencies, highest first
    w = wordcloud.WordCloud(
        width=1000, height=700,
        background_color="white",
        font_path="msyh.ttc",
        max_words=30,
    )
    # Generate the word cloud from the frequency dict
    w.generate_from_frequencies(_dict)
    # Save the word cloud image
    w.to_file("wordcloud.png")
    # Display the word cloud
    plt.imshow(w)
    plt.axis("off")
    plt.show()
Run result:
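Note that font_path="msyh.ttc" points at the Microsoft YaHei font file; the call assumes that file is reachable (e.g., in the working directory or a system font folder). On systems without it, any font file covering Chinese characters can be substituted, otherwise the Chinese words render as boxes.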
(2) Sentiment polarity analysis and polarity pie chart
The code is as follows:
def ans_emotion():
    # Read the danmaku text (last column), skipping the header row
    with open('danmus.csv', encoding='utf-8') as f:
        text = []
        for line in f.readlines()[1:]:
            text.append(line.split(',')[-1])
    emotions = {
        'positive': 0,
        'negative': 0,
        'neutral': 0
    }
    # Score each danmaku once; > 0.6 counts as positive, < 0.4 as negative
    for item in text:
        score = SnowNLP(item).sentiments
        if score > 0.6:
            emotions['positive'] += 1
        elif score < 0.4:
            emotions['negative'] += 1
        else:
            emotions['neutral'] += 1
    # Configure fonts so Chinese characters render correctly
    plt.rcParams['font.family'] = ['sans-serif']
    plt.rcParams['font.size'] = '14'
    plt.rcParams['font.sans-serif'] = ['SimHei']
    plt.pie(emotions.values(),
            labels=emotions.keys(),  # pie slice labels
            )
    plt.title("弹幕情感极性分析饼图")  # chart title
    plt.savefig('弹幕情感极性分析饼图.png')
    plt.show()
Run result:
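SnowNLP's sentiments value is a probability in [0, 1] that the text is positive, so the 0.6 / 0.4 thresholds above leave a band in the middle that is treated as neutral. A quick illustration (the sentences are invented; exact scores depend on SnowNLP's bundled model):

from snownlp import SnowNLP

print(SnowNLP('这个视频太棒了').sentiments)    # expected near 1 (positive)
print(SnowNLP('太难看了,浪费时间').sentiments)  # expected near 0 (negative)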
(3) Danmaku count trend chart
The code is as follows:
def line_chart():
    warnings.filterwarnings("ignore")
    col_lst = ['stime', 'mode', 'size', 'color', 'date', 'pool', 'author', 'dbid', ' ', 'text']
    # Read the CSV, skipping malformed rows
    df = pd.read_csv('danmus.csv', error_bad_lines=False)
    df.columns = col_lst
    # Convert Unix timestamps to 'YYYY-MM-DD' strings
    date_stamp = df['date']
    res_date = []
    for date in date_stamp:
        date = time.localtime(date)
        str_date = time.strftime('%Y-%m-%d', date)
        res_date.append(str_date)
    # Count danmaku per day
    res_date = pd.Series(res_date)
    date_count = res_date.value_counts()
    date_lst = list(date_count.index)
    count_lst = list(date_count.values)
    # Sort the (date, count) pairs chronologically
    count_dict = {}
    for i in range(len(date_lst)):
        count_dict[date_lst[i]] = count_lst[i]
    sorted_count = sorted(count_dict.items(), key=lambda x: x[0])
    x = [item[0] for item in sorted_count]  # dates
    y = [item[1] for item in sorted_count]  # counts per day
    print(x)
    print(y)
    import matplotlib.pyplot as plt
    fig1, ax = plt.subplots(figsize=(14, 9))
    ax.plot(x, y)
    # Label every 20th date on the x-axis, and include the last date as well
    xticks = list(range(0, len(x), 20))
    xlabels = [x[i] for i in xticks]
    xticks.append(len(x) - 1)
    xlabels.append(x[-1])
    ax.set_xticks(xticks)
    ax.set_xticklabels(xlabels, rotation=80)
    # Configure fonts so Chinese characters render correctly
    plt.rcParams['font.family'] = ['sans-serif']
    plt.rcParams['font.size'] = '8'
    plt.rcParams['font.sans-serif'] = ['SimHei']
    # Major y-axis ticks every 10
    ax.yaxis.set_major_locator(MultipleLocator(10))
    plt.title("弹幕数目——日期")
    plt.show()
Run result:
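As an aside, the timestamp conversion, per-day counting, and chronological sorting above can be collapsed into a few pandas calls. A sketch, assuming the same df (note that pd.to_datetime interprets the stamps as UTC, whereas time.localtime uses the local timezone, so day boundaries can differ slightly):

# Sketch: count danmaku per day and plot, all in pandas
daily = (pd.to_datetime(df['date'], unit='s')
           .dt.strftime('%Y-%m-%d')
           .value_counts()
           .sort_index())
daily.plot(figsize=(14, 9), title='弹幕数目——日期')
plt.show()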
5. Conclusion
- For a randomly chosen anime series, the vast majority of danmaku were positive or neutral, and the distribution of danmaku counts over time was close to a normal distribution.
- Danmaku distributions differ from video to video, so these observations must not be over-generalized; each video needs to be analyzed on its own.
Finally, all of the functions above are combined into a single script; the module-level imports and the program entry point are:

import re
import requests
import jieba
import wordcloud
from snownlp import SnowNLP
import matplotlib.pyplot as plt
import pandas as pd
import warnings
import time
from matplotlib.ticker import MultipleLocator


if __name__ == '__main__':
    # Example link: https://www.bilibili.com/bangumi/play/ep17617
    get_data()         # crawl the danmaku and save danmus.csv
    word_cloud_main()  # generate the word cloud
    ans_emotion()      # sentiment polarity pie chart
    line_chart()       # daily danmaku count trend chart