Getting started with the Scrapy architecture
A basic introduction to the Scrapy architecture
# Engine (ENGINE)
The engine controls the data flow between all the components of the system and triggers events when certain actions occur. See Scrapy's data-flow documentation for details.
# Scheduler (SCHEDULER)
Accepts requests sent over by the engine, pushes them into a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs: it decides which URL to crawl next and also removes duplicate URLs.
# Downloader (DOWNLOADER)
Downloads page content and returns it to the ENGINE. The downloader is built on Twisted, an efficient asynchronous model.
# Spiders (SPIDERS) ---> this is where you write your code
SPIDERS are classes written by the developer; they parse responses, extract items, or send new requests.
# Item pipelines (ITEM PIPELINES)
Process items after they have been extracted: mainly cleaning, validation, and persistence (e.g. saving to a database).
# Downloader middlewares
Sit between the Scrapy ENGINE and the DOWNLOADER. They process requests going from the ENGINE to the DOWNLOADER, as well as responses coming back from the DOWNLOADER to the ENGINE. With this middleware you can: set request headers, set cookies, use proxies, or integrate Selenium.
# Spider middlewares
Sit between the ENGINE and the SPIDERS. Their main job is to process the SPIDERS' input (responses) and output (requests).
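To relate these components to the files you actually edit, here is the layout that scrapy startproject generates (the spider file name matches the cnblogs example used later in this post):

myfirstscrapy/
    scrapy.cfg
    myfirstscrapy/
        settings.py        # project configuration (robots, middlewares, pipelines, ...)
        items.py           # Item classes consumed by the pipelines
        pipelines.py       # ITEM PIPELINES: cleaning, validation, persistence
        middlewares.py     # spider and downloader middlewares
        spiders/
            cnblogs.py     # SPIDERS: this is where you write your code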
Parsing data with Scrapy
1 The response object has a css() method and an xpath() method
  - css(): write CSS selectors
  - xpath(): write XPath selectors
2 Key point 1:
  - XPath, take the text content
    './/a[contains(@class,"link-title")]/text()'
  - XPath, take an attribute
    './/a[contains(@class,"link-title")]/@href'
  - CSS, take the text
    'a.link-title::text'
  - CSS, take an attribute
    'img.image-scale::attr(src)'
3 Key point 2:
  .extract_first()  take the first match
  .extract()        take all matches
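A quick standalone sketch of the selector API above (the HTML snippet is made up for illustration; inside a spider you call response.css()/response.xpath() directly):

from scrapy.selector import Selector

html = '<div><a class="link-title" href="/post/1">Hello scrapy</a></div>'
sel = Selector(text=html)
print(sel.css('a.link-title::text').extract_first())                     # Hello scrapy
print(sel.xpath('.//a[contains(@class,"link-title")]/@href').extract())  # ['/post/1']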
import scrapy


class CnblogsSpider(scrapy.Spider):
    name = 'cnblogs'
    allowed_domains = ['www.cnblogs.com']
    start_urls = ['http://www.cnblogs.com/']

    def parse(self, response):
        # response is similar to the response object of the requests module
        # print(response.text)
        # Parse the returned data
        # Option 1: use bs4; once we have scrapy there is no need for this approach
        # soup = BeautifulSoup(response.text, 'lxml')
        # article_list = soup.find_all(class_='post-item')
        # for article in article_list:
        #     title_name = article.find(name='a', class_='post-item-title').text
        #     print(title_name)
        # Option 2: scrapy's built-in parsing (css, xpath)
        # CSS parsing
        # article_list = response.css('article.post-item')
        # for article in article_list:
        #     title_name = article.css('section>div>a::text').extract_first()
        #     author_img = article.css('p.post-item-summary>a>img::attr(src)').extract_first()
        #     desc_list = article.css('p.post-item-summary::text').extract()
        #     desc = desc_list[0].replace('\n', '').replace(' ', '')
        #     if not desc:
        #         desc = desc_list[1].replace('\n', '').replace(' ', '')
        #     author_name = article.css('section>footer>a>span::text').extract_first()
        #     article_date = article.css('section>footer>span>span::text').extract_first()
        #     print(f"""
        #     Title:   {title_name}
        #     Avatar:  {author_img}
        #     Summary: {desc}
        #     Author:  {author_name}
        #     Date:    {article_date}
        #     """)
        # XPath parsing
        data_list = []
        article_list = response.xpath('//article[contains(@class,"post-item")]')
        for article in article_list:
            title_name = article.xpath('./section/div/a/text()').extract_first()
            author_img = article.xpath('./section/div/p//img/@src').extract_first()
            desc_list = article.xpath('./section/div/p/text()').extract()
            desc = desc_list[0].replace('\n', '').replace(' ', '')
            if not desc:
                desc = desc_list[1].replace('\n', '').replace(' ', '')
            author_name = article.xpath('./section/footer/a/span/text()').extract_first()
            article_date = article.xpath('./section/footer/span/span/text()').extract_first()
            print(f"""
            Title:   {title_name}
            Avatar:  {author_img}
            Summary: {desc}
            Author:  {author_name}
            Date:    {article_date}
            """)
            data_list.append({'title_name': title_name, 'author_img': author_img,
                              'desc': desc, 'author_name': author_name,
                              'article_date': article_date})
        return data_list
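Not part of the original file, but a handy sketch: a run.py at the project root lets you start this spider from an IDE instead of typing the command each time (the file name is my choice):

from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'cnblogs'])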
Settings-related configuration
# 1 Whether to obey the robots.txt protocol
ROBOTSTXT_OBEY = False
# 2 LOG_LEVEL: log level
LOG_LEVEL = 'ERROR'  # only log errors; otherwise the console output drowns them out
# 3 USER_AGENT
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
# 4 DEFAULT_REQUEST_HEADERS: default request headers
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}
# 5 SPIDER_MIDDLEWARES: spider middlewares
#SPIDER_MIDDLEWARES = {
#    'cnblogs.middlewares.CnblogsSpiderMiddleware': 543,
#}
# 6 DOWNLOADER_MIDDLEWARES: downloader middlewares
#DOWNLOADER_MIDDLEWARES = {
#    'cnblogs.middlewares.CnblogsDownloaderMiddleware': 543,
#}
# 7 ITEM_PIPELINES: persistence configuration
#ITEM_PIPELINES = {
#    'cnblogs.pipelines.CnblogsPipeline': 300,
#}
# 8 Project (bot) name
BOT_NAME = 'myfirstscrapy'
# 9 Where the spider class .py files live
SPIDER_MODULES = ['myfirstscrapy.spiders']
NEWSPIDER_MODULE = 'myfirstscrapy.spiders'
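A sketch (not from the original settings.py) showing that most of these options can also be overridden per spider through the custom_settings class attribute:

import scrapy

class CnblogsSpider(scrapy.Spider):
    name = 'cnblogs'
    # these values take precedence over settings.py for this spider only
    custom_settings = {
        'ROBOTSTXT_OBEY': False,
        'LOG_LEVEL': 'ERROR',
        'USER_AGENT': 'Mozilla/5.0 ...',
    }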
Improving crawl efficiency
# 1 Increase concurrency (default 16)
By default Scrapy runs 16 concurrent requests; this can be raised as needed. In the settings file:
CONCURRENT_REQUESTS = 100
sets the concurrency to 100.
# 2 Lower the log level:
Running scrapy produces a large amount of log output. To reduce CPU usage, set the log level to INFO or ERROR. In the settings file:
LOG_LEVEL = 'INFO'
# 3 Disable cookies:
If you don't actually need cookies, disable them while crawling to reduce CPU usage and speed things up. In the settings file:
COOKIES_ENABLED = False
# 4 Disable retries:
Re-requesting failed HTTP requests (retries) slows crawling down, so retries can be disabled. In the settings file:
RETRY_ENABLED = False
# 5 Reduce the download timeout:
When crawling a very slow link, a smaller download timeout lets the stuck request be abandoned quickly, which improves efficiency. In the settings file:
DOWNLOAD_TIMEOUT = 10  # timeout of 10 s
Persisting the scraped data
# Save to disk ---> persistence
# Two approaches; the second is the one normally used
  - Approach 1 (just know it exists):
    - the parse callback must return a list of dicts: [{}, {}, {}]
    - scrapy crawl cnblogs -o filename (ending in .json, .pickle or .csv)
  - Approach 2: use pipelines (the usual way; pipeline style, can save to several destinations at the same time)
    - 1 In items.py write a class (much like a Django model) that inherits from scrapy.Item
    - 2 Declare the fields on the class; every field is of type scrapy.Field (see the items.py sketch after this list)
        title = scrapy.Field()
    - 3 In the spider, import the class, instantiate it, and put the data to be saved into the object
        item['title'] = title   (use item['...'], not attribute access)
        then yield item in the parse callback
    - 4 Edit the settings file and register the pipeline; the number is the priority, and a lower number runs earlier
        ITEM_PIPELINES = {
            'crawl_cnblogs.pipelines.CrawlCnblogsPipeline': 300,
        }
    - 5 Write the pipeline class CrawlCnblogsPipeline
        - open_spider: initialisation, open files, open database connections
        - process_item: where the actual saving happens
          (never forget to return item, so later pipelines can keep processing it)
        - close_spider: release resources, close files, close database connections
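A sketch of items.py, with the field set matching what the cnblogs spider and the MySQL pipeline below actually use:

import scrapy

class CnblogsItem(scrapy.Item):
    title_name = scrapy.Field()
    author_img = scrapy.Field()
    desc = scrapy.Field()
    author_name = scrapy.Field()
    article_date = scrapy.Field()
    url = scrapy.Field()
    article_content = scrapy.Field()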
Crawling a site's articles (cnblogs)
pipelines.py:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import pymysql


class CnblogsMysqlPipeline:
    def open_spider(self, spider):
        # open the database connection once, when the spider starts
        self.conn = pymysql.connect(
            user='',
            password='',
            host='',
            database='',
            port=3306,
            autocommit=True
        )
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        print(item)
        self.cursor.execute(
            'insert into cnblogs (title_name,author_img,`desc`,author_name,article_date,url,article_content) values (%s,%s,%s,%s,%s,%s,%s)',
            args=[item['title_name'], item['author_img'], item['desc'], item['author_name'],
                  item['article_date'], item['url'], item['article_content']])
        # self.conn.commit()  # not needed here because autocommit=True
        return item

    def close_spider(self, spider):
        # print(spider.start_urls)
        print('spider closed')
        self.cursor.close()
        self.conn.close()


# class CnblogsFilesPipeline:
#     def open_spider(self, spider):
#         # print(spider.name)
#         print('spider started')
#         self.f = open('cnblogs.txt', 'at', encoding='utf-8')
#
#     def process_item(self, item, spider):
#         # with open('cnblogs.txt', 'at', encoding='utf-8') as f:
#         #     f.write('title: %s, author: %s\n' % (item['title_name'], item['author_name']))
#         #     return item
#         self.f.write('title: %s, author: %s\n' % (item['title_name'], item['author_name']))
#         print(item)
#         return item
#
#     def close_spider(self, spider):
#         print('spider closed')
#         self.f.close()
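The pipeline(s) still have to be registered in settings.py; the lower the number, the earlier the pipeline runs. The module path below assumes the project name myfirstscrapy used earlier in this post:

ITEM_PIPELINES = {
    'myfirstscrapy.pipelines.CnblogsMysqlPipeline': 300,
    # 'myfirstscrapy.pipelines.CnblogsFilesPipeline': 400,
}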
Crawling the next page and the article detail page
# Once the first page has been crawled, the data from it has already been saved
# Next there are two things to do:
  1 Keep crawling: parse out the next page's address and wrap it in a Request object
  2 Crawl the detail page: parse out the detail page's address and wrap it in a Request object
# We can no longer save the item straight away, because the data is incomplete: the article body is missing. Add the article body first, then save everything in one go
# Creating the Request: in parse, inside the for loop, create the Request object and pass the item along in meta
    yield Request(url=url, callback=self.detail_parse, meta={'item': item})
# The Response object: in detail_parse, take the item back out of response.meta, write the article body into it, then
    yield item
import scrapy
from bs4 import BeautifulSoup
from ..items import CnblogsItem
from scrapy import Request


class CnblogsSpider(scrapy.Spider):
    name = 'cnblogs'
    allowed_domains = ['www.cnblogs.com']
    start_urls = ['http://www.cnblogs.com/']

    def parse(self, response):
        article_list = response.xpath('//article[contains(@class,"post-item")]')
        for article in article_list:
            item = CnblogsItem()
            title_name = article.xpath('./section/div/a/text()').extract_first()
            author_img = article.xpath('./section/div/p//img/@src').extract_first()
            desc_list = article.xpath('./section/div/p/text()').extract()
            desc = desc_list[0].replace('\n', '').replace(' ', '')
            if not desc:
                desc = desc_list[1].replace('\n', '').replace(' ', '')
            author_name = article.xpath('./section/footer/a/span/text()').extract_first()
            article_date = article.xpath('./section/footer/span/span/text()').extract_first()
            url = article.xpath('./section/div/a/@href').extract_first()
            item['title_name'] = title_name
            item['author_img'] = author_img
            item['desc'] = desc
            item['author_name'] = author_name
            item['article_date'] = article_date
            item['url'] = url
            # crawl the detail page; carry the half-filled item along in meta
            yield Request(url=url, callback=self.detail_parse, meta={'item': item})
        # crawl the next page
        next_url = 'https://www.cnblogs.com' + response.css('div.pager>a:last-child::attr(href)').extract_first()
        print(next_url)
        yield Request(url=next_url, callback=self.parse)

    def detail_parse(self, response):
        item = response.meta.get('item')
        article_content = response.css('div.post').extract_first()
        item['article_content'] = str(article_content)
        yield item
Spider and downloader middlewares
# All of scrapy's middlewares live in middlewares.py; much like in Django, they intercept things on the way through
# Spider middleware (rarely used; just know it exists)
MyfirstscrapySpiderMiddleware
    def process_spider_input(self, response, spider):                 # runs when a response enters the spider
    def process_spider_output(self, response, result, spider):        # runs when results leave the spider
    def process_spider_exception(self, response, exception, spider):  # runs when an exception is raised
    def process_start_requests(self, start_requests, spider):         # runs for the initial requests
    def spider_opened(self, spider):                                   # runs when the spider opens
# Downloader middleware
MyfirstscrapyDownloaderMiddleware
    def process_request(self, request, spider):               # runs as a request passes from the engine to the downloader
    def process_response(self, request, response, spider):    # runs as a response passes from the downloader to the engine
    def process_exception(self, request, exception, spider):  # runs when an exception occurs
    def spider_opened(self, spider):                           # runs when the spider opens
# The important ones: process_request, process_response
# Downloader middleware: process_request
  - return values:
    - return None: continue with the remaining middlewares' process_request and, eventually, the downloader
    - return a Response object: the request never reaches the downloader; the response goes back to the engine, which hands it to the spider
    - return a Request object: the remaining middlewares are skipped; the request goes back to the engine, which puts it into the scheduler
    - raise IgnoreRequest: the process_exception() methods of the installed middlewares are called
# Downloader middleware: process_response
  - return values:
    - return a Response object: the normal case; it goes to the engine, which hands it to the spider
    - return a Request object: it goes to the engine, which puts it into the scheduler to be crawled later
    - raise IgnoreRequest: the request's errback is called
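A minimal sketch (not the full generated class) of the two key downloader-middleware hooks, e.g. setting a request header or a proxy; the proxy address is a placeholder:

class MyfirstscrapyDownloaderMiddleware:
    def process_request(self, request, spider):
        request.headers['User-Agent'] = 'Mozilla/5.0 ...'    # set a request header
        # request.meta['proxy'] = 'http://127.0.0.1:7890'    # route through a proxy (placeholder address)
        return None                                          # keep going towards the downloader

    def process_response(self, request, response, spider):
        return response                                      # hand the response back to the engine unchanged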