Day 22 22.1.2: Incremental Crawler - Implementing Scenario 2


Implementing Scenario 2:

Data fingerprint

  • Use the detail page's URL as the data fingerprint (see the sketch below).
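
A minimal sketch of the fingerprint mechanism, assuming a local Redis server on the default port (the URL here is a made-up example): Redis SADD returns 1 when a member is new to the set and 0 when it already exists, which is exactly the signal the spider below keys off.

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
url = 'https://sc.chinaz.com/jianli/demo.html'  # hypothetical detail-page URL
print(conn.sadd('data_id', url))  # 1 -> first time seen
print(conn.sadd('data_id', url))  # 0 -> already fingerprinted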

Create the spider file:

  • cd project_name (enter the project directory)
  • scrapy genspider <spider file name (any name you like)> <start url>
    • (for example: scrapy genspider first www.xxx.com)
  • On success, a .py spider file is generated under the spiders folder; the concrete commands for this project are shown below
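
For this project the commands would look like the following, assuming the project directory matches the BOT_NAME zlsDemo2Pro from the settings below and the spider is named jianli:

cd zlsDemo2Pro
scrapy genspider jianli sc.chinaz.com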

Enter the spider file:

  • cd <spider file name> (i.e., the name you chose)

Spider file (jianli.py)

import random

import redis
import scrapy

from ..items import Zlsdemo2ProItem


class JianliSpider(scrapy.Spider):
    name = "jianli"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://sc.chinaz.com/jianli/free.html"]

    # Connect to the Redis database that holds the fingerprint set
    conn = redis.Redis(
        host='127.0.0.1',
        port=6379,
    )

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            title = div.xpath('./p/a/text()').extract_first()
            # urljoin normalizes relative or protocol-relative hrefs to absolute URLs
            detail_url = response.urljoin(div.xpath('./p/a/@href').extract_first())
            # SADD returns 1 if the URL is new to the set, 0 if it was already recorded
            ex = self.conn.sadd('data_id', detail_url)
            # Build the item object
            item = Zlsdemo2ProItem()
            item['title'] = title  # hand the title to the item
            if ex == 1:  # the URL was not yet stored in the database
                print('New data found, crawling......')
                # Request the detail page, passing the item to parse_detail via meta
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:  # the URL is already stored in the database
                print('No new data to collect!')

    def parse_detail(self, response):
        item = response.meta['item']
        download_li = response.xpath('//*[@id="down"]/div[2]/ul/li')
        download_urls = []
        for li in download_li:
            download_url_e = li.xpath('./a/@href').extract_first()
            download_urls.append(download_url_e)
        # The page offers several mirror links; pick one at random
        download_url = random.choice(download_urls)
        item['download_url'] = download_url

        yield item
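
After a run you can inspect what was fingerprinted; a small sketch reusing the same connection parameters (data_id is the set key written in parse above):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
print(conn.scard('data_id'))            # how many detail-page URLs are recorded
for url in conn.smembers('data_id'):    # members come back as bytes
    print(url.decode('utf-8'))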

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class Zlsdemo2ProItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    download_url = scrapy.Field()
    

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


import json

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class Zlsdemo2ProPipeline:
    def process_item(self, item, spider):
        # Reuse the Redis connection created on the spider class
        conn = spider.conn
        # Redis cannot store a scrapy Item directly, so serialize it to JSON first
        conn.lpush('data1', json.dumps(ItemAdapter(item).asdict()))
        return item
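
To read the stored records back out, a sketch assuming the JSON serialization used in process_item above (data1 is the list key):

import json

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
# lrange(key, 0, -1) returns the whole list; each entry is a JSON string pushed by the pipeline
for raw in conn.lrange('data1', 0, -1):
    record = json.loads(raw)
    print(record['title'], record['download_url'])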

settings.py

# Scrapy settings for zlsDemo2Pro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "zlsDemo2Pro"

SPIDER_MODULES = ["zlsDemo2Pro.spiders"]
NEWSPIDER_MODULE = "zlsDemo2Pro.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'WARNING'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "zlsDemo2Pro.middlewares.Zlsdemo2ProSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "zlsDemo2Pro.middlewares.Zlsdemo2ProDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   "zlsDemo2Pro.pipelines.Zlsdemo2ProPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
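
With everything in place, start the crawl from the project root (jianli is the spider name defined above); on a repeat run, only detail pages whose URLs are not yet in the data_id set get requested:

scrapy crawl jianli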

From: https://www.cnblogs.com/dream-ze/p/17144805.html
