
Scrapy Basics


1. Create a project
    scrapy startproject <project_name>
2. Enter the project directory
    cd <project_name>
3. Create a spider
    scrapy genspider <name> <domain>
4. If needed, change start_urls to the page you actually want to scrape.
    Inside the spiders directory you will find a Python file named after the spider you created.
    For example:    scrapy genspider xiao www.4399.com
    creates xiao.py under spiders.
    start_urls in that file holds the starting page URL(s).
5. Parse the data in the spider's parse(response) method (a minimal Selector sketch follows this step)
     def parse(self, response):  # this method handles parsing by default
        response.text   # the page source
        response.xpath()
        response.css()

        Note: xpath() returns Selector objects by default;
        to get the actual data you must call extract().
        extract() returns a list.
        extract_first() returns a single value.

        yield the data -> this hands it to the pipeline for persistent storage
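
    A minimal, self-contained sketch of extract() vs. extract_first(), using
    Scrapy's Selector on an inline HTML string (the html snippet here is made
    up for illustration):

    from scrapy.selector import Selector

    html = "<ul><li><a>Game A</a></li><li><a>Game B</a></li><li></li></ul>"
    sel = Selector(text=html)

    # xpath()/css() return a SelectorList; extract() unwraps it to strings
    names = sel.xpath('//li/a/text()').extract()          # ['Game A', 'Game B']
    first = sel.xpath('//li/a/text()').extract_first()    # 'Game A'

    # css() is the equivalent CSS API; ::text selects text nodes
    names_css = sel.css('li a::text').extract()

    # extract_first() returns None (or a default) when nothing matches,
    # whereas extract()[0] would raise IndexError on an empty result
    missing = sel.xpath('//h1/text()').extract_first(default='n/a')
    print(names, first, names_css, missing)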

6. Complete the data storage in a pipeline (see the persistence sketch after this step)
    # Remember: pipelines are disabled by default; you must enable them in settings.py
    class <ClassName>:
        # the method that processes the data; item: the data, spider: the spider
        def process_item(self, item, spider):
            # item: the data
            # spider: the spider that produced it
            return item  # you must return something, or the next pipeline receives no data
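
    A minimal sketch of actual persistence, writing items as JSON lines; the
    class and file name here are made up for illustration, but open_spider and
    close_spider are the standard pipeline hooks:

    import json

    class JsonLinesPipeline:
        def open_spider(self, spider):
            # called once when the spider starts
            self.f = open('games.jsonl', 'w', encoding='utf-8')

        def close_spider(self, spider):
            # called once when the spider finishes
            self.f.close()

        def process_item(self, item, spider):
            self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item  # pass the item on to the next pipeline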

7. Edit settings.py to enable the pipeline
    ITEM_PIPELINES = {
        # key: the pipeline's import path
        # value: its priority
        '<pipeline path>': <priority>,  # the lower the number, the higher the priority
        # e.g.: "game.pipelines.GamePipeline": 300,
    }

    # Add the line below to settings.py and anything below WARNING will no longer be logged
    LOG_LEVEL = 'WARNING'  # only warnings and above appear

8. Run the spider
    scrapy crawl <spider_name>
    e.g.: scrapy crawl xiao
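
    You can also run the spider from a script instead of the CLI. A minimal
    sketch, assuming the script sits inside the project (here named game) so
    that get_project_settings() can find its settings.py:

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    if __name__ == '__main__':
        process = CrawlerProcess(get_project_settings())
        process.crawl('xiao')  # spider name, same as `scrapy crawl xiao`
        process.start()        # blocks until the crawl finishes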

xiao.py
import scrapy

class XiaoSpider(scrapy.Spider):
    name = "xiao"  # spider name
    allowed_domains = ["www.4399.com"]  # allowed domains
    start_urls = ["https://www.4399.com/"]  # starting page URL

    def parse(self, response):  # this method handles parsing by default
        # print(response)
        # page source:
        # print(response.text)

        # grab all game names on the page in one go
        # txt = response.xpath('//*[@id="skinbody"]/div[10]/div[1]/div[1]/ul/li/a/text()').extract()  # extract the content
        # print(txt)

        # extract the data block by block
        li_list = response.xpath('//*[@id="skinbody"]/div[10]/div[1]/div[1]/ul/li')
        for li in li_list:
            # indexing with [0] to get a string raises an error if the element is empty
            # name = li.xpath('./a/text()').extract()[0]
            # extract_first returns a single value, or None if there is nothing
            name = li.xpath('./a/text()').extract_first()
            # print(name)
            # break  # for testing
            dic = {
                'name': name
            }

            # yield hands the data to the pipeline
            yield dic  # yielding data effectively passes it to the pipelines
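
The spider above yields plain dicts, which Scrapy accepts directly. A minimal
sketch of the typed alternative using scrapy.Item (the GameItem name is made
up for illustration); the yield would become yield GameItem(name=name), and
misspelled field names then fail loudly instead of silently creating new keys:

import scrapy

class GameItem(scrapy.Item):
    name = scrapy.Field()
    love = scrapy.Field()  # declared so NewPipeline's item['love'] assignment still works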

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


# Remember: pipelines are disabled by default; enable them in settings.py
class GamePipeline:
    # processes each item; item: the data, spider: the spider that produced it
    def process_item(self, item, spider):
        print(item)
        print(spider.name)
        return item

class NewPipeline:
    # runs before GamePipeline (299 < 300) and adds a field to each item
    def process_item(self, item, spider):
        item['love'] = '陶喆'
        return item
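
A sketch of one more common pipeline pattern: discarding bad items. Raising
DropItem (a real Scrapy exception) stops the item from reaching any later
pipeline; the class name here is made up for illustration:

from scrapy.exceptions import DropItem

class RequireNamePipeline:
    def process_item(self, item, spider):
        if not item.get('name'):
            raise DropItem('missing name')  # the item goes no further
        return item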

settings.py

# Scrapy settings for game project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "game"

SPIDER_MODULES = ["game.spiders"]
NEWSPIDER_MODULE = "game.spiders"

LOG_LEVEL = 'WARNING'  # only warnings and above are logged
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = "game (+http://www.yourdomain.com)"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#    "game.middlewares.GameSpiderMiddleware": 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    "game.middlewares.GameDownloaderMiddleware": 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # key: the pipeline's import path
    # value: its priority
    "game.pipelines.GamePipeline": 300,
    "game.pipelines.NewPipeline": 299,  # lower number = higher priority (runs first)
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = "httpcache"
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
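
Project-wide values in settings.py can also be overridden for a single spider
via the custom_settings class attribute. A minimal sketch, with made-up values
for illustration:

import scrapy

class QuietSpider(scrapy.Spider):
    name = "quiet"
    start_urls = ["https://www.4399.com/"]
    # these override settings.py for this spider only
    custom_settings = {
        "LOG_LEVEL": "ERROR",   # even quieter than the project-wide WARNING
        "DOWNLOAD_DELAY": 1.0,  # assumed value; slows requests to the site
    }

    def parse(self, response):
        pass  # parsing logic would go here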

From: https://www.cnblogs.com/Wesuiliye/p/17190014.html
