
Getting Started with Scrapy


settings.py

# Scrapy settings for demo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "demo"

SPIDER_MODULES = ["demo.spiders"]
NEWSPIDER_MODULE = "demo.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"

# Only print ERROR-level log messages so the crawl output stays readable
LOG_LEVEL = "ERROR"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "demo.middlewares.DemoSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "demo.middlewares.DemoDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Pipelines run in ascending order of their value: DemoPipeline (300) first, then MysqlPipeline (301)
ITEM_PIPELINES = {
    "demo.pipelines.DemoPipeline": 300,
    "demo.pipelines.MysqlPipeline": 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
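
A side note, not part of the original configuration: the spider below contains a commented-out attempt to write a CSV file by hand with csv.writer, but Scrapy can export the scraped items to a file directly from the settings via the FEEDS option (available since Scrapy 2.1). A minimal sketch, assuming the output file name xcf.csv:

# Optional feed export; added here for illustration, not in the original settings
FEEDS = {
    "xcf.csv": {"format": "csv", "encoding": "utf-8"},
}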

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class DemoPipeline:
    fp = None

    def open_spider(self, spider):
        # Called once when the spider starts: open the output file
        self.fp = open('xcf.txt', 'w', encoding='utf-8')
        print('file created')

    def close_spider(self, spider):
        # Called once when the spider finishes: close the output file
        self.fp.close()
        print('file closed')

    def process_item(self, item, spider):
        href = item['href']
        title = item['title']
        pl = item['pl']
        pc = item['pc']
        pe = item['pe']
        self.fp.write(href + '  ' + title + '  ' + pl + '  ' + pc + '  ' + pe + '\n')
        return item


import pymysql


class MysqlPipeline:
    # Assumes a local MySQL server with a database named xcf and a table named txcf
    conn = pymysql.Connect(host='127.0.0.1', port=3306, user='root', password='123456', db='xcf')
    cursor = conn.cursor()

    def process_item(self, item, spider):
        href = item['href']
        title = item['title']
        pl = item['pl']
        pc = item['pc']
        pe = item['pe']
        # Use a parameterized query instead of string formatting to avoid quoting/injection problems
        sql = 'insert into txcf values (%s, %s, %s, %s, %s)'
        self.cursor.execute(sql, (href, title, pl, pc, pe))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()
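
The MysqlPipeline above assumes that the database xcf and the table txcf with five text columns already exist; the original post does not show the table definition. The following one-off script is only a sketch of DDL that would match the INSERT (the column names are assumptions, since the INSERT does not name them):

import pymysql

# One-off setup sketch: create the database and table that MysqlPipeline expects
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', password='123456')
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS xcf CHARACTER SET utf8mb4")
cursor.execute(
    "CREATE TABLE IF NOT EXISTS xcf.txcf ("
    "href VARCHAR(512), title VARCHAR(255), pl VARCHAR(1024), "
    "pc VARCHAR(64), pe VARCHAR(255))"
)
cursor.close()
conn.close()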

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DemoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    href = scrapy.Field()   # recipe URL
    title = scrapy.Field()  # dish name
    pl = scrapy.Field()     # ingredients
    pc = scrapy.Field()     # rating
    # num = scrapy.Field()  # number of people who tried it in the last 7 days (not stored at the moment)
    pe = scrapy.Field()     # uploader
    item = scrapy.Field()   # not referenced by the spider; kept from the original code
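
Because scrapy.Item behaves like a dict, these fields are read and written with item['field'] syntax, which is exactly how the pipelines and the spider below use them. A minimal stand-alone sketch (the values are made up):

from demo.items import DemoItem

item = DemoItem(title='Tomato and egg stir-fry')  # fields can be set at construction time
item['pc'] = '8.0'                                # ...or by key, like a dict
print(item['title'], item.get('pc'))
print(dict(item))                                 # convert to a plain dict when needed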
spider

import csv

import scrapy
from scrapy import cmdline
from demo.items import DemoItem

class XcfSpider(scrapy.Spider):
    name = "xcf"
    # allowed_domains = ["www.xiachufang.com"]
    start_urls = []
    # start_urls = ["https://www.xiachufang.com/category/40076/?page=1"]
    # Build the start URLs for list pages 1 to 19
    for i in range(1, 20):
        start_url = f"https://www.xiachufang.com/category/40076/?page={i}"
        start_urls.append(start_url)

    # print(start_urls)
    # start_urls = ["https://www.baidu.com"]
    # def __init__(self, **kwargs):
    #     # CSV file path
    #     super().__init__(**kwargs)
        # self.csvfile = open('items.csv', mode='w', newline='', encoding='utf-8')
        # # create a csv.writer object
        # self.writer = csv.writer(self.csvfile)
        # # write the header row (optional)
        # self.writer.writerow(['URL', 'Dish name', 'Uploader', 'Rating', 'Tried in last 7 days', 'Ingredients'])  # adjust the field names to match your Item structure

    def parse(self, response,page=1):
        # print(response)
        li_list = response.xpath('/html/body/div[3]/div/div/div[1]/div[1]/div/div[2]/div[2]/ul/li')
        for li in li_list:
            href="https://www.xiachufang.com/"+''.join(li.xpath('./div/a/@href').extract())
            title = ''.join(li.xpath('./div/div/p[1]/a/text()').extract()).strip()  #名称
            pl = ' '.join(li.xpath('./div/div/p[2]/a/text()').extract())  #材料
            pc = ' '.join(li.xpath('./div/div/p[3]/span[1]/text()').extract())  #评分
            num = ' '.join(li.xpath('./div/div/p[3]/span[2]/text()').extract())  #七天内尝试人数
            pe = ' '.join(li.xpath('./div/div/p[4]/a/text()').extract())  #上传人
            # item1 = {'网址': href, '菜名': title, '上传人': pe,'评分':pc,'七天内尝试人数':pl,'配料':num}
            # print(item1)
            item=DemoItem()
            item['href']=href
            item['title'] = title
            item['pl'] = pl
            item['pc'] = pc
            # item['num'] = num
            item['pe'] = pe
            yield item
            # self.writer.writerow(list(item1.values()))
            # # '/html/body/div[3]/div/div/div[1]/div[1]/div/div[2]/div[3]/a[4]'
# cmdline.execute("scrapy crawl xcf".split())
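
The commented-out cmdline.execute line above is a way to start the crawl from inside an IDE instead of typing scrapy crawl xcf in a terminal. If that route is taken, a common pattern is to keep the call out of the spider module and put it in a small launcher script in the project root, next to scrapy.cfg (the file name run.py is only an assumption):

# run.py -- hypothetical launcher script placed next to scrapy.cfg
from scrapy import cmdline

# Equivalent to running `scrapy crawl xcf` on the command line
cmdline.execute("scrapy crawl xcf".split())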

From: https://blog.csdn.net/m0_64188466/article/details/142501821
