
Scrapy Framework Advanced Guide: Proxy Setup, Request Optimization, and a Lianjia (链家网) Hands-on Project


Scrapy framework

Adding proxies

Paid proxy IP pool

middlewares.py

import logging

import aiohttp


# Proxy IP pool: fetch a random proxy from the pool for every request
class ProxyMiddleware(object):
    proxypool_url = 'http://127.0.0.1:5555/random'
    logger = logging.getLogger('middlewares.proxy')

    async def process_request(self, request, spider):
        async with aiohttp.ClientSession() as client:
            response = await client.get(self.proxypool_url)
            if response.status != 200:
                return
            proxy = await response.text()
            self.logger.debug(f'set proxy {proxy}')
            request.meta['proxy'] = f'http://{proxy}'

settings.py

DOWNLOADER_MIDDLEWARES = {
    "demo.middlewares.DemoDownloaderMiddleware": 543,
    "demo.middlewares.ProxyMiddleware": 544
}
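Note that the proxy-pool middleware above defines process_request as a coroutine and calls aiohttp, so the project has to run on Twisted's asyncio reactor. A minimal sketch of the extra setting this assumes:

# settings.py -- required for coroutine middlewares that use asyncio libraries such as aiohttp
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"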

Tunnel proxy

import base64

proxyUser = "1140169503666491392"
proxyPass = "7RmCwS8r"
proxyHost = "http-short.xiaoxiangdaili.com"
proxyPort = "10010"

proxyServer = "http://%(user)s:%(pass)s@%(host)s:%(port)s" % {
    "host": proxyHost,
    "port": proxyPort,
    "user": proxyUser,
    "pass": proxyPass
}
proxyAuth = "Basic " + base64.urlsafe_b64encode(bytes((proxyUser + ":" + proxyPass), "ascii")).decode("utf8")


# Tunnel proxy
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta["proxy"] = proxyServer
        request.headers["Connection"] = "close"
        request.headers["Proxy-Authorization"] = proxyAuth
        # vendor-specific header: switch the exit IP every 10 seconds instead of every 60 seconds
        request.headers["Proxy-Switch-Ip"] = True

Retry mechanism

settings.py

# Retry settings
RETRY_ENABLED = True   # retries must be enabled for RETRY_TIMES to take effect
RETRY_TIMES = 5        # set to however many retries you want
# The following line is optional: restrict which HTTP status codes trigger a retry
# RETRY_HTTP_CODES = [500, 502, 503, 504, 408]

Overriding the built-in retry middleware

middlewares.py

import logging

import requests
from scrapy.downloadermiddlewares.retry import RetryMiddleware, get_retry_request


# Based on the _retry() method in Scrapy's retry.py: switch to a fresh proxy
# before each retry (and re-attach the tunnel auth header if you use one).
class ProxyRetryMiddleware(RetryMiddleware):
    proxypool_url = 'http://127.0.0.1:5555/random'
    logger = logging.getLogger('middlewares.retry')

    def _retry(self, request, reason, spider):
        max_retry_times = request.meta.get("max_retry_times", self.max_retry_times)
        priority_adjust = request.meta.get("priority_adjust", self.priority_adjust)
        # _retry() is a plain method, so fetch the replacement proxy synchronously
        response = requests.get(self.proxypool_url)
        if response.status_code == 200:
            proxy = response.text
            self.logger.debug(f'set proxy {proxy}')
            request.meta['proxy'] = f'http://{proxy}'
        # If the proxy requires authentication (e.g. a tunnel proxy), re-attach it here
        # request.headers['Proxy-Authorization'] = proxyAuth
        return get_retry_request(
            request,
            reason=reason,
            spider=spider,
            max_retry_times=max_retry_times,
            priority_adjust=priority_adjust,
        )
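To make Scrapy use this subclass instead of the stock retry middleware, disable the built-in entry and register your own. A minimal sketch, assuming the class above is named ProxyRetryMiddleware and lives in demo/middlewares.py:

# settings.py
DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.retry.RetryMiddleware": None,  # turn off the default retry middleware
    "demo.middlewares.ProxyRetryMiddleware": 550,
}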

Miscellaneous tips

Two ways to send requests in Scrapy

  1. GET request

    import scrapy
    yield scrapy.Request(begin_url, self.first)
    
  2. POST request

    from scrapy import FormRequest  # the request class Scrapy provides for form submission / login
    formdata = {'username': 'wangshang', 'password': 'a706486'}
    yield FormRequest(
        url='http://172.16.10.119:8080/bwie/login.do',
        formdata=formdata,
        callback=self.after_login,
    )
    

    Use case: the target's POST request carries an encrypted token, so we have to forge the POST ourselves and reproduce the token first (see the sketch below).
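    A minimal sketch of that pattern, assuming a hypothetical login page that embeds a csrf_token field and a hypothetical build_token() helper standing in for whatever encryption the site applies (the URLs and field names are placeholders, not from the original project):

    import base64

    import scrapy
    from scrapy import FormRequest


    def build_token(raw_token: str) -> str:
        # placeholder for whatever encryption/encoding the target site's JS performs
        return base64.b64encode(raw_token.encode()).decode()


    class TokenLoginSpider(scrapy.Spider):
        name = "token_login"
        start_urls = ["http://example.com/login"]  # placeholder login page

        def parse(self, response):
            # 1. pull the raw token embedded in the login page
            raw_token = response.xpath('//input[@name="csrf_token"]/@value').get()
            # 2. re-create the token the server expects (hypothetical transformation)
            token = build_token(raw_token or "")
            # 3. forge the POST request with the token attached
            yield FormRequest(
                url="http://example.com/login.do",  # placeholder endpoint
                formdata={"username": "user", "password": "pass", "token": token},
                callback=self.after_login,
            )

        def after_login(self, response):
            self.logger.info("login response status: %s", response.status)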

Per-spider custom settings in Scrapy

settings.py

custom_settings_for_centoschina_cn = {
    'DOWNLOADER_MIDDLEWARES': {
        'questions.middlewares.QuestionsDownloaderMiddleware': 543,
    },
    'ITEM_PIPELINES': {
        'questions.pipelines.QuestionsPipeline': 300,
    },
    'MYSQL_URI': '124.221.206.17',
    # 'MYSQL_URI': '43.143.155.25',
    'MYSQL_DB': 'mydb',
    'MYSQL_USER': 'root',
    'MYSQL_PASSWORD': '123456',
}

Spider code

import scrapy
from questions.settings import custom_settings_for_centoschina_cn
from questions.items import QuestionsItem
from lxml import etree
class CentoschinaCnSpider(scrapy.Spider):
    name = 'centoschina.cn'
    # allowed_domains = ['centoschina.cn']
    custom_settings = custom_settings_for_centoschina_cn
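The MYSQL_* keys above are plain custom settings; pipelines can read them through the crawler's settings object. A minimal sketch of a questions/pipelines.py that consumes them with pymysql (the table and column names are illustrative, not taken from the original project):

import pymysql


class QuestionsPipeline:
    @classmethod
    def from_crawler(cls, crawler):
        # values from custom_settings are exposed via crawler.settings
        return cls(
            host=crawler.settings.get('MYSQL_URI'),
            db=crawler.settings.get('MYSQL_DB'),
            user=crawler.settings.get('MYSQL_USER'),
            password=crawler.settings.get('MYSQL_PASSWORD'),
        )

    def __init__(self, host, db, user, password):
        self.host, self.db, self.user, self.password = host, db, user, password

    def open_spider(self, spider):
        self.conn = pymysql.connect(host=self.host, user=self.user,
                                    password=self.password, database=self.db)
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # illustrative INSERT; adjust the table and columns to the real schema
        self.cursor.execute('INSERT INTO questions (title) VALUES (%s)', (item.get('title'),))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()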

Three ways to add headers

  1. Default headers in settings.py

    # Override the default request headers:
    DEFAULT_REQUEST_HEADERS = {
       "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
       "Accept-Language": "en",
    }
    
  2. Per-request headers

    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0'
    }

    def start_requests(self):
        start_url = "https://2024.ip138.com/"
        for n in range(5):
            # dont_filter=True disables the framework's built-in duplicate-URL filtering
            yield scrapy.Request(start_url, self.get_info, dont_filter=True, headers=A2024Ip138Spider.headers)
    
  3. Adding headers in a downloader middleware

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader middleware.
        # Add the header here
        request.headers['user-agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0'
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None
    
    

Priority: 3 > 2 > 1 (headers set in a downloader middleware override per-request headers, which in turn override the defaults in settings.py).

Carrying parameters on the request and reading them back from the response

    def start_requests(self):
        start_url = "https://2024.ip138.com/"
        for n in range(5):
            # dont_filter=True disables the framework's built-in duplicate-URL filtering
            yield scrapy.Request(start_url, self.get_info, dont_filter=True, headers=A2024Ip138Spider.headers,
                                 meta={'page': 1})

    def get_info(self, response):
        # print(response.text)
        print(response.meta['page'])
        ip = response.xpath('/html/body/p[1]/a[1]/text()').extract_first()
        print(ip)

Lianjia (Scrapy project)

Project notes: the target site does not ban IPs.

Core code

import scrapy


class TjLianjiaSpider(scrapy.Spider):
    name = "tj_lianjia"

    # allowed_domains = ["ffffffffff"]
    # start_urls = ["https://ffffffffff"]
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.page = 1

    def start_requests(self):
        start_url = 'https://tj.lianjia.com/ershoufang/pg{}/'.format(self.page)
        yield scrapy.Request(start_url, self.get_info)

    def get_info(self, response):
        lis = response.xpath('//li[@class="clear LOGVIEWDATA LOGCLICKDATA"]')
        # stop paginating once a page comes back with no listings
        if not lis:
            return
        for li in lis:
            title = li.xpath('div[1]/div[@class="title"]/a/text()').extract_first()
            totalprice = ''.join(li.xpath('div[1]/div[@class="priceInfo"]/div[1]//text()').extract())
            print(title, totalprice)
        self.page += 1
        next_href = 'https://tj.lianjia.com/ershoufang/pg{}/'.format(self.page)
        yield scrapy.Request(next_href, self.get_info)
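To feed the scraped fields into an item pipeline instead of printing them, the spider can yield items. A minimal sketch, assuming a hypothetical items.py for this project:

import scrapy


class LianjiaItem(scrapy.Item):
    # the two fields extracted in get_info()
    title = scrapy.Field()
    totalprice = scrapy.Field()

With this in place, get_info() would yield LianjiaItem(title=title, totalprice=totalprice) in place of the print() call, and a pipeline registered in ITEM_PIPELINES would handle persistence.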


From: https://www.cnblogs.com/CodeRealm/p/18355245
