Lagou.com Scraper

【lg.py】

import scrapy
import test1.items


class LgSpider(scrapy.Spider):
    name = 'lg'
    # Domains the spider is allowed to crawl
    allowed_domains = ['lagou.com']
    # Entry-point URL for the crawl
    start_urls = [
        'https://www.lagou.com/wn/jobs?tagCodeList=200001&gx=%E5%85%A8%E8%81%8C&yx=15k-25k&xl=%E6%9C%AC%E7%A7%91&city=%E6%88%90%E9%83%BD&pn=1']
    # Current page number and the maximum number of pages to crawl
    page_num = 1
    max_page_num = 10

    def next_url(self):
        """
        Build the URL of the next listing page.
        :return: the page URL, or None once max_page_num pages have been generated
        """
        # Check the limit before building the URL, so the last page is still returned
        if LgSpider.page_num > LgSpider.max_page_num:
            return None
        url = LgSpider.start_urls[0].split("pn=")[0] + "pn=" + str(LgSpider.page_num)
        LgSpider.page_num += 1
        return url

    def parse(self, response):
        # Page through the listing: request each page until next_url() runs out
        url = self.next_url()
        while url is not None:
            yield scrapy.Request(url=url, callback=self.parse_item)
            url = self.next_url()

    def parse_item(self, response):
        """
        Parse the job entries on a listing page.
        :param response:
        :return:
        """
        size = len(response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[1]/div[1]/a'))
        for i in range(size):
            item = test1.items.LgItem()
            item['url'] = response.url
            item['title'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[1]/div[1]/a')[i].re(
                "<a>(.*)<!-- -->\\[(.*)\\]</a>")
            item['money'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[1]/div[2]/span/text()')[i].get()
            item['time'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[1]/div[1]/span/text()')[i].get()
            item['company'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[2]/div[1]/a/text()')[i].get()
            item['experience'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[1]/div[1]/div[2]/text()')[i].get()
            item['require'] = response.xpath('//*[@id="jobList"]/div[1]/div/div[2]/div[1]')[i].re("<span>(.*?)</span>")
            positionId = self.get_detail_url(response, i)
            detail_url = "https://www.lagou.com/wn/jobs/{}.html".format(positionId)
            item['detail_url'] = detail_url
            req = scrapy.Request(url=detail_url, callback=self.parse_detail_item)
            # Carry the listing-page item over to the detail-page response
            req.meta['data'] = item
            yield req

    def parse_detail_item(self, response):
        """Parse the job detail page."""
        # Unpack the item carried over from the listing page
        item = response.meta['data']
        # Fill in the detail-page fields
        item['fl'] = response.xpath('//*[@id="job_detail"]/dd[1]/p/text()').get()
        item['describe'] = ''.join(response.xpath('//*[@id="job_detail"]/dd[2]/div/text()').getall())
        yield item

    def get_detail_url(self, response, num):
        """
        获取详情页的url地址
        :param response:
        :param num:
        :return:
        """
        positionIds = response.xpath('//*[@id="__NEXT_DATA__"]').re('{"positionId":(\\d+),"positionName":')
        return positionIds[num]
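
A note on the hand-off above: passing the half-filled item through req.meta['data'] works, but on Scrapy 1.7+ the cb_kwargs argument is the more idiomatic way to pass data into a callback. A minimal sketch of the same hand-off, with all other names unchanged from the spider above:

# Sketch only: the listing-to-detail hand-off via cb_kwargs (Scrapy >= 1.7)
req = scrapy.Request(url=detail_url, callback=self.parse_detail_item,
                     cb_kwargs={'item': item})
yield req

# ...and the callback then receives the item as a keyword argument:
def parse_detail_item(self, response, item):
    item['fl'] = response.xpath('//*[@id="job_detail"]/dd[1]/p/text()').get()
    yield item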

【items.py】

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class LgItem(scrapy.Item):
    title = scrapy.Field()
    money = scrapy.Field()
    time = scrapy.Field()
    company = scrapy.Field()
    experience = scrapy.Field()
    require = scrapy.Field()
    url = scrapy.Field()
    detail_url = scrapy.Field()
    fl = scrapy.Field()        # benefits / perks (福利)
    describe = scrapy.Field()  # job description text
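
A scrapy.Item behaves like a dict but only accepts the fields declared above; assigning to an undeclared key raises KeyError, which catches typos early. A quick illustration (the values are made up):

item = LgItem()
item['money'] = '15k-25k'   # fine: 'money' is a declared field
item['salary'] = '15k-25k'  # raises KeyError: LgItem does not support field: salary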

【settings.py】

# Scrapy settings for test1 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'test1'

SPIDER_MODULES = ['test1.spiders']
NEWSPIDER_MODULE = 'test1.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Host': 'www.lagou.com',
    'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="98", "Google Chrome";v="98"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36',
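    # Session cookie copied from a logged-in browser; it expires quickly, so replace it with your own before running.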
    'Cookie': 'LGUID=20200729122826-f23d2554-37c9-4eea-a0f1-76305157298e; _ga=GA1.2.724351898.1595996909; JSESSIONID=ABAAABAABEIABCI7CA522728C67E4E0F7C750CD9762065C; WEBTJ-ID=20221003124510-1839c29875366d-0970d6698ff7f2-a3e3164-921600-1839c29875488c; RECOMMEND_TIP=true; user_trace_token=20221003124510-b8f80e95-b5ed-4555-af88-571ceaa841e4; _gid=GA1.2.224750483.1664772313; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1664772313; privacyPolicyPopup=false; sensorsdata2015session=%7B%7D; index_location_city=%E5%85%A8%E5%9B%BD; X_MIDDLE_TOKEN=24b0e5dcd6b0d6d17b2b43a84356bd47; __lg_stoken__=2937b6ec1f6071d030195f3bbbf9d8d5f08275b1dc96dcb085d84e846d47cc8777eac632a56ad7baa1d35de9ec2dd5989fac1d31ec7adfd20d8cf8714e1759daf038302bec59; TG-TRACK-CODE=index_navigation; gate_login_token=b53a8d18c988653d08776aa19aeca43d7750120c85aac2f4b4e8e61ffb89ca45; _putrc=3219144C4F893ADF123F89F2B170EADC; login=true; unick=%E8%B4%BE%E7%BF%B0%E6%9E%97; showExpriedIndex=1; showExpriedCompanyHome=1; showExpriedMyPublish=1; hasDeliver=7; __SAFETY_CLOSE_TIME__17642295=1; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1664779467; LGRID=20221003144425-b0a48fc2-bdee-4e10-9448-588be0be6020; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2217642295%22%2C%22%24device_id%22%3A%2217398d42f6951f-0fe90ec8ff83a2-f7d123e-2073600-17398d42f6a71c%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2298.0.4758.102%22%7D%2C%22first_id%22%3A%2217398d42f6951f-0fe90ec8ff83a2-f7d123e-2073600-17398d42f6a71c%22%7D; X_HTTP_TOKEN=8d8fc241936f5e254893874661b866ea1ca75031b2'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'test1.middlewares.Test1SpiderMiddleware': 543,
}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'test1.middlewares.Test1DownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'test1.pipelines.Test1Pipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
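
The ITEM_PIPELINES setting above points at test1.pipelines.Test1Pipeline, which the original post does not show. A minimal sketch of a pipeline that appends each item to a JSON-lines file (the class name matches the setting; the output path jobs.jl is an assumption):

# pipelines.py: minimal sketch, not shown in the original post
import json

class Test1Pipeline:
    def open_spider(self, spider):
        # jobs.jl is an assumed output path
        self.file = open('jobs.jl', 'a', encoding='utf-8')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # scrapy.Item converts cleanly to a plain dict for serialization
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

With the three files in place, the spider can be run from the project root with scrapy crawl lg, or with scrapy crawl lg -o jobs.json to let Scrapy's feed export write a file directly.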

From: https://www.cnblogs.com/hhddd-1024/p/16750678.html
