
Notes and Caveats for Scrapy Crawlers


1. Image download settings

class ClawernameSpider(scrapy.Spider):
    # Per-spider settings
    custom_settings = {
        'LOG_LEVEL': 'DEBUG',  # log level; DEBUG is the lowest level and the default
        'ROBOTSTXT_OBEY': False,  # by default Scrapy obeys robots.txt rules
        'DOWNLOAD_DELAY': 0,  # download delay; the default is 0
        'COOKIES_ENABLED': False,  # enabled by default; needed when crawling data behind a login
        'DOWNLOAD_TIMEOUT': 25,  # download timeout; can be set globally here, or per request via Request.meta['download_timeout']
        'RETRY_TIMES': 8,
        # ………………
 
        'IMAGES_STORE': r'E:\scrapyFashionbeansPic\images',  # where downloaded images are stored; created if missing, and images already present are not downloaded again
        'IMAGES_EXPIRES': 90,  # image expiration time, in days
        'IMAGES_MIN_HEIGHT': 100,  # minimum image height; shorter images are not downloaded
        'IMAGES_MIN_WIDTH': 100,  # minimum image width; narrower images are not downloaded
        'DOWNLOADER_MIDDLEWARES': {  # downloader middleware settings; the number (0~999) is the priority, lower runs first
            'scrapyFashionbeans.middlewares.HeadersMiddleware': 100,
            'scrapyFashionbeans.middlewares.ProxiesMiddleware': 200,
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        },
 
        # ………………
    }
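
Note that the IMAGES_* settings above only take effect once Scrapy's images pipeline is enabled and the item exposes the image URLs. A minimal sketch, assuming the pipeline's default field names (the PicItem class name is illustrative, not from the original project):

# items.py -- the two fields below are what ImagesPipeline looks for by default
import scrapy

class PicItem(scrapy.Item):
    image_urls = scrapy.Field()   # list of image URLs to download
    images = scrapy.Field()       # filled in by ImagesPipeline with the download results

# and in custom_settings (or settings.py), alongside IMAGES_STORE etc.:
#     'ITEM_PIPELINES': {
#         'scrapy.pipelines.images.ImagesPipeline': 1,
#     },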


2. Setting a flag on a Request to handle individual requests differently
When constructing a Request, add the flags parameter:

def start_requests(self):
    for each in keyLst:
        yield scrapy.Request(
            url = f'https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords={quote(each)}',
            meta = {'key': each, 'dont_redirect': True},
            callback = self.parse,
            errback = self.error,
            # Embed a flag in the request; middlewares that see it later can use it to handle each Request differently
            flags = [1]           
        )

Example use in a downloader middleware:
# The proxy-handling part of a downloader middleware
# When flags[0] of a Request is set to 1, no proxy IP is added

class ProxiesMiddleware(object):
    def __init__(self):
        runTimer = datetime.datetime.now()
        print(f"instance ProxiesMiddleware, startProxyTimer, runTimer:{runTimer}.")
        timerUpdateProxies()
        print(f"instance ProxiesMiddleware, startProxyTimer, runTimer:{runTimer}.")
 
    def process_request(self, request, spider):
        print('Using ProxiesMiddleware!')
 
        # Here you could check whether request.url points at a listing page and skip the proxy for it.
        # Alternatively, mark the listing-page request when it is sent (flags, which is a list, works well)
        # and inspect that mark here to decide whether to use a proxy.
        if request.flags:
            if request.flags[0] == 1:   # flags[0] == 1 means this request does not need a proxy
                return None             # no proxy added; return so other middlewares continue processing the request
 
        if not request.meta.get('proxyFlag'):
            request.meta['proxy'] = 'http://xxxxxxxxxxxx:[email protected]:9020'

3. Following hyperlinks found in a page

for page in range(1, pageNum + 1):
    # Follow pagination from within the current page.
    # href is a hyperlink address taken from the page.
    yield response.follow(
        url = re.sub(r'page=\d+', f'page={page}', href, count = 1),
        meta = {'dont_redirect':True , 'key':response.meta['key']},
        callback = self.galance,
        errback = self.error
    )  
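
response.follow differs from scrapy.Request in that it resolves the URL relative to the current response, so an href taken straight from the page can be passed in as-is. A minimal sketch of a callback built on that behavior (the CSS selector and the galance/error callbacks are assumptions for illustration):

def parse(self, response):
    # href may be relative (e.g. '/b/ref=...'); response.follow resolves it
    # against response.url, which a plain scrapy.Request would not do.
    for href in response.css('a.pagination-link::attr(href)').getall():
        yield response.follow(
            url = href,
            meta = {'dont_redirect': True, 'key': response.meta.get('key')},
            callback = self.galance,
            errback = self.error
        )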

4. Using Redis to deduplicate requests, the foundation of a distributed crawler.
For how scrapy-redis works internally, see: https://www.biaodianfu.com/scrapy-redis.html
Reference article: https://www.cnblogs.com/zjl6/p/6742673.html

class ClawernameSpider(scrapy.Spider):
    # Per-spider settings
    custom_settings = {
        'LOG_LEVEL': 'DEBUG',  # log level; DEBUG is the lowest level and the default
        # ………………
        # Settings for Redis-based request deduplication
        'DUPEFILTER_CLASS': "scrapy_redis.dupefilter.RFPDupeFilter",
        'SCHEDULER': "scrapy_redis.scheduler.Scheduler",
        'SCHEDULER_PERSIST': False,  # set to True to keep the Redis queues after the crawl (allows pausing/resuming); False clears them
 
        # ………………
    }
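
scrapy_redis also reads its Redis connection details from the settings; if your Redis is not at the library's defaults (typically localhost:6379), add something like the sketch below. The host and port are placeholders for your own server:

        # added to the same custom_settings (or settings.py)
        'REDIS_HOST': '127.0.0.1',   # placeholder Redis host
        'REDIS_PORT': 6379,
        # or, equivalently, a single connection URL:
        # 'REDIS_URL': 'redis://127.0.0.1:6379',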

Run redis-cli.exe and then execute flushdb to clear the records held in Redis; otherwise pages that were not crawled successfully last time would be treated as already seen and skipped on the next run.
keys *
flushdb
OK

Additional note: run redis-cli.exe and execute keys * to see the names of all keys stored in the current database.

5. Closing a Scrapy spider on a timer
Suppose the requirement is: a given spider runs once per day, but its total run time must not exceed 24 hours.
With Scrapy this is very simple: just configure one extension. Open settings.py and add a single setting:

CLOSESPIDER_TIMEOUT = 86400   # 24 hours * 3600 seconds per hour = 86400


About CLOSESPIDER_TIMEOUT:
Default value of CLOSESPIDER_TIMEOUT: 0
An integer number of seconds. If the spider is still running after that many seconds, it is closed automatically with the reason closespider_timeout. If the value is 0 (or unset), the spider will not be closed because of a timeout.
There are several related extensions, for example closing the spider once a certain number of items has been scraped; see the Extensions documentation: http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/extensions.html

CLOSESPIDER_TIMEOUT (seconds): stop the spider after the given amount of time.
CLOSESPIDER_ITEMCOUNT: stop the spider after the given number of items has been scraped.
CLOSESPIDER_PAGECOUNT: stop the spider after the given number of responses has been received.
CLOSESPIDER_ERRORCOUNT: stop the spider after the given number of errors has occurred.
These can be combined, as in the sketch below.
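
A sketch of how these might look together in settings.py (the threshold values are arbitrary examples, not recommendations):

# settings.py -- CloseSpider extension thresholds; 0 or unset disables each one
CLOSESPIDER_TIMEOUT = 86400      # stop after 24 hours
CLOSESPIDER_ITEMCOUNT = 100000   # stop after scraping this many items
CLOSESPIDER_PAGECOUNT = 50000    # stop after receiving this many responses
CLOSESPIDER_ERRORCOUNT = 500     # stop after this many errors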

6. Maximum crawl depth: DEPTH_LIMIT
Some misbehaving pages redirect to one another in an endless loop, so the crawl never finishes; to guard against this, set a maximum depth for the crawl.
There is one more very important point to note: the relationship between RETRY_TIMES and DEPTH_LIMIT.
During retries the depth keeps accumulating, and once it exceeds DEPTH_LIMIT the page is simply dropped. (Note: this does not end the spider.)

# settings
'RETRY_TIMES': 8,
'DEPTH_LIMIT': 2,
 
# Log
E:\Miniconda\python.exe E:/PyCharmCode/allCategoryGet_2/main.py
2018-02-05 18:07:22 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: allCategoryGet_2)
2018-02-05 18:07:22 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'allCategoryGet_2', 'NEWSPIDER_MODULE': 'allCategoryGet_2.spiders', 'SPIDER_MODULES': ['allCategoryGet_2.spiders']}
2018-02-05 18:07:22 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-02-05 18:07:23 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'allCategoryGet_2.middlewares.ProxiesMiddleware',
 'allCategoryGet_2.middlewares.HeadersMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-02-05 18:07:23 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-02-05 18:07:23 [requests.packages.urllib3.connectionpool] DEBUG: Starting new HTTP connection (1): api.goseek.cn
2018-02-05 18:07:23 [requests.packages.urllib3.connectionpool] DEBUG: http://api.goseek.cn:80 "GET /Tools/holiday?date=20180205 HTTP/1.1" 200 None
2018-02-05 18:07:23 [scrapy.middleware] INFO: Enabled item pipelines:
['allCategoryGet_2.pipelines.MongoPipeline']
2018-02-05 18:07:23 [scrapy.core.engine] INFO: Spider opened
2018-02-05 18:07:23 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-05 18:07:23 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-05 18:07:44 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.amazon.com/gp/site-directory> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-02-05 18:07:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/gp/site-directory> (referer: https://www.amazon.com)
parseCategoryIndexPage: url = https://www.amazon.com/gp/site-directory, status = 200, meta = {'dont_redirect': True, 'download_timeout': 30.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.amazon.com', 'retry_times': 1, 'download_latency': 1.7430000305175781, 'depth': 0}
2018-02-05 18:07:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/gp/site-directory> (referer: https://www.amazon.com)
parseCategoryIndexPage: url = https://www.amazon.com/gp/site-directory, status = 200, meta = {'dont_redirect': True, 'download_timeout': 30.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.amazon.com', 'retry_times': 1, 'download_latency': 1.1499998569488525, 'depth': 1}
………………………………………………………………
………………………………………………………………
………………………………………………………………
………………………………………………………………
parseSecondLayerForward: follow nextCategoryElem = {'oriName': 'Muffin & Cupcake Pans', 'linkNameLst': ['Home, Garden & Tools', 'Kitchen & Dining', 'Bakeware'], 'linkLst': ['https://www.amazon.com/gp/site-directory', 'https://www.amazon.com/kitchen-dining/b/ref=sd_allcat_ki/132-5073023-0203563?ie=UTF8&node=284507', '/b/ref=lp_284507_ln_0/132-5073023-0203563?node=289668&ie=UTF8&qid=1517825504'], 'href': '/b/ref=lp_284507_ln_0_12/132-5073023-0203563?node=289700&ie=UTF8&qid=1517825504', 'classTop': 'Home, Garden & Tools', 'classSecond': 'Kitchen & Dining', 'constructionClass': 2}
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_4/132-5073023-0203563?node=289675&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_5/132-5073023-0203563?node=289679&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_6/132-5073023-0203563?node=8614937011&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_7/132-5073023-0203563?node=2231404011&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_8/132-5073023-0203563?node=289727&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_9/132-5073023-0203563?node=3736941&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_10/132-5073023-0203563?node=2231407011&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_11/132-5073023-0203563?node=289696&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_12/132-5073023-0203563?node=289700&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_13/132-5073023-0203563?node=289701&ie=UTF8&qid=1517825504 
parseSecondLayerForward: ###secondCategoryElem = {'oriName': 'Pie, Tart & Quiche Pans', 'linkNameLst': ['Home, Garden & Tools', 'Kitchen & Dining', 'Bakeware'], 'linkLst': ['https://www.amazon.com/gp/site-directory', 'https://www.amazon.com/kitchen-dining/b/ref=sd_allcat_ki/132-5073023-0203563?ie=UTF8&node=284507', '/b/ref=lp_284507_ln_0/132-5073023-0203563?node=289668&ie=UTF8&qid=1517825504'], 'href': '/b/ref=lp_284507_ln_0_13/132-5073023-0203563?node=289701&ie=UTF8&qid=1517825504', 'classTop': 'Home, Garden & Tools', 'classSecond': 'Kitchen & Dining', 'constructionClass': 2}
………………………………………………………………
………………………………………………………………
………………………………………………………………
………………………………………………………………
parseSecondLayerForward: ###secondCategoryElem = {'oriName': 'Popover Pans', 'linkNameLst': ['Home, Garden & Tools', 'Kitchen & Dining', 'Bakeware'], 'linkLst': ['https://www.amazon.com/gp/site-directory', 'https://www.amazon.com/kitchen-dining/b/ref=sd_allcat_ki/132-5073023-0203563?ie=UTF8&node=284507', '/b/ref=lp_284507_ln_0/132-5073023-0203563?node=289668&ie=UTF8&qid=1517825504'], 'href': '/b/ref=lp_284507_ln_0_15/132-5073023-0203563?node=5038552011&ie=UTF8&qid=1517825504', 'classTop': 'Home, Garden & Tools', 'classSecond': 'Kitchen & Dining', 'constructionClass': 2}
parseSecondLayerForward: follow nextCategoryElem = {'oriName': 'Popover Pans', 'linkNameLst': ['Home, Garden & Tools', 'Kitchen & Dining', 'Bakeware'], 'linkLst': ['https://www.amazon.com/gp/site-directory', 'https://www.amazon.com/kitchen-dining/b/ref=sd_allcat_ki/132-5073023-0203563?ie=UTF8&node=284507', '/b/ref=lp_284507_ln_0/132-5073023-0203563?node=289668&ie=UTF8&qid=1517825504'], 'href': '/b/ref=lp_284507_ln_0_15/132-5073023-0203563?node=5038552011&ie=UTF8&qid=1517825504', 'classTop': 'Home, Garden & Tools', 'classSecond': 'Kitchen & Dining', 'constructionClass': 2}
parseSecondLayerForward: follow nextCategoryElem = {'oriName': 'Wine Accessory Sets', 'linkNameLst': ['Home, Garden & Tools', 'Kitchen & Dining', 'Wine Accessories'], 'linkLst': ['https://www.amazon.com/gp/site-directory', 'https://www.amazon.com/kitchen-dining/b/ref=sd_allcat_ki/132-5073023-0203563?ie=UTF8&node=284507', '/b/ref=lp_284507_ln_14/132-5073023-0203563?node=13299291&ie=UTF8&qid=1517825504'], 'href': '/b/ref=lp_284507_ln_14_8/132-5073023-0203563?node=13299321&ie=UTF8&qid=1517825504', 'classTop': 'Home, Garden & Tools', 'classSecond': 'Kitchen & Dining', 'constructionClass': 2}
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_14_7/132-5073023-0203563?node=289737&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_14_8/132-5073023-0203563?node=13299321&ie=UTF8&qid=1517825504 
2018-02-05 18:07:51 [scrapy.core.engine] INFO: Closing spider (finished)
2018-02-05 18:07:51 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 1,
 'downloader/request_bytes': 1414,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 199923,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 2, 5, 10, 7, 51, 461602),
 'log_count/DEBUG': 186,
 'log_count/INFO': 8,
 'request_depth_max': 2,
 'response_received_count': 3,
 'retry/count': 1,
 'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 1,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2018, 2, 5, 10, 7, 23, 282602)}
2018-02-05 18:07:51 [scrapy.core.engine] INFO: Spider closed (finished)


7. The dont_redirect parameter
dont_redirect controls whether this request may be redirected. The default is False, i.e. redirects are allowed.

A page usually redirects for one of a couple of reasons: first, the page itself is nothing more than a redirect page;
second, the site has desktop and mobile versions and decides which one to serve based on the visitor's access data (User-Agent).

# Usage example:
# https://www.1688.com/
# user-agent = "MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
 
yield Request(
    url = "https://www.1688.com/",
    # defaults to False
    meta={},
    # Setting this to True disallows redirects; if the page does redirect, it cannot be crawled
    # meta = {'dont_redirect': True},
    callback = self.parseCategoryIndex,
    errback = self.error
)

With the default value (False, redirects allowed):

2018-02-06 16:57:46 [scrapy.core.engine] INFO: Spider opened
2018-02-06 16:57:46 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-06 16:57:46 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-06 16:57:47 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://m.1688.com/touch/?src=desktop> from <GET https://www.1688.com/>
2018-02-06 16:57:50 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://m.1688.com?src=desktop> from <GET http://m.1688.com/touch/?src=desktop>
2018-02-06 16:57:52 [scrapy.core.engine] DEBUG: Crawled (400) <GET http://m.1688.com?src=desktop> (referer: https://www.1688.com)
error = [Failure instance: Traceback: <class 'scrapy.spidermiddlewares.httperror.HttpError'>: Ignoring non-200 response


Changed to True (redirects not allowed):

2018-02-06 17:08:08 [scrapy.core.engine] DEBUG: Crawled (301) <GET https://www.1688.com/> (referer: https://www.1688.com)
2018-02-06 17:08:08 [scrapy.core.engine] INFO: Closing spider (finished)

A special note: in most cases we set this parameter to True and disallow redirects. You have to know from the outset exactly which page structure you intend to parse, and the parsing code that follows targets that page; if requests can be redirected at will, what comes back becomes unpredictable and the parsing cannot be relied upon. If you block redirects but still want to look at the redirect response itself, see the sketch below.
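
When dont_redirect is True, the 3xx response itself is returned, and by default HttpErrorMiddleware routes non-200 responses to the errback (as in the log above). To let the redirect response reach the normal callback instead, you can whitelist its status codes via handle_httpstatus_list. A minimal sketch, not from the original spider:

yield Request(
    url = "https://www.1688.com/",
    meta = {
        'dont_redirect': True,                  # do not follow the redirect
        'handle_httpstatus_list': [301, 302],   # let 301/302 responses reach the callback instead of the errback
    },
    callback = self.parseCategoryIndex,
    errback = self.error
)
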
8. The dont_filter parameter
dont_filter controls whether this request is subject to duplicate filtering. The default is False, i.e. it is filtered.
Scrapy has built-in request deduplication: with dont_filter == False, a given Request (the fingerprint covers more than just the URL) will only be issued once. Retries, of course, do not count against this.
The only thing to watch for: if some Request genuinely needs to be issued multiple times, set this parameter to True on that Request, as in the sketch below.
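
A minimal sketch of deliberately issuing the same URL more than once by disabling the duplicate filter for that single Request (the URL and callbacks are illustrative):

yield scrapy.Request(
    url = 'https://www.amazon.com/gp/site-directory',
    dont_filter = True,   # bypass the built-in duplicate filter for this request only
    callback = self.parse,
    errback = self.error
)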

Original article: https://blog.csdn.net/Ren_ger/article/details/85067419

From: https://www.cnblogs.com/tjp40922/p/17601603.html
