Installation:
pip install scrapy  # -i https://pypi.douban.com/simple  (if this mirror does not work well, switch to another one)
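A minimal sketch of installing and then verifying Scrapy from the command line; the alternative mirror URL shown is only an example:

```bash
# install Scrapy, optionally from an alternative PyPI mirror (the URL is just an example)
pip install scrapy -i https://pypi.tuna.tsinghua.edu.cn/simple

# confirm the installation by printing the Scrapy version
scrapy version
```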
Command to create a Scrapy project:
scrapy startproject <project_name>
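For example (the project name mySpider is only an illustration), running `scrapy startproject mySpider` generates roughly the following layout:

```
mySpider/
    scrapy.cfg            # deployment configuration
    mySpider/             # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider/downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spiders live here
            __init__.py
```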
Command to create a spider, executed inside the project directory:
`scrapy genspider <spider_name> <allowed_domain>`
scrapy genspider baidu www.baidu.com
Run the project:
scrapy crawl <spider_name> [--nolog]
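For the spider generated above this would be `scrapy crawl baidu --nolog`, run from the project directory. As an alternative, a small runner script (a sketch, not part of the generated template; the file name run.py is arbitrary) can launch the crawl through Scrapy's CrawlerProcess API:

```python
# run.py — hypothetical helper script placed in the project root
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # load the project's settings.py
process.crawl("baidu")                            # refer to the spider by its `name` attribute
process.start()                                   # block until the crawl finishes
```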
The generated spider file (spiders/baidu.py) looks like this:

import scrapy


class BaiduSpider(scrapy.Spider):
    name = "baidu"                          # spider name used by `scrapy crawl`
    allowed_domains = ["www.baidu.com"]     # domains the spider is allowed to crawl
    start_urls = ["https://www.baidu.com"]  # URLs the crawl starts from

    def parse(self, response):              # default callback that handles each response
        print(response)                     # `response` holds the downloaded page data
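In practice, parse usually extracts data from the response instead of just printing it. A minimal sketch (the selector and field names are illustrative only), assuming the page returns plain HTML:

```python
import scrapy


class BaiduSpider(scrapy.Spider):
    name = "baidu"
    allowed_domains = ["www.baidu.com"]
    start_urls = ["https://www.baidu.com"]

    def parse(self, response):
        # pull the page title out of the HTML with an XPath selector
        title = response.xpath("//title/text()").get()
        # yield an item as a plain dict; Scrapy collects it,
        # e.g. `scrapy crawl baidu -o result.json` writes the items to a file
        yield {"url": response.url, "title": title}
```

Note that for a real site like this one you may also need to adjust ROBOTSTXT_OBEY or USER_AGENT in settings.py before the request goes through.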
From: https://www.cnblogs.com/dhcc/p/18304448