Function Introduction
A brief introduction to what the function does.
Library Imports
import requests        # fetch web pages
from lxml import etree # parse the HTML source
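Before looking at the crawler itself, here is a minimal sketch of how these two libraries are typically combined: requests downloads the raw HTML as a string, and etree.HTML turns that string into a parse tree that can be queried with XPath. The URL and XPath expression below are placeholders chosen only for illustration, not part of the original code.

import requests
from lxml import etree

resp = requests.get("https://example.com")   # placeholder URL
html = resp.text                             # raw HTML source as a string
doc = etree.HTML(html)                       # parse tree that supports XPath queries
titles = doc.xpath('//title/text()')         # list of text nodes matching the XPath
print(titles)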
Function Details
Function 1
def getdata(url):
    html = requests.get(url).text
    # print(html)
    doc = etree.HTML(html)  # build the XPath parsing object
    contents = doc.xpath('//*[@class="cf"]/li')
    # print(contents)
    for content in contents:
        links = content.xpath('h2/a/@href')
        for link in links:
            hurl = "https:" + link          # URL of one chapter of the novel
            html = requests.get(hurl).text  # fetch the chapter's source code
            doc = etree.HTML(html)          # build the XPath parsing object
            title = doc.xpath('//*[@class="text-wrap"]/div/div[1]/h3/span[1]/text()')
            content = doc.xpath('//*[@class="read-content j_readContent"]/p/text()')
            with open('novel/%s.txt' % title[0], mode='w', encoding='utf-8') as f:
                for abd in content:         # abd is one paragraph of the chapter text
                    f.write(abd)
The function is fairly simple, so the code that saves the novel chapters was not wrapped into a separate function; if you are interested, you can try refactoring it yourself. A possible sketch follows below.
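One way such a refactoring might look is sketched here. The function name save_chapter, its parameters, and the directory-creation step are my own additions and are not part of the original post.

import os

def save_chapter(title, paragraphs, folder='novel'):
    # Hypothetical helper: write one chapter's paragraphs to <folder>/<title>.txt
    os.makedirs(folder, exist_ok=True)               # make sure the output folder exists
    path = os.path.join(folder, '%s.txt' % title)
    with open(path, mode='w', encoding='utf-8') as f:
        for paragraph in paragraphs:
            f.write(paragraph)

# Inside getdata(), the with-open block could then be replaced by:
# save_chapter(title[0], content)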
Complete Code
# Crawler that downloads a novel from Qidian
# Approach: work backwards from the target content to the URLs
import requests
from lxml import etree

url = "https://book.qidian.com/info/1979049/#Catalog"  # catalog page of the novel

def getdata(url):
    html = requests.get(url).text
    # print(html)
    doc = etree.HTML(html)  # build the XPath parsing object
    contents = doc.xpath('//*[@class="cf"]/li')
    # print(contents)
    for content in contents:
        links = content.xpath('h2/a/@href')
        for link in links:
            hurl = "https:" + link          # URL of one chapter of the novel
            html = requests.get(hurl).text  # fetch the chapter's source code
            doc = etree.HTML(html)          # build the XPath parsing object
            title = doc.xpath('//*[@class="text-wrap"]/div/div[1]/h3/span[1]/text()')
            content = doc.xpath('//*[@class="read-content j_readContent"]/p/text()')
            with open('novel/%s.txt' % title[0], mode='w', encoding='utf-8') as f:
                for abd in content:         # abd is one paragraph of the chapter text
                    f.write(abd)

a = getdata(url)
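Two practical caveats when running this script: the open() call assumes the novel/ directory already exists, and some sites reject requests that carry no browser-like User-Agent header. The sketch below shows how those two points could be handled; the header value and timeout are illustrative additions of mine, not taken from the original post.

import os
import requests

os.makedirs('novel', exist_ok=True)       # create the output folder the original code assumes

headers = {'User-Agent': 'Mozilla/5.0'}   # placeholder header; some sites reject bare requests
url = "https://book.qidian.com/info/1979049/#Catalog"
html = requests.get(url, headers=headers, timeout=10).text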
Summary
We learned to use the requests library to fetch a page's source code and etree to parse it. The code also relies on copying an element's XPath from the browser; XPath parsing itself will be covered in the next article.