
Web Scraping


Today's topics

1. Advanced usage of requests

2. Building a proxy pool

3. Scraping a video site

4. Scraping news

1. Advanced usage of requests

The data returned from an HTTP request may be in XML format or in JSON format; for JSON, requests can parse the body directly with res.json().

import requests

data = {
    'cname': '',
    'pid': '',
    'keyword': '500',
    'pageIndex': 1,
    'pageSize': 10,
}
res = requests.post('http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword', data=data)
# print(res.text)  # a JSON-formatted string; paste it into json.cn to inspect it
print(type(res.json()))  # parsed into a Python object (here, a dict)

SSL certificate verification (for reference)

HTTP: plaintext transmission
HTTPS: HTTP + SSL/TLS
	HTTP plus SSL/TLS adds an encryption layer on top of HTTP. It is safer than plain HTTP: it prevents data from being stolen or tampered with in transit and ensures data integrity.
    Reference: https://zhuanlan.zhihu.com/p/561907474
        
Whenever you hit a certificate error (ssl xxx), there are three options:
	1. Skip certificate verification
    import requests
    response = requests.get('https://www.12306.cn', verify=False)  # skips verification; emits a warning but returns 200
    print(response.status_code)

    2. Skip verification and silence the warning
    import requests
    from requests.packages import urllib3
    urllib3.disable_warnings()  # suppress the InsecureRequestWarning
    response = requests.get('https://www.12306.cn', verify=False)
    print(response.status_code)

    3. Supply a certificate manually (for reference)
    import requests
    response = requests.get('https://www.12306.cn',
                            cert=('/path/server.crt',
                                  '/path/key'))
    print(response.status_code)

Using proxies (important)

If the crawler always goes out through its own IP address, that IP is likely to get banned, after which the site can no longer be reached.

The fix is to send requests through proxy IPs.
Proxies come in two flavors: paid, and free (unstable).

# res = requests.post('https://www.cnblogs.com', proxies={'http': 'host:port'})
# res = requests.post('https://www.cnblogs.com', proxies={'http': '27.79.236.66:4001'})
# note: requests selects the proxy by URL scheme, so an https:// URL is only routed through an 'https' proxy entry
res = requests.post('https://www.cnblogs.com', proxies={'http': '60.167.91.34:33080'})
print(res.status_code)

High-anonymity vs. transparent proxies
	High-anonymity: the server cannot see the real client's IP address.
    Transparent: the server can see the real client's IP address.

    How does the backend get the real client IP?
    	The HTTP request carries a header, X-Forwarded-For: client1, proxy1, proxy2, proxy3
        Reading X-Forwarded-For therefore yields the real IP of the requester.
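A minimal sketch of reading that header on the server side, in a Django view like the one later in this post (the fallback logic is an addition, not from the original):

from django.http import HttpResponse

def index(request):
    # X-Forwarded-For carries a comma-separated chain: client, proxy1, proxy2, ...
    xff = request.META.get('HTTP_X_FORWARDED_FOR')
    if xff:
        ip = xff.split(',')[0].strip()  # the first entry is the original client
    else:
        ip = request.META.get('REMOTE_ADDR')  # no proxy in front of the server
    return HttpResponse(ip)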

Timeout settings

import requests
response = requests.get('https://www.baidu.com', timeout=0.0001)  # a timeout this small will raise a Timeout exception
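requests also accepts a (connect, read) tuple so the two phases can be bounded separately; the numbers here are illustrative:

import requests
# 3 seconds to establish the connection, 7 seconds to read the response
response = requests.get('https://www.baidu.com', timeout=(3, 7))
print(response.status_code)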

Exception handling

import requests
from requests.exceptions import *  # browse requests.exceptions to see the available exception types

try:
    r = requests.get('http://www.baidu.com', timeout=0.00001)
except ReadTimeout:
    print('===:')
# except ConnectionError:  # network unreachable
#     print('-----')
# except Timeout:
#     print('aaaaa')

except RequestException:  # base class: catches any requests error
    print('Error')

Uploading files

import requests
files = {'file': open('美女.png', 'rb')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)
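requests also accepts a (filename, fileobj, content_type) tuple when you want to control the filename and MIME type the server sees; this variant is an addition, not part of the original post:

import requests
# explicit filename and content type for the uploaded part
files = {'file': ('美女.png', open('美女.png', 'rb'), 'image/png')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.json()['files'].keys())  # httpbin echoes uploaded files back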

2. Building a proxy pool

requests can send every request through a proxy, but where do the proxies come from?
	Either the company pays for them,
    or you build a free proxy pool yourself: https://github.com/jhao104/proxy_pool
        written in Python: a crawler plus a Flask API
        architecture: see the diagram below
        
Setup steps:
	1. git clone https://github.com/jhao104/proxy_pool.git
    2. Open the project in PyCharm.
    3. Install the dependencies: pip install -r requirements.txt
    4. Edit the config file (only the Redis address needs changing):
        HOST = "0.0.0.0"
        PORT = 5010
        DB_CONN = 'redis://127.0.0.1:6379/0'
        PROXY_FETCHER  # which free-proxy sites to crawl
   	5. Start the crawler:
    python proxyPool.py schedule
	6. Start the API server:
    python proxyPool.py server

    7. Fetch a random free proxy:
    visit http://127.0.0.1:5010/get/ in the browser
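Besides /get/, the project's README also documents /pop/ (fetch a proxy and remove it), /all/ (list the pool) and /count/ (pool size); endpoints may vary by version, so check the README of the copy you cloned. A quick sanity check, assuming the server runs on the default port:

import requests
print(requests.get('http://127.0.0.1:5010/get/').json())    # one random proxy
print(requests.get('http://127.0.0.1:5010/count/').json())  # pool size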
        
Sending a request through a random proxy:

import requests
from requests.packages import urllib3
urllib3.disable_warnings()  # suppress the InsecureRequestWarning
# fetch a proxy from the pool
res = requests.get('http://127.0.0.1:5010/get/').json()
proxies = {}
if res['https']:
    proxies['https'] = res['proxy']
else:
    proxies['http'] = res['proxy']
print(proxies)
res = requests.post('https://www.cnblogs.com', proxies=proxies, verify=False)
# res = requests.post('https://www.cnblogs.com')
print(res)

(proxy_pool architecture diagram)

Getting the client's IP in the Django backend

Build a Django backend with an index route that simply returns the visitor's IP.
Django code (don't forget to adjust the settings file):
Route:
path('', index),
View:
def index(request):
    ip = request.META.get('REMOTE_ADDR')
    print('client ip:', ip)
    return HttpResponse(ip)

Test client:

import requests
from requests.packages import urllib3
urllib3.disable_warnings()  # suppress the InsecureRequestWarning
# fetch a proxy from the pool
res = requests.get('http://127.0.0.1:5010/get/').json()
proxies = {}
if res['https']:
    proxies['https'] = res['proxy']
else:
    proxies['http'] = res['proxy']
print(proxies)
res = requests.get('http://101.43.19.239/', proxies=proxies, verify=False)
print(res.text)

A quick concurrency test against the same endpoint (note: an unbounded loop that spawns ten million threads will exhaust system resources; see the bounded sketch below):

from threading import Thread
import requests

def task():
    res = requests.get('http://101.43.19.239/')
    print(res.text)

for i in range(10000000):
    t = Thread(target=task)
    t.start()
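A bounded alternative (a sketch, not from the original post) uses a thread pool from the standard library so the number of live threads stays fixed:

from concurrent.futures import ThreadPoolExecutor
import requests

def task():
    res = requests.get('http://101.43.19.239/')
    print(res.status_code)

# at most 20 worker threads, 100 requests in total (both numbers are illustrative)
with ThreadPoolExecutor(max_workers=20) as pool:
    for i in range(100):
        pool.submit(task)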

3. Scraping a video site

import requests
import re

res = requests.get('https://www.pearvideo.com/category_loading.jsp?reqType=5&categoryId=1&start=0')
# print(res.text)
# parse out the links to the individual video pages
video_list = re.findall('<a href="(.*?)" class="vervideo-lilink actplay">', res.text)
# print(video_list)
for i in video_list:
    # i = 'video_1212452'
    video_id = i.split('_')[-1]
    real_url = 'https://www.pearvideo.com/' + i
    # print('video page url:', real_url)
    headers = {
        # anti-leech check: videoStatus.jsp only answers if Referer points at the video page
        'Referer': 'https://www.pearvideo.com/video_%s' % video_id
    }
    res1 = requests.get('https://www.pearvideo.com/videoStatus.jsp?contId=%s&mrd=0.29636538326105044' % video_id,
                        headers=headers).json()
    # print(res1["videoInfo"]['videos']['srcUrl'])
    mp4_url = res1["videoInfo"]['videos']['srcUrl']
    # the returned srcUrl embeds a timestamp; swap it for 'cont-<video_id>' to get the real file
    mp4_url = mp4_url.replace(mp4_url.split('/')[-1].split('-')[0], 'cont-%s' % video_id)
    print(mp4_url)
    res2 = requests.get(mp4_url)
    with open('./video/%s.mp4' % video_id, 'wb') as f:
        for line in res2.iter_content():
            f.write(line)

The anti-leech check in isolation: videoStatus.jsp only answers when the Referer header points at the corresponding video page.

headers = {
    'Referer': 'https://www.pearvideo.com/video_1212452'
}
res = requests.get('https://www.pearvideo.com/videoStatus.jsp?contId=1212452&mrd=0.29636538326105044', headers=headers)
print(res.text)

srcUrl as returned (contains a timestamp): https://video.pearvideo.com/mp4/short/20171204/1678938313577-11212458-hd.mp4
real playable address (cont-<video_id>):   https://video.pearvideo.com/mp4/short/20171204/cont-1212452-11212458-hd.mp4
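A standalone run of the rewrite performed inside the loop above, using the post's example IDs:

fake = 'https://video.pearvideo.com/mp4/short/20171204/1678938313577-11212458-hd.mp4'
video_id = '1212452'
# replace the leading timestamp segment with 'cont-<video_id>'
real = fake.replace(fake.split('/')[-1].split('-')[0], 'cont-%s' % video_id)
print(real)  # https://video.pearvideo.com/mp4/short/20171204/cont-1212452-11212458-hd.mp4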

4. Scraping news

import requests
# pip install beautifulsoup4   an HTML/XML parsing library
from bs4 import BeautifulSoup

res = requests.get('https://www.autohome.com.cn/all/1/#liststart')
# print(res.text)
# first argument: the text to parse (a str)
# second argument: the parser; html.parser is built in, lxml is third-party and must be installed separately
soup = BeautifulSoup(res.text, 'html.parser')
# find all ul tags whose class is 'article'
ul_list = soup.find_all(name='ul', class_='article')
for ul in ul_list:
    li_list = ul.find_all(name='li')
    # print(len(li_list))
    for li in li_list:
        h3 = li.find(name='h3')
        if h3:  # a li without an h3 is an ad
            title = h3.text
            url = 'https:' + li.find('a').attrs['href']
            desc = li.find('p').text
            img = li.find(name='img').attrs['src']
            print('''
            news title:   %s
            news link:    %s
            news summary: %s
            news image:   %s
            ''' % (title, url, desc, img))

Exercise: download all the images locally as well, and save the scraped data into MySQL with pymysql (remember to commit); a sketch of the MySQL part follows.
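A minimal sketch meant to slot into the inner loop above; the database name, table schema and credentials are all illustrative assumptions:

import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                       password='123', database='news')  # credentials are assumptions
cursor = conn.cursor()
# assumes: create table article(title varchar(255), url varchar(255), summary text, img varchar(255))
cursor.execute(
    'insert into article (title, url, summary, img) values (%s, %s, %s, %s)',
    (title, url, desc, img),  # the variables bound in the scraping loop above
)
conn.commit()  # pymysql does not autocommit; without commit() nothing is persisted
cursor.close()
conn.close()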
