Trick 1: Sending HTTP requests with the requests library
The requests library is Python's go-to tool for HTTP; it makes sending requests remarkably simple.
import requests

# Send a GET request
response = requests.get('https://api.example.com/data')

# Check whether the request succeeded
if response.status_code == 200:
    print("Request succeeded!")
    data = response.json()  # Parse the response body as JSON
    print(data)
else:
    print(f"Request failed, status code: {response.status_code}")
Trick 2: Parsing HTML documents
When we need to scrape data from a web page, we usually have to parse its HTML. This is where the BeautifulSoup library comes in.
from bs4 import BeautifulSoup
import requests

url = 'https://example.com'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

# Find all headings (here we look for <h1> tags)
titles = soup.find_all('h1')
for title in titles:
    print(title.get_text())
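Once the document is parsed, BeautifulSoup offers several ways to pull data out of it. The sketch below uses a small inline HTML string so it runs without a network request; the tag names and the CSS selector are assumptions for illustration, not taken from the page above.

from bs4 import BeautifulSoup

# A tiny inline document so the example is self-contained
html = '<div class="article"><h2>First</h2><a href="/a">A</a><a href="/b">B</a></div>'
soup = BeautifulSoup(html, 'html.parser')

# Extract the href attribute from every link
for link in soup.find_all('a'):
    print(link.get('href'))

# CSS selectors are also supported via select(); '.article h2' is a hypothetical selector
for heading in soup.select('.article h2'):
    print(heading.get_text(strip=True))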
From: https://blog.csdn.net/wjianwei666/article/details/145131824