1. Cause: SSL certificate error
The error is caused by opening too many HTTP connections without closing them.
After some digging, the explanation is as follows:
the number of HTTP connections exceeds the maximum limit; connections are keep-alive by default, so the server ends up holding too many open connections and can no longer accept new ones.
Two other common triggers:
1. The IP has been blocked.
2. The program sends requests too quickly.
2. Solutions
(1) Slow down the request rate with time.sleep()
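A minimal sketch of throttling, assuming you loop over a list of URLs (the urls list and the one-second pause are placeholders, not from the original code):
import time
import requests

urls = ['https://example.com/a.jpg', 'https://example.com/b.jpg']  # placeholder URLs
for url in urls:
    resp = requests.get(url, stream=True, timeout=(5, 5))
    # ... save or process resp here ...
    resp.close()
    time.sleep(1)  # pause between requests so connections are not opened in a burst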
(2) Disable SSL verification with verify=False (this skips certificate checking, so only use it for sources you trust)
response = requests.get(fpath_or_url, headers=headers, stream=True, verify=False)
(3) requests uses keep-alive by default, so connections may not be released; add the header headers={'Connection': 'close'}
from requests.adapters import HTTPAdapter
import requests

headers = {'Connection': 'close'}  # ask the server to close the connection after each request
sess = requests.Session()
sess.mount('http://', HTTPAdapter(max_retries=3))
sess.mount('https://', HTTPAdapter(max_retries=3))
sess.keep_alive = False
text = sess.get(self.target_img_url, headers=headers, stream=True, verify=False, timeout=(5, 5))
with open(img_files_path, 'wb') as file:
    for i in text.iter_content(1024 * 10):
        file.write(i)
text.close()
(4) Raise the retry count: requests.adapters.DEFAULT_RETRIES = 5
from requests.adapters import HTTPAdapter
import requests

requests.adapters.DEFAULT_RETRIES = 5  # commonly suggested tweak to raise the default retry count

try:
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36',
    }
    sess = requests.Session()
    sess.mount('http://', HTTPAdapter(max_retries=3))   # explicit per-adapter retries
    sess.mount('https://', HTTPAdapter(max_retries=3))
    sess.keep_alive = False
    text = sess.get(self.target_img_url, headers=headers, stream=True, verify=False, timeout=(5, 5))
    with open(img_files_path, 'wb') as file:
        for i in text.iter_content(1024 * 10):
            file.write(i)
    text.close()
except Exception as e:
    print(e, self.target_img_url)
(5) Other: suppress the warning messages
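With verify=False, urllib3 emits an InsecureRequestWarning on every request; a minimal sketch for silencing it (not part of the original code) is:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)  # hide the verify=False warnings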
3. Reference
Python crawler: HTTPSConnectionPool(host='z.jd.com', port=443)