Python - aiohttp requests continuously time out
I have a Python program that uses aiohttp and ElementTree to fetch data from a website. The code below is a segment of a Discord chat bot hosted on a Raspberry Pi. The function works well most of the time, but after the bot has been on for a few days it begins to bog down and always times out. Restarting the program doesn't fix the issue; only rebooting the Pi seems to solve the problem for a while. I know it's not a lot to go on, but is there an obvious issue with this segment of code that could cause this, or does the problem lie somewhere else?
import lxml.etree as ET
import aiohttp, async_timeout

...

async with aiohttp.ClientSession() as session:
    try:
        with async_timeout.timeout(5):
            async with session.get('https://example.com', params=params, headers=headers) as resp:
                if resp.status == 200:
                    root = ET.fromstring(await resp.text(), ET.HTMLParser())
                    # Do stuff with root
                else:
                    print("Error: {}".format(resp.status))  # resp has no .response attribute
    except Exception as e:
        print("Timeout error {}".format(e))
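As an aside (not part of the original question): aiohttp 3.x ships its own `aiohttp.ClientTimeout`, which replaces the separate `async_timeout` context manager here, and a single `ClientSession` should be created once and reused for the bot's whole lifetime rather than per request. A minimal sketch of that shape, assuming aiohttp 3.x; the URL is illustrative:

```python
import asyncio

import aiohttp  # third-party: pip install aiohttp

# aiohttp 3.x has built-in timeout support, so async_timeout is not needed.
TIMEOUT = aiohttp.ClientTimeout(total=5)  # 5 s budget for the whole request

async def fetch_text(session, url, params=None, headers=None):
    """Fetch a page body as text, or return None on a non-200 response."""
    async with session.get(url, params=params, headers=headers) as resp:
        if resp.status != 200:
            print("Error: HTTP {} {}".format(resp.status, resp.reason))
            return None
        return await resp.text()

async def main():
    # Create ONE session for the application's lifetime and reuse it;
    # opening a fresh session per request churns connection pools.
    async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
        text = await fetch_text(session, "https://example.com")
        # parse `text` with lxml here, as in the question
```

This doesn't explain the few-days degradation by itself, but it removes two common sources of slow resource build-up in long-running aiohttp bots.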
Source: https://stackoverflow.com/questions/47215072
Updated: 2020-02-04 11:44
Accepted answer
Perhaps a memory leak somewhere is slowly using up system memory; once it is full, everything becomes very slow because swap is used for memory allocation, and timeouts occur.
However, as Andrew says, that can't be a problem with the Python script alone, or it would be fixed by restarting the script.
Monitor system memory and go from there.
Answered 2017-11-12
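The monitoring advice above can be followed with nothing but the standard library on Linux (which covers the Raspberry Pi): /proc/meminfo exposes the counters needed to spot shrinking available memory and growing swap use. A rough sketch, not part of the original answer; field names are Linux-specific:

```python
# Sketch: read Linux memory counters to check whether the system is
# swapping before blaming the network code. Illustrative only.

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a {field_name: value_in_kB} dict."""
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # first token is the value
    return info

def memory_pressure(info):
    """Return (available_mib, swap_used_mib) as rough health indicators."""
    available_mib = info.get("MemAvailable", info.get("MemFree", 0)) // 1024
    swap_used_mib = (info.get("SwapTotal", 0) - info.get("SwapFree", 0)) // 1024
    return available_mib, swap_used_mib

if __name__ == "__main__":
    avail, swap = memory_pressure(read_meminfo())
    print("available: {} MiB, swap in use: {} MiB".format(avail, swap))
```

Logging these two numbers periodically from the bot (or from a cron job) over a few days should show whether memory exhaustion lines up with the timeouts.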