Replies: 24 comments 3 replies
-
Hi @PLNech, I am developing my own API using FastAPI and ran into the same "problem", as I am trying to add a global timeout to all my requests. I am still new to FastAPI, but from what I understand, I believe the "FastAPI" way to do this would be to use a middleware, since middlewares are designed to run on every request by nature. While searching for how to do so, I found this. I am going to implement both your solution and the middleware-based one and see which one I prefer and which works best. Also note that there seems to be a problem with starlette 0.13.3 and higher, so keep that in mind. And if you have found a workaround by now, I am more than interested. Hope this helps you a bit!
-
Hi @ZionStage, thanks for your message! I haven't found a workaround for now. Looking forward to continuing this conversation with you as we move forward on this topic :)
-
Hey @PLNech I have implemented and tested the middleware and it seems to be working fine for me. Here is my code:

```python
import asyncio
import random
import time

import pytest
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from httpx import AsyncClient
from starlette.status import HTTP_504_GATEWAY_TIMEOUT

REQUEST_TIMEOUT_ERROR = 1  # Threshold

app = FastAPI()  # Fake app


# Creating a test path
@app.get("/test_path")
async def route_for_test(sleep_time: float) -> None:
    await asyncio.sleep(sleep_time)


# Adding a middleware returning a 504 error if the request processing time is above a certain threshold
@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        start_time = time.time()
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_ERROR)
    except asyncio.TimeoutError:
        process_time = time.time() - start_time
        return JSONResponse({'detail': 'Request processing time exceeded limit',
                             'processing_time': process_time},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)


# Testing whether or not the middleware triggers
@pytest.mark.asyncio
async def test_504_error_triggers():
    # Creating an asynchronous client to test our asynchronous function
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/test_path?sleep_time=3")
    content = response.json()
    assert response.status_code == HTTP_504_GATEWAY_TIMEOUT
    assert content['processing_time'] < 1.1


# Testing the middleware's consistency for requests whose processing time is close to the threshold
@pytest.mark.asyncio
async def test_504_error_consistency():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        errors = 0
        sleep_time = REQUEST_TIMEOUT_ERROR * 0.9
        for i in range(100):
            response = await ac.get("/test_path?sleep_time={}".format(sleep_time))
            if response.status_code == HTTP_504_GATEWAY_TIMEOUT:
                errors += 1
    assert errors == 0


# Testing the middleware's precision,
# i.e. testing whether it triggers when it should not and vice versa
@pytest.mark.asyncio
async def test_504_error_precision():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        should_trigger = []
        should_pass = []
        have_triggered = []
        have_passed = []
        for i in range(200):
            sleep_time = 2 * REQUEST_TIMEOUT_ERROR * random.random()
            if sleep_time < 1.1:
                should_pass.append(i)
            else:
                should_trigger.append(i)
            response = await ac.get("/test_path?sleep_time={}".format(sleep_time))
            if response.status_code == HTTP_504_GATEWAY_TIMEOUT:
                have_triggered.append(i)
            else:
                have_passed.append(i)
    assert should_trigger == have_triggered
```

I created three tests: the first is designed to check whether or not the middleware actually does its job, and the other two probe its consistency and precision around the threshold. As far as I am concerned, the first two tests passed without a problem.

This is the issue mentioned in the thread. I'll downgrade to starlette 0.13.2 and see if the tests pass. I might have made some mistakes or overlooked some things, so if you ever have the chance to do some tests on your end, let me know. Cheers!

Note:
-
@PLNech have you tried changing the timeout settings for gunicorn? By default it times out after 30 seconds, but you can override the settings: https://docs.gunicorn.org/en/latest/settings.html#timeout
-
@ZionStage: thanks for sharing your implementation, this looks promising! I'll make some room in our backlog to give it a try in our next sprint and will let you know how it goes :)
-
@thomas-maschler: thanks for the advice. Unfortunately I've tried using Gunicorn's
-
Thanks for the discussion here everyone! Yes, indeed I think the solution would be with a middleware. About the failing tests from @ZionStage: I understand there are no guarantees about sub-second precision in async/await (in Python in general, I think). Either way, it would probably be impossible to expect absolute sub-second precision from anything going over the network. I would test only with integers to be sure. But anyway, I think that's pretty much the right approach. ✔️
-
Assuming the original need was handled, this will be automatically closed now. But feel free to add more comments or create new issues or PRs.
-
This is good for returning an error message to the user in case of a timeout, but is there a way to actually kill the request at the same time so it doesn't keep using resources?
-
Bumping this for @MasterScrat's question. Wondering the same thing.
-
Another bump for @MasterScrat's question.
-
@lionel-ovaert Raising the error once the time limit has been reached should stop any ongoing processing linked to the request, shouldn't it?
-
Expanding on the middleware from @ZionStage, if the router uses non-asyncio blocking functions, it might end up missing the timeout entirely:

```python
import asyncio
import time

import pytest
import requests
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from httpx import AsyncClient
from starlette.status import HTTP_504_GATEWAY_TIMEOUT

REQUEST_TIMEOUT_ERROR = 1  # Threshold

app = FastAPI()  # Fake app


# Creating a test path that blocks the event loop with a synchronous HTTP call
@app.get("/test_path")
async def route_for_test(sleep_time: float) -> None:
    requests.get('https://i575rbl2mc.execute-api.us-east-1.amazonaws.com/sleep?time=3')
    return JSONResponse({}, status_code=200)


# Adding a middleware returning a 504 error if the request processing time is above a certain threshold
@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        start_time = time.time()
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_ERROR)
    except asyncio.TimeoutError:
        process_time = time.time() - start_time
        return JSONResponse({'detail': 'Request processing time exceeded limit',
                             'processing_time': process_time},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)


# Testing whether or not the middleware triggers
@pytest.mark.asyncio
async def test_504_error_triggers():
    # Creating an asynchronous client to test our asynchronous function
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/test_path?sleep_time=3")
    content = response.json()
    assert response.status_code == HTTP_504_GATEWAY_TIMEOUT
    assert content['processing_time'] < 1.1
```

When running this, the request lasted the whole execution of the router, way more than the timeout set on the middleware, and it returned 200: it bypassed the middleware.

I'm posting here in the hope that somebody either (a) managed to get a good implementation of a request-timeout feature working, or (b) knows how to make this middleware work even in those situations.
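For what it's worth, `asyncio.wait_for` can only time out work that yields to the event loop: a blocking call such as `requests.get` never yields, so the timer never gets a chance to fire. A partial workaround, sketched below with `time.sleep` standing in for the blocking HTTP call, is to offload blocking work to a threadpool so the deadline can at least be enforced on the response (the thread itself keeps running in the background, so this answers on time without actually stopping the work):

```python
import asyncio
import time


async def route_body():
    loop = asyncio.get_running_loop()
    # Offload the blocking call (simulated with time.sleep) to a thread so the
    # event loop stays free and the timeout below can actually fire.
    return await loop.run_in_executor(None, time.sleep, 0.6)


async def main():
    try:
        await asyncio.wait_for(route_body(), timeout=0.1)
        return "completed"
    except asyncio.TimeoutError:
        return "timed out"  # the thread keeps sleeping, but we answer on time


print(asyncio.run(main()))  # → timed out
```

Note that FastAPI does something similar automatically when an endpoint is declared with plain `def` instead of `async def`: it runs the sync endpoint in a threadpool.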
-
Even if the router function contains async code it doesn't get interrupted/cancelled with this middleware solution.

```python
app = FastAPI()


@app.get("/long_running")
async def long_running():
    try:
        while True:
            print("Running...")
            await asyncio.sleep(1)
    except asyncio.CancelledError:  # This never happens :(
        print("Cancelled.")


@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        return await asyncio.wait_for(call_next(request), timeout=3)
    except asyncio.TimeoutError:
        return JSONResponse({'detail': 'Request processing time exceeded limit'}, 504)
```

@tiangolo shouldn't we hit
-
@LMalikov I got the same error. It looks like you need at least two middleware decorators in main.py, but it's super weird. For example,
-
Does anyone know why? It's super weird. Basically, you need to have two @app.middleware("http") decorators. Otherwise, the timeout exception won't work.
-
Basically, my problem is like this one: https://stackoverflow.com/questions/74132015/asyncio-wait-for-doesnt-time-out-as-expected
-
Same problem as what's noted in the Stack Overflow link above. The asyncio timeout is not respected.
-
I think this can be fixed by using Python 3.11 and asyncio.timeout_at instead of asyncio.wait_for.
-
Workaround: I've created a decorator to use on the endpoints where you want to return a 504 response:

Then you can use it in your endpoint:
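The snippet itself isn't shown above; a stdlib-only sketch of such a decorator might look like the following (the name `timeout_after` is hypothetical, and in a real FastAPI endpoint you would translate the `asyncio.TimeoutError` into an `HTTPException(status_code=504)`):

```python
import asyncio
import functools


def timeout_after(seconds: float):
    """Hypothetical decorator: abort an async endpoint after `seconds`."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            # In FastAPI, catch asyncio.TimeoutError here and raise
            # HTTPException(status_code=504) instead of letting it propagate.
            return await asyncio.wait_for(func(*args, **kwargs), timeout=seconds)
        return wrapper
    return decorator


@timeout_after(0.1)
async def slow_endpoint():
    await asyncio.sleep(5)
    return "completed"


@timeout_after(1.0)
async def fast_endpoint():
    return "completed"
```

Since the decorator wraps each endpoint's coroutine directly, it avoids the middleware's `call_next` pitfalls, at the cost of having to apply it per endpoint.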
-
Thanks @Naish21, I really like your solution! One suggestion: replace signal.alarm(max_execution_time) with signal.setitimer(signal.ITIMER_REAL, max_execution_time). setitimer works with floating-point numbers, so it is possible to define a timeout of 300 ms, for example.
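A sketch of the setitimer variant (Unix-only, since it relies on SIGALRM, and only valid in the main thread, so it won't mix well with multi-threaded servers):

```python
import signal
import time


def _timeout_handler(signum, frame):
    raise TimeoutError("request processing time exceeded limit")


signal.signal(signal.SIGALRM, _timeout_handler)
signal.setitimer(signal.ITIMER_REAL, 0.3)  # 300 ms: setitimer accepts floats
try:
    time.sleep(1)  # simulated slow, blocking work
    result = "completed"
except TimeoutError:
    result = "timed out"
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # always disarm the timer

print(result)  # → timed out
```

Unlike the asyncio-based approaches, the signal interrupts even blocking code, which is exactly why it was attractive here.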
-
I am afraid this solution will not play well in a concurrent environment, since there is only one timer per process, whereas there will be many co-routines running concurrently within the same process.
-
So far, I have not seen any satisfactory solution to this problem. And the underlying problem seems to be that we might use functions on the router that are not friendly with
-
I am currently using something like this and it seems to work, but I'm not sure where it could go wrong...
-
First check
Description
Hi there, first of all many thanks for the work on FastAPI - this is now my goto framework for building Python-based REST APIs :)
My question is about adding a global timeout to any potential request served by the server. My use-case includes occasionally long loading times when I have to load a new model for a given user request, and instead of blocking for 30-50s (which would often timeout on the user side due to default connection timeouts), I would like to return a temporary error whenever any endpoint takes more than a given delay to complete.
Example
Today the only way I found to implement a timeout on every request is to wrap every endpoint method within a context manager like this one:
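The context manager itself is elided above; a hand-rolled sketch of what such a wrapper could look like follows (simplified: Python 3.11's `asyncio.timeout` handles the cancellation edge cases this version glosses over, and the name `time_limit` is hypothetical):

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def time_limit(seconds: float):
    """Hypothetical helper: cancel the enclosing block after `seconds`."""
    task = asyncio.current_task()
    handle = asyncio.get_running_loop().call_later(seconds, task.cancel)
    try:
        yield
    except asyncio.CancelledError:
        if hasattr(task, "uncancel"):  # Python 3.11+: undo our own cancel()
            task.uncancel()
        raise TimeoutError(f"block exceeded {seconds}s") from None
    finally:
        handle.cancel()


async def main():
    try:
        async with time_limit(0.1):
            await asyncio.sleep(1)  # stands in for slow model loading
        return "completed"
    except TimeoutError:
        return "timed out"


print(asyncio.run(main()))  # → timed out
```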
This is however quite cumbersome to add on every single function decorated as an endpoint.
Besides, it feels hacky: isn't there a better way to define app-level timeouts broadly, with a common handler, maybe akin to how ValidationErrors can be managed in a single global handler?

Environment
Additional context
I looked into Starlette's timeout support to see if that was handled at a lower level, but to no avail.