Breaking change in 0.23.* #706
I can confirm: we use a similar setup with two session-level fixtures (one to redefine the event loop, another for our own purposes). The tests don't work anymore; they fail with either "The future belongs to a different loop than the one specified as the loop argument" or "Event loop is closed".
The v0.23.0 changelog already mentions the breaking change: https://github.com/pytest-dev/pytest-asyncio/releases/tag/v0.23.0 I went through the same thing; the new way to do it can be seen in this PR: https://github.com/pytest-dev/pytest-asyncio/pull/662/files
It says "This release is backwards-compatible with v0.21. Changes are non-breaking, unless you upgrade from v0.22." |
@albertferras-vrf the changelog mentions the asyncio_event_loop mark removal; I think that only applies when upgrading from 0.22.
You are right, I forgot about that part.
That was the original intention, yes. I can reproduce the difference between v0.21.1 and v0.23, and I agree that this is a breaking change (by accident).

With regards to the migration and as a workaround: the fundamental idea of v0.23 is that each pytest scope (session, package, module, class, and function) provides a separate event loop. You can decide for each test in which loop it runs via the new scope argument of the asyncio marker.

@tkukushkin In your specific example you want to run the test in the same loop as the fixture:

```python
@pytest.mark.asyncio(scope="session")
async def test_something(some_async_fixture):
    assert asyncio.get_running_loop() is some_async_fixture
```

See also the part about asyncio event loops in the Concepts section of the docs.

@redtomato74 I'd like to hear more about your use case for two different event loops. I suggest you open a separate issue for this.
@seifertm I'd like to have only one loop for all fixtures and tests, without adding decorators to every test and fixture. We have thousands of tests in hundreds of services where all tests and fixtures share one loop, and that is crucial for them. Is there any workaround to emulate the old behaviour? This approach https://pytest-asyncio.readthedocs.io/en/v0.23.2/how-to-guides/run_session_tests_in_same_loop.html does not consider fixtures at all.
The linked how-to is supposed to make it easy to add the session-scoped asyncio marker to all tests in a suite.
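For reference, the marking that the how-to automates boils down to a `pytest_collection_modifyitems` hook in `conftest.py`; this sketch is adapted from that documentation page and assumes the v0.23 `scope` keyword on the asyncio marker:

```python
# conftest.py -- adapted from the linked "run session tests in same loop" how-to
import pytest
from pytest_asyncio import is_async_test


def pytest_collection_modifyitems(items):
    # Put every collected async test into the session-scoped event loop.
    session_scope_marker = pytest.mark.asyncio(scope="session")
    for item in items:
        if is_async_test(item):
            item.add_marker(session_scope_marker, append=False)
```

As noted above, this hook covers tests only; fixtures are not affected by it.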
I don't think this use case was considered during the development of v0.22 and v0.23. Can you explain why you need all tests and fixtures to run in the same loop?
Fixtures in v0.23 generally behave like tests and choose the event loop of the fixture scope. That means that if a fixture has session scope, it runs in the session-scoped event loop, while an unmarked test still runs in its own function-scoped loop. I cannot think of a workaround to switch to the old behaviour at the moment. I suggest pinning pytest-asyncio to <0.23 until this issue is fixed.
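A minimal sketch of how that mismatch plays out (hypothetical test names, v0.23 defaults assumed, untested): a task created by a session-scoped fixture belongs to the session loop, so awaiting it from an unmarked test, which runs in its own function-scoped loop, fails with a "different loop" error:

```python
import asyncio

import pytest
import pytest_asyncio


@pytest_asyncio.fixture(scope="session")
async def background_task():
    # v0.23: a session-scoped fixture runs in the session-scoped loop,
    # so this task is bound to that loop.
    task = asyncio.create_task(asyncio.sleep(3600))
    yield task
    task.cancel()


@pytest.mark.asyncio  # no scope argument: runs in a function-scoped loop
async def test_cancel(background_task):
    background_task.cancel()
    # Awaiting a task that belongs to the session loop from inside a
    # different loop raises a "different loop" RuntimeError.
    with pytest.raises(RuntimeError):
        await background_task
```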
Sorry, but I don't understand: I have not only session-scoped fixtures but also module-scoped fixtures, and they should use the same event loop as the session-scoped fixtures. Could you please describe how to achieve that?
We write black-box tests for microservices using pytest and pytest-asyncio. Some session-scoped fixtures, for example, create a database connection pool, which all tests can use to check database state. Another session-scoped fixture monitors the logs of subprocesses (instances of the application we test) in the background and captures those logs into a list, which tests can check. These subprocesses can be started by any fixture or test as an async context manager. And obviously, a subprocess (asyncio.subprocess) must be started in the same loop as the fixture that captures its logs. We have many more such examples.
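To make the pool example concrete, here is a minimal sketch of that pattern (asyncpg and the DSN are illustrative stand-ins, not from the original comment; any loop-bound client behaves the same way). The pool is bound to the loop the fixture ran in, so every test that uses it must be placed in that same loop:

```python
import asyncpg  # example driver; any loop-bound client behaves the same
import pytest
import pytest_asyncio


@pytest_asyncio.fixture(scope="session")
async def db_pool():
    # The pool's connections are bound to the loop running this fixture.
    pool = await asyncpg.create_pool("postgresql://localhost/test_db")
    yield pool
    await pool.close()


@pytest.mark.asyncio(scope="session")  # must match the fixture's loop
async def test_rows_written(db_pool):
    count = await db_pool.fetchval("SELECT count(*) FROM events")
    assert count == 0
```

If the test ran in a function-scoped loop instead, the pool's futures would belong to the session loop and the `await` would fail.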
Thanks for the explanation!
This is what I meant when I said your use case hasn't been considered during development. There's currently no way to control the event loop used by a fixture independently from the fixture scope. The v0.23 release will not work for your test suite. I suggest that you downgrade to v0.21.1. I'll think of a way to control the fixture scope independently of the event loop scope.
Yes, we have already downgraded to 0.21.1. I don't think it's gonna be a problem for us for a long time (at least until Python 3.13).
Thank you! Looking forward to the news.
After upgrading to v0.23, I get an error in teardown:

```python
import pytest
import pytest_asyncio
from httpx import AsyncClient  # assumed import; matches AsyncClient(app=..., base_url=...)


@pytest.fixture
def server():
    from main import _create_fastapy_server
    app = _create_fastapy_server()
    return app


@pytest_asyncio.fixture
async def client_async(server):
    app = server
    async with (
        app.router.lifespan_context(app),
        AsyncClient(app=app, base_url="http://testserver") as client
    ):
        yield client


@pytest.mark.asyncio
async def test_server(client_async):
    """Start - Stop"""
```

Output:
@seifertm while I deeply appreciate your work, which is crucial for all Python and pytest users dealing with async tests, I don't understand why you had to change the way the event loop is set up: the previous way worked just fine. Please consider restoring the previous behaviour. As for use cases, a few people provided their setups and needs in #657 (and this was my comment)
@ffissore Thanks for the kind words and for being so upfront. I'm generally open to restoring the previous behavior, if the existing problems with it can be solved in another way. Before I give a more extensive answer: do you take issue with the bugs and incompatibilities that were (involuntarily) introduced in v0.23? Or do you think the new approach is generally flawed?
I did not follow this issue closely, but I wanted to mention that 0.23 broke all our pipelines at work with some strange errors, and the same happened with one of the open source projects I maintain: https://github.com/FreeOpcUa/opcua-asyncio. The solution so far has been to revert to 0.21 everywhere.
I'm afraid I'm not in a position to judge the approach: I don't know enough about the previous design and the desired long-term design. IMHO the best solution is the one that makes the end developer write as little code as possible, and only code that is strictly related to what the dev wants to do.
It was good enough but a terrible DX, honestly. It took me quite a while to figure out the first time I saw this problem (and I thought it was a bug or very bad design). If the goal here is that we don't need these five lines anymore (see the snippet below), I'm all in and will just pin my dependency until we update our code. Let's not revert to something ugly if the new approach is better. Also, the semantic versioning meaning of 0.* versions is that they can introduce breaking changes at any time. If you don't want to hit these kinds of breaking changes in the first place, just pin your dependencies...
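For readers who haven't seen them, "these five lines" refers to the event_loop fixture override that pre-0.23 suites carried around to get a single session-wide loop (the same fixture appears in the migration diff later in this thread):

```python
import asyncio

import pytest


@pytest.fixture(scope="session")
def event_loop():
    """Force pytest-asyncio to run everything in one session-wide loop."""
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()
```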
I think the real problem is that the migration process is not clear. I would have expected that removing the event_loop fixture override would be enough. Here is a minimal reproducible example (with GitHub CI!) of a real-world application that I can't migrate to 0.23 so far: https://github.com/ramnes/pytest-asyncio-706 @seifertm Any hint on how something like this should be migrated? (PR welcome on that repository.) If it's not possible to migrate, then this would be the real issue: it wouldn't be a breaking change but a loss of functionality. Otherwise we'll probably have a few bits to add to the documentation here. :)
This worked for me. Thanks a lot!
#871 contains a preliminary patch for separating the caching and event loop scopes of async fixtures. This should address the main point of this issue. I also went through the comments again and tried to include additional requests, such as setting the default event loop scope for async fixtures via a config setting. @ramnes provided an example project which did not upgrade properly to pytest-asyncio v0.23. Using the patch, the migration effort was reduced to a 2-line change (see ramnes/pytest-asyncio-706#1). As I see it, the path forward is to finish the PR, especially documentation updates, and create a release candidate that users affected by this issue can try in their specific projects.
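Concretely, the separation in #871 lets a fixture declare its caching scope and its loop scope independently; a sketch using the keyword names from the patch (`scope` for caching, `loop_scope` for the event loop):

```python
import pytest_asyncio


# Cached once per session, and also runs in the session-scoped loop:
@pytest_asyncio.fixture(scope="session", loop_scope="session")
async def shared_resource():
    ...


# Re-created for every test (default caching), but still runs in the
# session-scoped loop, so it can safely touch session-loop objects:
@pytest_asyncio.fixture(loop_scope="session")
async def per_test_helper(shared_resource):
    ...
```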
#871 worked for me:
Edit: For clarification, I am doing this because it is required to use https://github.com/igortg/pytest-async-sqlalchemy
#871 seems to be working for me too, running everything with session scope. Rather than overriding the event_loop fixture, I now mark my async tests and fixtures with session scope.
FYI, decorating async fixtures and test functions is not required: use "auto mode": https://pytest-asyncio.readthedocs.io/en/v0.21.1/concepts.html#auto-mode
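For anyone unfamiliar with it, auto mode is a one-line configuration switch, after which every async test is treated as if it carried the asyncio marker:

```ini
# pytest.ini
[pytest]
asyncio_mode = auto
```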
Thanks for the early feedback! There's now a pre-release version (pytest-asyncio v0.24.0a0), which supports separate loop and caching scopes for async fixtures. The docs were updated accordingly. Any feedback is much appreciated!
@bmerry Agreed! A configuration option for default loop scope is tracked in #793
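For reference, such an option shipped as an ini setting in the v0.24 line; a sketch of a configuration that puts every async fixture into the session loop by default, assuming the `asyncio_default_fixture_loop_scope` name from the v0.24 docs:

```ini
# pytest.ini
[pytest]
asyncio_mode = auto
asyncio_default_fixture_loop_scope = session
```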
I ran into this issue today working on a Quart project. Pinning 0.21.2 and adding a session-scoped event_loop fixture fixed it for me.

I think this is a separate issue since it also happens on 0.21.2, so I will open another issue for it with a minimal repro this weekend, but I had a question related to this thread as well, so I'm posting it here for now. :)

This might be better as a discussion, so feel free to flag as off-topic, but I was just wondering what the motivation for all this explicit event loop management is? I'm working on a talk about asyncio and websockets for my local Python user group, and I would like to understand the way this library wraps async stuff for pytest a bit more. The explicit event loop management seems unnecessary to me, but I've never had to do any explicit event loop management in my projects, since I mostly use asyncio in applications where I can rely on async/await, so I'm pretty inexperienced with the lower-level APIs. Is it to isolate concurrent test setup/code? Wouldn't it be sufficient to simply collect all of the async test cases and fixtures and await them in a sync for-loop? Something like:

```python
import asyncio


async def one():
    await asyncio.sleep(1)
    print("one second")
    assert False


async def two():
    await asyncio.sleep(2)
    print("two seconds")
    assert False


async def three():
    await asyncio.sleep(3)  # "await" was missing in the original snippet
    print("three seconds")
    assert False


async def main():
    coroutines = [one(), two(), three()]
    for c in coroutines:
        fut = asyncio.create_task(c)
        try:
            await fut
        except AssertionError:
            print("test failed")
            # pytest failure reporting stuff.


asyncio.run(main())
```

I'm guessing that the reason for all of this has to do with integrating with pytest itself, since pytest does a lot of meta-programming for test collection and fixture setup and stuff, so it's not as simple as writing a sync wrapper function around the async test cases. But pytest doesn't do anything async, so it still seems like a lot of extra maintenance burden on the pytest-asyncio authors to manage event loop scopes when simply running the test functions in whatever loop asyncio.run provides would work. That would also let the user override the event loop if they need more control. I'm sure this wouldn't work, but something like:

```python
import asyncio
import functools


def use_custom_loop(custom_loop):
    def outer(func):
        @functools.wraps(func)
        async def inner(*args, **kwargs):
            current_loop = asyncio.get_event_loop()
            asyncio.set_event_loop(custom_loop)
            result = await func(*args, **kwargs)
            asyncio.set_event_loop(current_loop)
            return result
        return inner
    return outer
```

Feels like it should be usable.

I want to be clear that I'm not criticizing here. I really appreciate this library and all the work the maintainers put in, since I do tons of async code in Python and JS. I just want to understand the motivation behind this design, since from my grug-brain perspective, it seems like just using whatever loop Python gives us would be a lot simpler.
@GrammAcc Honestly, I don't think I did a very good job explaining why the changes to pytest-asyncio are needed. Therefore, I think your questions are very relevant to this issue, and I'll gladly try again to share the motivation behind the changes. After that, I'll share my thoughts on your code example.
Redefining the event_loop fixture to change its scope meant users had to copy its implementation into their own code, which coupled test suites to pytest-asyncio internals. That's why pytest-asyncio tried to come up with a new way of having event loops with different scopes without requiring a carbon copy of the event_loop fixture implementation.
Depending on the kind of testing you do, it may be desirable that each test case in your test suite is isolated from the other tests. Your specific example runs all test cases in a single asyncio event loop. That means they can potentially influence each other, for example through race conditions from background tasks or context variables. Pytest-asyncio, on the other hand, runs each async test case in an isolated asyncio event loop by default, in an effort to avoid these kinds of pitfalls (see the sketch below). The goal is definitely to have a smooth development experience where you don't have to call low-level asyncio functions. Currently, pytest-asyncio is in a bit of an in-between state, because it's trying to provide a migration path for all users towards a "no event_loop fixture overrides" state. Once this is done, a lot of internal refactorings should be unblocked, which should solve some longstanding bugs, such as #127. I hope this answers your questions! If not, feel free to follow up or to reach out otherwise :)
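A small sketch of the kind of interference meant here (hypothetical tests; a module-level dict stands in for any shared state): a fire-and-forget task leaked by one test can keep mutating shared state while later tests run, unless each test gets its own loop:

```python
import asyncio

import pytest

SHARED = {"counter": 0}


async def bump_forever():
    while True:
        SHARED["counter"] += 1
        await asyncio.sleep(0)


@pytest.mark.asyncio
async def test_leaks_a_task():
    # Fire-and-forget: the task is never awaited or cancelled.
    asyncio.create_task(bump_forever())
    assert SHARED["counter"] == 0


@pytest.mark.asyncio
async def test_innocent_bystander():
    # With a single shared loop, the leaked task keeps bumping the counter
    # whenever this test yields control. With per-test loops (the v0.23
    # default), the previous loop was closed and the task died with it.
    before = SHARED["counter"]
    await asyncio.sleep(0)
    assert SHARED["counter"] == before  # only reliable with isolated loops
```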
@seifertm Thank you for your detailed response! That makes perfect sense, and I had no idea upstream was deprecating those event loop APIs.

I definitely agree that managed event loops are better than the overridable fixture in this case, but one thing I will suggest is that I think it's okay to let the user deal with race conditions in the test suite on their own. :)

I've written automated tests in several different environments (Python, NodeJS, Elixir, C++), and one thing that always comes up is that some tests simply need to access some kind of shared state (a test db, for example). In a well-structured test suite, most tests will be completely isolated, but eventually we have to actually test the real behavior of the application, and that usually involves shared state and side effects. How the programmer resolves that is usually application-dependent. For example, in one Node project I worked on, I wrote some setup code to essentially create dynamic per-thread postgres databases (yes, an entire db for each thread lol), since it made sense for our environment: different dev machines and CI VMs had different resource limitations, the ORM and web framework we were using made it difficult to isolate the db that different services used, and the full test suite needed to run in <30 seconds or so. That's not a great way to handle race conditions in db tests, but mocking wasn't an option in this case, and even though the test runner isolated tests into their own threads, the db is still a giant blob of shared mutable state, so I had to work around it somehow.

FWIW, isolating tests into their own event loop also doesn't solve these problems with race conditions accessing a test db, and it seems to make it harder for the user to implement an application-appropriate solution as well, since the behavior of a shared fixture/resource (with locking or whatever you decide to use to synchronize it) gives unexpected results due to the event loop isolation.

I guess what I'm trying to articulate is that it's probably okay to focus on making the library easy to maintain and refactor, and not worry about test isolation as long as test cases don't run concurrently. At the end of the day, most async applications already have to deal with the potential for race conditions, so the application developer will usually have an idea of what will cause concurrency problems in the test suite, and if they don't, that's something they have to learn eventually. :)

Anyway, just my two cents on the topic. I appreciate everything you do here, and thank you again for taking the time to give such a detailed and thoughtful response!
I agree 100% with this point. While it's important to give beginners sane default behavior out of the box, it's also important to remember that beginners will usually write simpler code. As long as the simple use cases are taken care of, I think it's OK to give advanced users the responsibility for the rest, especially if it gives you an easier time maintaining the code and adding features. In my case, I'd really hope for an opt-in way to execute test cases concurrently sometime in the future (I can already think of tons of edge cases and bugs it may cause in my tests, but as said before, that's something I'm willing to take responsibility for solving in my own code). Thank you so much for the transparency and in general for the entire work being done here. My team and I have managed to save so much time by utilizing concurrency in our tests!
@GrammAcc @UltimateLobster Your input is much appreciated. The same sentiment was already echoed by ffissore in #706 (comment). Maybe pytest-asyncio is trying to do too much. I agree that software can be a pain if it tries to be smarter than the user. I don't think this is a discussion we should have right now, though. The goal of the releases since v0.21 was to make it easier for pytest-asyncio to evolve by "internalizing" the event_loop fixture. At the same time, pytest-asyncio wanted to provide a migration path for existing users. Admittedly, this introduced additional complexity, but the releases didn't try to add convenience features to outsmart the developer. Once the transition is complete, we should revisit this topic and see if and how we can simplify the functionality provided by the library. That said, I hope that v0.24 resolves this long-standing issue. Thanks to all the participants in the discussion. GitHub is currently the only channel for pytest-asyncio maintainers to understand what users really want.
Sorry for necrobumping this one. I just wanted to let the maintainers know that I mentioned a possible bug in #706 (comment), but it turned out there was no bug in pytest-asyncio. It was a bug in how I was connecting to different DBs when spawning servers in the test suite; I was just chasing a red herring with the event loop stuff. :) I was busy preparing for a talk at my local user group and forgot to report back here. Sorry for the confusion, and thanks again for such an awesome tool!
Hi @seifertm, unfortunately I don't think the changes in 0.24 fix my use case, which has been broken since 0.21.1.
Running the test gives me this error:

```
raise MultipleEventLoopsRequestedError(
```
@greemo I cannot comment on the error message, but your use case should be supported. Starting with v0.24, fixtures can specify different scopes for caching (scope) and for the event loop (loop_scope). Try adding loop_scope="session" to your async fixtures.
Using the example from the reproducible-example repository linked above, the code for 0.24 is:

```python
import asyncio
import random
from typing import Optional, List
import sys
import inspect

import pytest
import pytest_asyncio
from hypercorn.asyncio import serve
from hypercorn.config import Config
import uvloop
from fastapi import FastAPI
import httpx

app = FastAPI(
    version="1.0.0", title="pytest-dev/pytest-asyncio#706",
    servers=[{"url": "/", "description": "Default, relative server"}]
)


@app.get("/random", operation_id="getRandom", response_model=List[int])
def getRandom(limit: Optional[int] = 3) -> List[int]:
    return [random.randrange(0, 6) for _ in range(limit)]


@pytest.fixture(scope="session")
def config(unused_tcp_port_factory):
    c = Config()
    c.bind = [f"localhost:{unused_tcp_port_factory()}"]
    return c


@pytest_asyncio.fixture(loop_scope="session")
async def server(config):
    event_loop = asyncio.get_event_loop()
    try:
        sd = asyncio.Event()
        task = event_loop.create_task(serve(app, config, shutdown_trigger=sd.wait))
        yield config
    finally:
        sd.set()
        await task


@pytest.fixture(scope="session")
def event_loop_policy():
    return uvloop.EventLoopPolicy()


class Client:
    def __init__(self, url):
        self.c = httpx.AsyncClient()
        self.url = url

    async def get(self, path):
        print(f"{__file__}:{inspect.currentframe().f_lineno} {id(asyncio.get_event_loop())=}")
        return await self.c.get(f"{self.url}/{path}")


@pytest_asyncio.fixture(loop_scope="session")
async def client(server):
    c = Client(f"http://{server.bind[0]}")
    dd = await c.get("openapi.json")
    return c


@pytest.mark.asyncio(loop_scope="session")
async def test_getRandom(client):
    r = await client.get("random")
    assert r.status_code == 200
    assert len(r.json()) == 3


@pytest.mark.asyncio(loop_scope="session")
@pytest.mark.skipif(sys.version_info < (3, 9), reason="requires asyncio.to_thread")
async def test_to_thread(client, server):
    r = await asyncio.to_thread(httpx.get, f"{client.url}/openapi.json")
    assert r.status_code == 200
```

The delta from the 0.21 version:

```diff
--- my_pytest_test.py	2024-09-18 16:07:19.630570449 +0200
+++ my_pytest_asyncio.py	2024-10-03 14:07:50.705652032 +0200
@@ -2,6 +2,7 @@
 import random
 from typing import Optional, List
 import sys
+import inspect
 
 import pytest
 import pytest_asyncio
@@ -19,17 +20,11 @@
 )
 
-@app.get("/random", operation_id="getRandom", response_model=list[int])
-def getRandom(limit: int | None = 3) -> list[int]:
+@app.get("/random", operation_id="getRandom", response_model=List[int])
+def getRandom(limit: Optional[int] = 3) -> List[int]:
     return [random.randrange(0, 6) for _ in range(limit)]
 
-@pytest.fixture(scope="session")
-def event_loop(request):
-    loop = asyncio.get_event_loop_policy().new_event_loop()
-    yield loop
-    loop.close()
-
 @pytest.fixture(scope="session")
 def config(unused_tcp_port_factory):
@@ -38,10 +33,9 @@
     return c
 
-@pytest_asyncio.fixture(scope="session")
-async def server(event_loop, config):
-    policy = asyncio.get_event_loop_policy()
-    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
+@pytest_asyncio.fixture(loop_scope="session")
+async def server(config):
+    event_loop = asyncio.get_event_loop()
     try:
         sd = asyncio.Event()
         task = event_loop.create_task(serve(app, config, shutdown_trigger=sd.wait))
@@ -49,34 +43,38 @@
     finally:
         sd.set()
         await task
-    asyncio.set_event_loop_policy(policy)
+
+@pytest.fixture(scope="session")
+def event_loop_policy():
+    return uvloop.EventLoopPolicy()
+
 class Client:
     def __init__(self, url):
         self.c = httpx.AsyncClient()
         self.url = url
 
     async def get(self, path):
+        print(f"{__file__}:{inspect.currentframe().f_lineno} {id(asyncio.get_event_loop())=}")
         return await self.c.get(f"{self.url}/{path}")
 
-@pytest_asyncio.fixture(scope="session")
-async def client(event_loop, server):
+@pytest_asyncio.fixture(loop_scope="session")
+async def client(server):
     c = Client(f"http://{server.bind[0]}")
     dd = await c.get("openapi.json")
     return c
 
-@pytest.mark.asyncio
+@pytest.mark.asyncio(loop_scope="session")
 async def test_getRandom(client):
     r = await client.get("random")
     assert r.status_code == 200
     assert len(r.json()) == 3
 
-@pytest.mark.asyncio
+@pytest.mark.asyncio(loop_scope="session")
 @pytest.mark.skipif(sys.version_info < (3, 9), reason="requires asyncio.to_thread")
-async def test_to_thread(client):
+async def test_to_thread(client, server):
     r = await asyncio.to_thread(httpx.get, f"{client.url}/openapi.json")
     assert r.status_code == 200
```

No reason to be afraid of this any longer.
Hello! Something has been broken in the latest pytest-asyncio releases.
Consider this code:
pytest.ini:
This test passes with pytest-asyncio 0.21.1 but fails with 0.23.0. I am not sure whether that is intended, but if it is, IMHO it is a breaking change.