Drop memory resources in libcu++ #2860
Conversation
I assumed we will just fork the resources into …
Force-pushed from 654d058 to 01200f6 (Compare)
🟨 CI finished in 1h 05m: Pass: 98%/394 | Total: 1d 22h | Avg: 7m 02s | Max: 45m 10s | Hits: 93%/25617
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (394 jobs): 326 linux-amd64-cpu16, 28 linux-arm64-cpu16, 25 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
🟨 CI finished in 1h 00m: Pass: 99%/394 | Total: 1d 23h | Avg: 7m 17s | Max: 47m 05s | Hits: 87%/25617
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (394 jobs): 326 linux-amd64-cpu16, 28 linux-arm64-cpu16, 25 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
Force-pushed from ae515c7 to 7237b2a (Compare)
…em to be usable in stream order
Force-pushed from 34668a2 to c8085f2 (Compare)
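
The commit above is about making the resources usable in stream order. As background, here is a minimal sketch of stream-ordered allocation using the CUDA runtime's own cudaMallocAsync/cudaFreeAsync; it illustrates the ordering guarantee only and is not the cudax resource API introduced by this PR.

```cuda
// Illustration of stream-ordered allocation in general (not the cudax API
// itself): the allocation, the work that uses it, and the deallocation are
// all enqueued on the same stream, so no device-wide synchronization is
// required between them.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void fill(int* data, int value, int count)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < count)
  {
    data[i] = value;
  }
}

int main()
{
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  constexpr int count = 1 << 20;
  int* buffer         = nullptr;
  // Allocation is ordered on the stream ...
  cudaMallocAsync(reinterpret_cast<void**>(&buffer), count * sizeof(int), stream);
  // ... the kernel runs after it on the same stream ...
  fill<<<(count + 255) / 256, 256, 0, stream>>>(buffer, 42, count);
  // ... and the deallocation is ordered after the kernel completes.
  cudaFreeAsync(buffer, stream);

  cudaStreamSynchronize(stream);
  cudaStreamDestroy(stream);
  std::printf("done\n");
  return 0;
}
```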
🟨 CI finished in 1h 12m: Pass: 99%/396 | Total: 2d 06h | Avg: 8m 19s | Max: 56m 54s | Hits: 83%/22041
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (396 jobs): 327 linux-amd64-cpu16, 28 linux-arm64-cpu16, 26 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
🟩 CI finished in 2h 29m: Pass: 100%/396 | Total: 2d 07h | Avg: 8m 21s | Max: 56m 54s | Hits: 83%/22041
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (396 jobs): 327 linux-amd64-cpu16, 28 linux-arm64-cpu16, 26 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
Two resolved (now outdated) review threads on cudax/include/cuda/experimental/__container/uninitialized_async_buffer.cuh
Q: Wouldn't this break existing users?
A: All work on memory resources is behind an experimental feature flag, so it's possible to break that code.
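
For context, the experimental pieces of `<cuda/memory_resource>` are only available behind an opt-in macro, which is why breaking changes are considered acceptable here. Below is a minimal sketch of what opted-in code looks like; the macro name LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE reflects the opt-in libcu++ has used for this feature, so double-check it against your CCCL version.

```cuda
// Code that opts in to the experimental memory resource machinery. Anything
// made available only through this define carries no stability guarantee,
// which is why the resources can be moved from libcu++ to cudax.
#define LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE
#include <cuda/memory_resource>
```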
🟨 CI finished in 3h 00m: Pass: 99%/396 | Total: 1d 20h | Avg: 6m 44s | Max: 46m 28s | Hits: 93%/22041
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (396 jobs): 327 linux-amd64-cpu16, 28 linux-arm64-cpu16, 26 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
Genius!
I see a few projects using this feature; we may want to find a way to keep them informed.
I believe a lot of people use that flag because stream_ref is also behind it; I could not find any reference to …
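
For reference, cuda::stream_ref is a thin, non-owning wrapper around cudaStream_t, which is the kind of usage that keeps projects defining the flag. A minimal sketch follows, assuming the `<cuda/stream_ref>` header with a cudaStream_t constructor and a get() accessor as documented; launch_work is a hypothetical user function.

```cuda
#include <cuda/stream_ref>
#include <cuda_runtime.h>

// Hypothetical user-side interface: taking cuda::stream_ref instead of a raw
// cudaStream_t is the pattern that keeps projects on the feature flag.
void launch_work(cuda::stream_ref stream)
{
  // get() exposes the underlying cudaStream_t for raw runtime API calls.
  cudaStreamSynchronize(stream.get());
}

int main()
{
  cudaStream_t raw{};
  cudaStreamCreate(&raw);

  launch_work(cuda::stream_ref{raw}); // non-owning view of the raw stream

  cudaStreamDestroy(raw);
  return 0;
}
```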
🟩 CI finished in 15h 12m: Pass: 100%/396 | Total: 1d 20h | Avg: 6m 46s | Max: 46m 28s | Hits: 93%/22041
Modified projects: libcu++, CUDA Experimental.
Modifications in project or dependencies: libcu++, CUB, Thrust, CUDA Experimental, python, CCCL C Parallel Library, Catch2Helper.
Runner counts (396 jobs): 327 linux-amd64-cpu16, 28 linux-arm64-cpu16, 26 linux-amd64-gpu-v100-latest-1, 15 windows-amd64-cpu16.
This moves the memory resources from libcu++ to cudax and also makes the pinned and managed ones satisfy the async_resource concept.
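
As a rough illustration of what satisfying the async_resource concept entails: a resource needs synchronous allocate/deallocate, stream-ordered allocate_async/deallocate_async taking a cuda::stream_ref, and equality comparison. The sketch below checks a hand-written toy resource against cuda::mr::async_resource; toy_async_resource is invented for illustration, and the exact concept requirements should be verified against your CCCL version.

```cuda
// A toy resource sketching the surface the async_resource concept asks for.
// The cudax pinned and managed resources expose the same synchronous plus
// stream-ordered allocation interface after this PR.
#define LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE
#include <cuda/memory_resource>
#include <cuda/stream_ref>
#include <cuda_runtime.h>
#include <cstddef>

struct toy_async_resource
{
  void* allocate(std::size_t bytes, std::size_t /*alignment*/)
  {
    void* ptr = nullptr;
    cudaMalloc(&ptr, bytes);
    return ptr;
  }
  void deallocate(void* ptr, std::size_t /*bytes*/, std::size_t /*alignment*/)
  {
    cudaFree(ptr);
  }
  void* allocate_async(std::size_t bytes, std::size_t /*alignment*/, cuda::stream_ref stream)
  {
    void* ptr = nullptr;
    cudaMallocAsync(&ptr, bytes, stream.get());
    return ptr;
  }
  void deallocate_async(void* ptr, std::size_t /*bytes*/, std::size_t /*alignment*/, cuda::stream_ref stream)
  {
    cudaFreeAsync(ptr, stream.get());
  }
  bool operator==(const toy_async_resource&) const { return true; }
  bool operator!=(const toy_async_resource&) const { return false; }
};

static_assert(cuda::mr::async_resource<toy_async_resource>,
              "allocate/deallocate plus stream-ordered *_async make this an async_resource");
```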