Switch to cuda::std::span once available #332

Open
sleeepyjack opened this issue Jul 10, 2023 · 3 comments
Labels
P3: Backlog, type: feature request

Comments

@sleeepyjack
Collaborator

Our hash function implementations currently make use of the following function signature:

template <typename Extent>
constexpr result_type __host__ __device__ compute_bytes(std::byte const* data, Extent size) const noexcept

We should switch to cuda::std::span instead of raw pointers, once rapidsai/rapids-cmake#399 is resolved.
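For illustration, a rough sketch of what a span-based overload could look like, assuming `cuda::std::span` mirrors the `std::span` interface; the element type (`cuda::std::byte` vs. `std::byte`), the delegation target `compute_bytes_impl`, and the extent handling are placeholders, not the final cuCollections API:

```cpp
#include <cuda/std/cstddef>
#include <cuda/std/span>

// Hypothetical span-based overload; compute_bytes_impl stands in for the
// existing pointer/size implementation.
template <cuda::std::size_t N = cuda::std::dynamic_extent>
constexpr result_type __host__ __device__
compute_bytes(cuda::std::span<cuda::std::byte const, N> data) const noexcept
{
  // The pointer/size pair is recovered from the span, so the byte-wise
  // hashing logic itself would not need to change.
  return compute_bytes_impl(data.data(), data.size());
}
```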

@PointKernel
Member

Using cuda::std::span may lose the benefit of the static extent.

@sleeepyjack
Collaborator Author

Span supports a static extent via a second `size_t` template parameter, though it's not as elegant as our `extent` type. Since `compute_hash` is user-facing, it's better to require a proper span argument rather than a raw pointer and size.
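A minimal sketch of how the static extent is encoded in the span type, assuming `cuda::std::span` follows the `std::span` interface (the key-view aliases below are hypothetical, for illustration only):

```cpp
#include <cuda/std/cstddef>
#include <cuda/std/span>

// Static extent: the size is part of the type, known at compile time.
using fixed_key_view   = cuda::std::span<cuda::std::byte const, 8>;
// Dynamic extent (the default): the size is carried at run time.
using dynamic_key_view = cuda::std::span<cuda::std::byte const>;

static_assert(fixed_key_view::extent == 8);
static_assert(dynamic_key_view::extent == cuda::std::dynamic_extent);
```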

@PointKernel added the `type: feature request`, `P1: Should have`, and `P2: Nice to have` labels and removed the `P1: Should have` label on Jul 26, 2023
rapids-bot pushed a commit to rapidsai/rapids-cmake that referenced this issue on Oct 16, 2023
This PR separates out the libcudacxx update from #399. I am proposing to update only libcudacxx to 2.1.0 and leave Thrust/CUB pinned at 1.17.2 until all of RAPIDS is ready to update; then we can move forward with #399.

Separating the update for libcudacxx should allow RAPIDS to use some of the new features we want while giving more time to RAPIDS libraries to migrate to CCCL 2.1.0 (particularly for breaking changes in Thrust/CUB).

**Immediate benefits of bumping only libcudacxx to 2.1.0:**
- Enables the migration to Thrust/CUB 2.1.0 to be done more incrementally, because we could merge PRs using `cuda::proclaim_return_type` into cudf and other repos (see the sketch after this list), which would reduce the amount of unmerged code we're maintaining in the "testing PRs" while waiting for all RAPIDS repos to be ready for Thrust/CUB 2.1.0.
- Unblocks work in rmm (rapidsai/rmm#1095) and quite a few planned changes for cuCollections (such as NVIDIA/cuCollections#332, NVIDIA/cuCollections#331, NVIDIA/cuCollections#289)
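For context, a minimal sketch of `cuda::proclaim_return_type` usage (illustrative only, not taken from any of the PRs mentioned): it wraps a device lambda so that host-side code, such as a Thrust algorithm, can learn the lambda's return type without invoking it on the host.

```cpp
// Requires nvcc with --extended-lambda and libcudacxx/CCCL >= 2.1.0.
#include <cuda/functional>
#include <thrust/device_vector.h>
#include <thrust/transform.h>

int main() {
  thrust::device_vector<float> in(100, 1.0f);
  thrust::device_vector<float> out(100);

  // proclaim_return_type declares that the lambda returns float, which cannot
  // otherwise be deduced on the host for an extended __device__ lambda.
  thrust::transform(in.begin(), in.end(), out.begin(),
                    cuda::proclaim_return_type<float>(
                        [] __device__(float x) { return 2.0f * x; }));
}
```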

**Risk Assessment:**
This should be fairly low risk because libcudacxx 2.1.0 is similar to our current pinning of 1.9.1 -- the major version bump was meant to align with Thrust/CUB and isn't indicative of major breaking changes.

Authors:
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Robert Maynard (https://github.com/robertmaynard)
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #464
sleeepyjack pushed a commit that referenced this issue Oct 16, 2023
This updates cuCollections to rapids-cmake 23.12. This comes with
rapidsai/rapids-cmake#464, which updates
libcudacxx to 2.1.0. That should unblock several cuCollections issues
such as #332,
#331,
#289.
@jrhemstad removed this from CCCL on Nov 15, 2023
@PointKernel
Member

@sleeepyjack Is this still desired?

@PointKernel added the `P3: Backlog` label and removed the `P2: Nice to have` label on Oct 25, 2024