Hello,
I am struggling to find a way to parameterize a Triton kernel with a constexpr callable function. For instance, I have a function f that loads two tensors, applies an element-wise binary operation, and stores the result. I have several predefined binary operations, and I’d like to avoid duplicating the kernel code for each operation. However, the following approach results in an InternalTorchDynamoError: NotImplementedError: TritonKernelVariable().
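(The original snippet did not survive this excerpt; a minimal sketch of the pattern being described, with illustrative names like binary_kernel and add_op rather than the original code:)

```python
import triton
import triton.language as tl

@triton.jit
def add_op(x, y):
    return x + y

@triton.jit
def binary_kernel(x_ptr, y_ptr, out_ptr, n_elements,
                  binary_op: tl.constexpr, BLOCK_SIZE: tl.constexpr):
    # One generic kernel; the binary operation is a compile-time parameter.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # binary_op is resolved at compile time, so each op gets its own specialization.
    tl.store(out_ptr + offsets, binary_op(x, y), mask=mask)
```

To my understanding, recent Triton versions treat a @triton.jit function passed this way as a constexpr when the kernel is launched directly; the InternalTorchDynamoError above shows up when the kernel is launched under torch.compile, whose TritonKernelVariable appears not to handle callable kernel arguments.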
The error is understandable here, but I don’t see why a compose-on-the-fly approach doesn’t work either. In that approach, we generate kernel code specifically for the given binary_op function, yet it still results in an error.
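(Again, the original snippet is missing; a sketch of the source-templating idea, where make_binary_kernel and KERNEL_TEMPLATE are illustrative names. One pitfall worth ruling out with bare exec-based generation: @triton.jit reads the function body via inspect.getsource, which cannot see source that never existed in a real file, so the sketch writes the generated code to a temporary module before importing it:)

```python
import importlib.util
import os
import tempfile

KERNEL_TEMPLATE = """
import triton
import triton.language as tl

@triton.jit
def generated_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, {op_expr}, mask=mask)
"""

def make_binary_kernel(name, op_expr):
    # Specialize the template for one binary op and load it as a real module,
    # so that inspect.getsource sees valid source when @triton.jit runs.
    src = KERNEL_TEMPLATE.format(op_expr=op_expr)
    path = os.path.join(tempfile.mkdtemp(), f"{name}_kernel.py")
    with open(path, "w") as f:
        f.write(src)
    spec = importlib.util.spec_from_file_location(f"{name}_kernel", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.generated_kernel

mul_kernel = make_binary_kernel("mul", "x * y")
```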
(The amusing part is that it is only a warning: the code still executes, but the output tensor contains garbage values.)
Am I missing something, or is it currently impossible to metaprogram Triton kernels in order to eliminate repetitive code?
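(For reference, such metaprogramming does seem to work in standalone Triton; a direct launch of the first sketch, bypassing torch.compile and reusing binary_kernel and add_op from above, assuming a CUDA device:)

```python
import torch
import triton

x = torch.randn(1 << 20, device="cuda")
y = torch.randn(1 << 20, device="cuda")
out = torch.empty_like(x)

BLOCK_SIZE = 1024
grid = (triton.cdiv(x.numel(), BLOCK_SIZE),)
# Passing a different @triton.jit op here compiles a separate specialization.
binary_kernel[grid](x, y, out, x.numel(), add_op, BLOCK_SIZE=BLOCK_SIZE)
```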
Replies: 1 comment

Hi @PgLoLo, unfortunately, this GitHub repository is for Triton Inference Server, not OpenAI Triton.