Issues: ROCm/AMDMIGraphX
Fail in find_inner_broadcast due to preserve_output_layout
Labels: bug (Something isn't working)
#3649 opened Nov 20, 2024 by shivadbhavsar
Use select_module to compile single prefill/decode program for llama2
#3646 opened Nov 20, 2024 by turneram
Investigate memory failures when workspace size is set to 0 in compile hipblaslt pass
Labels: bug (Something isn't working)
#3622 opened Nov 14, 2024 by ahsan-ca
Quantized distillgpt2 accuracy issue when quant params are inputs
Labels: bug (Something isn't working)
#3612 opened Nov 11, 2024 by shivadbhavsar
UAI llama2 script recompiles for each prompt even with prefill approach
#3601 opened Nov 8, 2024 by turneram
Dangling quantizelinear from horizontal fusion, BERT and DistilGPT2
Labels: FP8 (issues related to FP8 implementation), INT8, Perf Improve
#3598 opened Nov 7, 2024 by CharlieL7
Missing constant propagation: Literal -> Multibroadcast -> Quantizelinear
Labels: FP8 (issues related to FP8 implementation), INT8, Perf Improve
#3597 opened Nov 7, 2024 by CharlieL7
GroupQueryAttention produces incorrect results when loaded from mxr
#3596 opened Nov 6, 2024 by turneram
Remove hipblaslt version check in hip_gemm_impl.cpp for OCP FP8 types
#3592 opened Nov 5, 2024 by CharlieL7
Improve fusions with dequantizelinear
Labels: bug (Something isn't working), high priority (a PR with high priority for review and merging)
#3551 opened Oct 24, 2024 by causten
[BUG] Runtime Error: No HIP GPUs Available when torch imported after migraphx
#3488 opened Sep 27, 2024 by richagadgil