
Question: How to enable GPU in eynollah? I don't see better performance with my GPU #123

Open
FabricioTeran opened this issue Mar 21, 2024 · 6 comments
Labels
GPU Everything to do with GPU, CUDA, cuDNN

Comments

@FabricioTeran

Hi, how can I enable GPU mode in eynollah? I have installed all the tools needed for CUDA, and running nvidia-smi works fine, but I don't see better performance from eynollah... My card is a GTX 950M with 480 CUDA cores; I have monitored its utilization, but it doesn't rise above 0%.

The versions of the tools are: Ubuntu 22.04, Python 3.9.13, TensorFlow 2.12, CUDA Toolkit 11.8, cuDNN 8.6, and my graphics card driver is version 520, which also supports CUDA 12.

  • nvidia-smi works fine (a minimal TensorFlow visibility check is sketched below)
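
A quick way to verify the setup outside eynollah is to ask TensorFlow directly whether it can see the GPU. This is a generic TensorFlow 2.x sketch (not eynollah-specific code), run in the same Python environment:

```python
# Generic TensorFlow 2.x sanity check (not eynollah-specific): confirms that
# the installed TensorFlow build can see the GPU through CUDA/cuDNN.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# An empty list here means TensorFlow was installed without GPU support or
# cannot locate the CUDA 11.8 / cuDNN 8.6 libraries at runtime.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

# Confirms the TensorFlow wheel itself was built with CUDA support.
print("Built with CUDA:", tf.test.is_built_with_cuda())
```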
@cneud
Member

cneud commented Mar 22, 2024

Thank you for the question. Your GPU/CUDA/cuDNN setup seems valid, and there is no other specific action needed to have Eynollah make use of the GPU. However, if I am not mistaken, the GTX 950M is a notebook GPU, and the speed gains in inference will likely be rather insignificant.

On the other hand, the Eynollah code is also not currently optimized to really leverage a GPU to its full extent. Several processing steps are currently implemented for CPU only and the GPU needs to wait for these, which is why nvidia-smi is probably only showing utilization for very short durations (at least in our case - see also #84). We hope to improve this over the course of the coming months, but as this requires considerable refactoring of the code base and possibly even the re-training of models, I'm afraid it will take time.
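
For anyone who wants to observe this locally, one rough approach is to enable TensorFlow's device placement logging; the following is a generic TensorFlow 2.x sketch under the assumption of a working CUDA setup, not part of the Eynollah code itself:

```python
# Generic TensorFlow 2.x sketch (not part of Eynollah): logs the device each
# operation is placed on, so CPU-only steps and GPU inference can be told apart.
import tensorflow as tf

# Must be enabled before any ops run; each op then logs a line such as
# "Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0".
tf.debugging.set_log_device_placement(True)

# A toy workload: with a working CUDA/cuDNN setup this matmul lands on the GPU,
# while ops without a GPU kernel fall back to /device:CPU:0.
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
print(tf.matmul(a, b).device)
```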

@cneud added the GPU (Everything to do with GPU, CUDA, cuDNN) label Mar 22, 2024
@FabricioTeran
Author

OK, thank you for the fast response. If you can do that refactoring, it would be very helpful for me to be able to contribute to some features (since I'm not from the AI field, but a full-stack developer).

@cneud
Member

cneud commented Mar 22, 2024

Thanks! We are planning to make a new release in the course of next week to fix some pressing issues regarding the integration of Eynollah in OCR-D, which should then also serve as a suitable basis for a more extensive refactoring process. Keep an eye on the most recently updated branches, and I will also try to keep you updated here in due course.

@jbarth-ubhd

I seem to remember having had similar experiences ... with a cluster GPU or our RTX 3090.

@cneud
Member

cneud commented Sep 20, 2024

@jbarth-ubhd Thanks for letting us know. I assume this was using Eynollah through its OCR-D interface?

We have been actively working on the codebase and there will be new releases shortly, including for OCR-D, with improvements, though likely not yet including code optimizations for GPU; this is something we plan to address in 2025.

@jbarth-ubhd

Yes, through the OCR-D interface, approx. 1-2 years ago.
