When I run `python ingest.py` I get this error:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 3.81 GiB of which 100.69 MiB is free. Process 25282 has 312.37 MiB memory in use. Including non-PyTorch memory, this process has 3.15 GiB memory in use. Of the allocated memory 3.07 GiB is allocated by PyTorch, and 10.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Is it possible to run it on my GPU at all, or what else can I do to avoid the error?
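For context, my understanding from the PyTorch docs is that the `max_split_size_mb` hint in the message is applied through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, and that it has to be set before torch initializes CUDA. A minimal sketch of what I think that would look like at the top of `ingest.py` (the value 128 is an arbitrary guess on my part, not something I have verified helps):

```python
import os

# Configure the CUDA caching allocator before torch touches the GPU;
# max_split_size_mb caps the size of allocator blocks to reduce fragmentation.
# 128 is a placeholder value, not a recommendation.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after the env var so the allocator picks it up
```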