Describe the bug
I encountered the following error when training a LLaVA-architecture LMM with the pipeline parallel size set to 2:

[rank0]:[E1010 11:50:05.315197062 ProcessGroupNCCL.cpp:565] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600013 milliseconds before timing out.
My goal is to train with the LLM's parameters frozen. The model is split across two pipeline stages (one per node): the first holds the vision encoder, the projector, and half of the LLM; the second holds the other half of the LLM. Because the LLM is frozen, the second stage ends up with an empty list of trainable parameters.
To avoid ValueError: optimizer got an empty parameter list, I skip the training step on that second stage.
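For context, here is a minimal sketch of what that skip looks like (simplified, with illustrative stand-in modules rather than the actual LLaVA training code):

import torch
from torch import nn

# Illustrative stand-in for the second pipeline stage, whose parameters
# are all frozen (in reality, the second half of the LLM).
stage = nn.Linear(16, 16)
for p in stage.parameters():
    p.requires_grad_(False)

# Only trainable parameters can be handed to the optimizer; on the second
# stage this list is empty, and constructing an optimizer from it raises
# "ValueError: optimizer got an empty parameter list".
trainable_params = [p for p in stage.parameters() if p.requires_grad]

if not trainable_params:
    # Second stage: skip the training loop entirely. This avoids the
    # ValueError, but this rank then never reaches the collectives that
    # rank 0 issues, which seems to be what triggers the timeout below.
    pass
else:
    optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)
    # ... normal training loop on the first stage ...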
But I got the following error:
/usr/bin/rm: cannot remove '/usr/local/cuda/compat/lib': Read-only file system
rm: cannot remove '/usr/local/cuda/compat/lib': Read-only file system
W1010 11:39:41.458000 140602427192448 torch/distributed/run.py:757]
W1010 11:39:41.458000 140602427192448 torch/distributed/run.py:757] *****************************************
W1010 11:39:41.458000 140602427192448 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1010 11:39:41.458000 140602427192448 torch/distributed/run.py:757] *****************************************
[rank1]:[W1010 11:39:59.888362044 init.cpp:767] Warning: nvfuser is no longer supported in torch script, use _jit_set_nvfuser_enabled is deprecated and a no-op (function operator())
[rank0]:[W1010 11:39:59.888371352 init.cpp:767] Warning: nvfuser is no longer supported in torch script, use _jit_set_nvfuser_enabled is deprecated and a no-op (function operator())
[rank0]:[E1010 11:50:05.315197062 ProcessGroupNCCL.cpp:565] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600013 milliseconds before timing out.
[rank0]:[E1010 11:50:06.635888164 ProcessGroupNCCL.cpp:1606] [PG 0 Rank 0] Timeout at NCCL work: 5, last enqueued NCCL work: 5, last completed NCCL work: 4.
[rank0]:[E1010 11:50:06.635906689 ProcessGroupNCCL.cpp:579] [Rank 0] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank0]:[E1010 11:50:06.635913192 ProcessGroupNCCL.cpp:585] [Rank 0] To avoid data inconsistency, we are taking the entire process down.
[rank0]:[E1010 11:50:06.637844201 ProcessGroupNCCL.cpp:1446] [PG 0 Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600013 milliseconds before timing out.
Exception raised from checkTimeout at /opt/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:567 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x99 (0x7f50c0b98969 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1e1 (0x7f505bea04e1 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x222 (0x7f505bea81a2 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10f (0x7f505bea9a0f in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xdc253 (0x7f50c06b0253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x7f50cc7acac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f50cc83e850 in /usr/lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
  what():  [PG 0 Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600013 milliseconds before timing out.
Exception raised from checkTimeout at /opt/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:567 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x99 (0x7f50c0b98969 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1e1 (0x7f505bea04e1 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x222 (0x7f505bea81a2 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10f (0x7f505bea9a0f in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xdc253 (0x7f50c06b0253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x7f50cc7acac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f50cc83e850 in /usr/lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at /opt/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1450 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x99 (0x7f50c0b98969 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10485fe (0x7f505bed05fe in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0xcbc925 (0x7f505bb44925 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #3: <unknown function> + 0xdc253 (0x7f50c06b0253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #4: <unknown function> + 0x94ac3 (0x7f50cc7acac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #5: <unknown function> + 0x126850 (0x7f50cc83e850 in /usr/lib/x86_64-linux-gnu/libc.so.6)
W1010 11:50:57.098000 140602427192448 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2952340 closing signal SIGTERM
E1010 11:50:57.468000 140602427192448 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 0 (pid: 2952339) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.4.0a0+07cecf4168.nv24.5', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Expected behavior
I'd like to train the LMM with the pipeline parallel size set to 2.
Ideally I'd like to keep skipping the training step on the latter node while avoiding the Watchdog caught collective operation timeout.
If there is another solution besides skipping the training step, I'd appreciate it as well; a rough idea I've been wondering about is sketched below.
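For example, one direction I have been considering (only a rough sketch with illustrative stand-ins, not code from any particular framework, and not verified in my pipeline-parallel setup): keep the frozen stage inside the training loop so every rank still reaches the same NCCL collectives, and simply make the optimizer step a no-op on that stage.

import torch
from torch import nn

# Illustrative stand-in for a stage that holds only frozen parameters.
stage = nn.Linear(16, 16)
for p in stage.parameters():
    p.requires_grad_(False)

trainable_params = [p for p in stage.parameters() if p.requires_grad]
# Create an optimizer only when there is something to optimize.
optimizer = torch.optim.AdamW(trainable_params, lr=1e-4) if trainable_params else None

# requires_grad=True mimics activations received from the previous stage,
# which must carry gradients back upstream during the pipeline backward.
batch = torch.randn(4, 16, requires_grad=True)

for step in range(3):
    out = stage(batch)         # forward still runs, so pipeline comm stays matched
    loss = out.sum()
    loss.backward()            # frozen params get no grads; activation grads still flow
    if optimizer is not None:  # no-op update on the frozen stage
        optimizer.step()
        optimizer.zero_grad()

Whether something like this fits the actual pipeline schedule depends on the training framework, so I would treat it only as a starting point.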