Hi, I have looked through the documentation, but I can't see any indication of speed benchmarks or recommended compute to achieve a given throughput.
Given that we run jobs through a scheduler that requires setting resource requests, could you shed any light on what you would consider sensible defaults to give a process for:
- the `--threads` argument
- CPUs/cores to give a job
- memory (overall or per-thread) to give a job
- what sort of runtime you might expect for a typical sample with these settings
Line 680 in ClairS' preprint gives you some figures about using ClairS on a whole genome. If you are distributing ClairS' job to multiple nodes by setting intervals, you will need to adjust `--chunk_size` accordingly. Say you set `--threads 32` for each 5 Mbp interval on a single computing node. The best chunk size is calculated as 5 Mbp / 32 × 4; the constant 4 is there because ClairS uses 4 threads for each chunk.
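To make that arithmetic concrete, here is a minimal sketch of the calculation. The function name and the 4-threads-per-chunk constant are taken from the comment above, not from the ClairS CLI itself; verify the constant against the ClairS version you run.

```python
THREADS_PER_CHUNK = 4  # per the comment above, ClairS uses 4 threads per chunk

def best_chunk_size(interval_bp: int, threads: int) -> int:
    """Chunk size that keeps all threads busy: with `threads` total
    threads and THREADS_PER_CHUNK threads per chunk, there are
    threads / THREADS_PER_CHUNK chunks running concurrently, so the
    interval is split into that many chunks."""
    concurrent_chunks = threads // THREADS_PER_CHUNK
    return interval_bp // concurrent_chunks

# Example from this thread: a 5 Mbp interval with --threads 32
print(best_chunk_size(5_000_000, 32))  # 625000, i.e. 5 Mbp / 32 * 4
```

With these settings, the 5 Mbp interval is processed as 8 chunks of 625 kbp each, and all 32 threads stay occupied.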