Environment
S3 loading client: tf.data.TFRecordDataset.
Issue
By default, S3 limits GET/HEAD operations to 5,500 per second per partitioned prefix; once this limit is reached, read operations start returning 503 (Slow Down) errors. What we noticed is that once the client starts seeing 503 errors, the overall data loading speed drops and never recovers for the remainder of the data loading process, even after the 503 errors stop occurring.
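For reference, a minimal sketch of the loading setup being described (the bucket name, shard count, and parallelism value are illustrative, not our actual configuration):

```python
import tensorflow as tf

# Illustrative S3 paths; the real job reads many shards under one
# prefix, so GET/HEAD requests concentrate on a single partitioned
# prefix and can hit the 5,500 req/s limit.
filenames = [
    f"s3://my-bucket/train/shard-{i:05d}.tfrecord" for i in range(1024)
]

# num_parallel_reads opens this many files concurrently; each reader
# issues its own S3 GET/HEAD requests.
dataset = tf.data.TFRecordDataset(
    filenames,
    num_parallel_reads=64,  # illustrative value
)
```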
Question
Does the S3 client have retry logic for 503 errors? If not, would a failed S3 GET/HEAD request block an entire loading thread (as configured by the `num_parallel_reads` field)? Thanks.
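For context, SDK-level S3 clients typically expose retry-with-backoff configuration for exactly this throttling case; below is a minimal boto3 sketch of the behavior we are asking whether TensorFlow's internal S3 client implements (the bucket and key are placeholders):

```python
import boto3
from botocore.config import Config

# Adaptive retry mode backs off and retries throttling errors such as
# 503 Slow Down, rather than immediately surfacing the failure.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
s3 = boto3.client("s3", config=config)

# Placeholder bucket/key; a throttled GET here is retried with backoff
# transparently instead of raising a 503 to the caller.
response = s3.get_object(Bucket="my-bucket", Key="train/shard-00000.tfrecord")
```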