Hey, I'm not sure if this is a fio issue or just a mistake in my job configuration. While using fio to benchmark S3 bandwidth with different block sizes, I found that with a 500m+ block size I get more than 1 GB/s of bandwidth. Trying to reproduce these astonishing results by reading through s3fs, I didn't even get close to the fio numbers. Is this an issue in fio, or did something go wrong in my benchmark?

These are the parameters in the fio job file:

```ini
[global]
ioengine=http
http_host=s3.us-east-1.amazonaws.com
http_mode=s3
http_s3_key=${S3_SECRET_KEY}
http_s3_keyid=${S3_KEY_ID}
http_s3_region=us-east-1
name=aws-test
filename=/mobileye-team-vd/idok/test_tv_parquet/20-03-31_17-08-46_Leo_Front_0005.snappy.parquet
rw=read
invalidate=1
randrepeat=0
direct=1
filesize=1300m
loops=10
numjobs=10

[test-1]
filename=/mobileye-team-vd/idok/test_tv_parquet/20-04-26_15-22-58_Leo_Front_0097.snappy.parquet
bs=1000m
filesize=1300m
```

Thanks ahead for any help or advice!

Sage