On 05/13/2013 10:01 AM, Gandalf Corvotempesta wrote:
2013/5/13 Greg <itooo@xxxxxxxxx>:
Thanks a lot for pointing this out, it indeed makes a *huge* difference!
# dd if=/mnt/t/1 of=/dev/zero bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s
(caches dropped before each test, of course)
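For reference, dropping the page cache between runs is usually done with
something along these lines:

# sync; echo 3 > /proc/sys/vm/drop_caches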
What if you set it to 1024 or an even greater value?
Does bandwidth scale with the readahead size?
It may help with sequential reads, but it may also slow down small
random reads if you set it too high. A whole new article could probably
be written on testing the effects of read_ahead at different levels of
the storage stack. A rough sketch of how you might experiment is below.
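As an untested sketch (rbd0 and sda here are placeholders for your mapped
RBD device and an OSD data disk, substitute your own device names):

Check the current readahead, in KB, at each level:

# cat /sys/block/rbd0/queue/read_ahead_kb
# cat /sys/block/sda/queue/read_ahead_kb

Try a larger value on the RBD device only, e.g. 1024 or 4096:

# echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

Then drop caches as above and repeat the sequential read:

# dd if=/mnt/t/1 of=/dev/null bs=4M count=100

It would also be worth re-checking a small random read workload afterwards
(e.g. fio with rw=randread and bs=4k) to see whether the larger readahead
hurts that case.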
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com