fast_read in EC pools

Dear Cephalopodians,

in the few remaining days in which we can still freely experiment with parameters,
we just tried setting:
ceph osd pool set cephfs_data fast_read 1
but did not notice any effect on sequential, large-file read throughput on our k=4, m=2 EC pool.
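For what it's worth, the current value of the flag can at least be read back to confirm the cluster accepted it (same pool name as above):

```shell
# Check whether the fast_read flag is set on the pool.
ceph osd pool get cephfs_data fast_read
# Compare against the full set of pool parameters:
ceph osd pool get cephfs_data all
```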

Should this take effect immediately, or do the OSDs need a restart first?
Is the option already deemed safe?

Or should we simply not expect any change in throughput, since our system is purely limited by
IPoIB bandwidth for large sequential reads, and the shards are requested by the primary OSD regardless?
In that case the gain would not be in throughput; the reply to the client would merely arrive slightly earlier (before all shards have been received).
The option would then mainly be of interest when disk I/O is congested (which does not happen for us as of yet),
and would not help much when the system is limited by network bandwidth.
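My mental model of the latency-vs-throughput distinction, as a toy sketch (not Ceph code, and the shard latencies are made up): with k=4, m=2 the primary still issues all six shard reads either way, so network traffic is unchanged; fast_read only lets the read complete once the first k shards are back instead of all k+m.

```python
# Toy model of fast_read on a k=4, m=2 EC pool (assumed numbers,
# not Ceph internals): all k+m shard reads are issued either way,
# but fast_read returns after the first k completions.
import concurrent.futures as cf
import time

K, M = 4, 2  # erasure-coding parameters of the pool above


def fetch_shard(i: int) -> int:
    """Simulated shard read; shard i takes 10*(i+1) ms."""
    time.sleep(0.01 * (i + 1))
    return i


def read_object(fast_read: bool) -> float:
    """Seconds until the object read can complete."""
    start = time.monotonic()
    with cf.ThreadPoolExecutor(max_workers=K + M) as pool:
        futures = [pool.submit(fetch_shard, i) for i in range(K + M)]
        needed = K if fast_read else K + M  # shards required to answer
        done = 0
        for _ in cf.as_completed(futures):
            done += 1
            if done == needed:
                return time.monotonic() - start
    return time.monotonic() - start


t_normal = read_object(fast_read=False)
t_fast = read_object(fast_read=True)
print(f"normal: {t_normal * 1000:.1f} ms, fast_read: {t_fast * 1000:.1f} ms")
```

In this model the fast_read reply leaves after the 4th-slowest shard instead of the 6th, while the bytes moved over the network are identical, which would match the "lower latency, same bandwidth" expectation.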

Cheers,
	Oliver


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
