Peering and disk utilization

Hello,

Speaking of the rotating-media-under-filestore case (which must be the most
common one in Ceph deployments): can peering be made less greedy with disk
operations without stretching the whole 'blackhole' window, i.e. the period
during which it blocks client operations? I am suffering from a very long and
very disk-intensive peering process even after relatively small reweights,
whenever there is a more or less significant amount of data committed on the
underlying storage (50% disk utilization is very hard to deal with; 10% is
far more acceptable). Recovery by itself can be throttled low enough not to
compete with client disk I/O, but slowing down the peering process only means
freezing client I/O for a longer time, that's all.
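(For reference, by "throttling recovery" I mean the usual OSD options, along
the lines of the ceph.conf fragment below; the exact values are only
illustrative, and depending on the release not every option may be available:

    [osd]
        # limit concurrent backfill/recovery work per OSD
        osd max backfills = 1
        osd recovery max active = 1
        # weight client ops above recovery ops in the op queue
        osd recovery op priority = 1
        osd client op priority = 63

The same settings can usually also be injected at runtime via
'ceph tell osd.N injectargs ...' without restarting the OSDs.)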
Cuttlefish seems to take over part of the disk controller's job of merging
writes, but peering is still unacceptably long for an _IOPS_-intensive
cluster (about 5 MB/s and 800 IOPS on every disk during peering; even with
the controller coalescing head movements, the disks are 100% busy). An
SSD-based cluster would not die for lack of IOPS, but the price of such a
setup is still closer to TrueEnterpriseStorage(tm) than to any solution I
can afford.