Recovery throughput inversely linked with rbd_cache_xyz?

Hello,

A couple of days ago I increased the rbd cache size from the default to 256 MB per OSD on a small 4-node, 6-OSD-per-node setup in a test/lab setting. The rbd volumes are all VM images with writeback cache parameters and a steady, if only a few MB/s, write load (mostly logging). I noticed that recovery throughput dropped 10x-50x. This is Ceph Nautilus. Am I seeing a coincidence, or should recovery throughput tank when the rbd cache size goes up? The underlying pools are mirrored across three disks, each on a different node.
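
For context, the change was roughly along these lines; this is only a sketch (the Python helper and the dirty-limit values are illustrative, not the exact commands I ran), using the Nautilus centralized config store:

#!/usr/bin/env python3
# Rough sketch of the cache change. A ceph.conf [client] entry such as
#     rbd cache size = 268435456
# would have the same effect for clients started after the change.
import subprocess


def ceph_config_set(who: str, option: str, value: str) -> None:
    # Equivalent to running: ceph config set <who> <option> <value>
    subprocess.run(["ceph", "config", "set", who, option, value], check=True)


if __name__ == "__main__":
    # 256 MiB librbd (client-side) writeback cache; the default is 32 MiB.
    ceph_config_set("client", "rbd_cache_size", str(256 * 1024 * 1024))
    # Dirty-data limits are usually raised in proportion to the cache size
    # (these particular values are illustrative).
    ceph_config_set("client", "rbd_cache_max_dirty", str(192 * 1024 * 1024))
    ceph_config_set("client", "rbd_cache_target_dirty", str(128 * 1024 * 1024))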

Thanks!

Harry Coin




