Hello,
A couple of days ago I increased the RBD cache size from the default to
256 MB per OSD on a small 4-node setup, 6 OSDs per node, in a test/lab setting.
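For reference, the change amounts to something like this in the [client]
section of ceph.conf (a sketch; assuming it was set there rather than
passed per-disk through qemu):

    [client]
    rbd cache = true                # writeback cache; on by default in recent releases
    rbd cache size = 268435456      # 256 MB, up from the 32 MB default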
The RBD volumes are all VM images with writeback cache parameters, and
the write load is steady but small, a few MB/s. Logging, mostly.
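The guest disks are defined along these lines (libvirt XML assumed here;
the pool/image name and monitor host are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-image'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>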
I noticed that recovery throughput dropped 10x to 50x. This is Ceph
Nautilus. Am I seeing a coincidence, or should recovery throughput tank
when RBD cache sizes go up? The underlying pools are replicated across
three disks, each on a different node.
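(Throughput here means the recovery rate Ceph reports under the "io:"
section of its status output, watched with nothing fancier than:

    ceph -s    # one-shot cluster status; recovery rate appears under "io:"
    ceph -w    # the same information, streamed as it changes

)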
Thanks!
Harry Coin