On Mon, Dec 18, 2017 at 4:36 PM, Josef Zelenka
<josef.zelenka@xxxxxxxxxxxxxxxx> wrote:
> Hi everyone,
>
> we have recently deployed a Luminous (12.2.1) cluster on Ubuntu - three
> OSD nodes and three monitors; every OSD node has 3x 2TB SSDs plus an
> NVMe drive for the block DB. We use it as a backend for our OpenStack
> cluster, so we store volumes there. In the last few days, the read op/s
> rose to a constant 10k-25k (it fluctuates between those two) and it
> doesn't seem to go down. I can see that the read ops come from the pool
> where we store VM volumes, but I can't trace them to a particular
> volume. Is that even possible? Any experience with debugging this? Any
> info or advice is greatly appreciated.
>
> Thanks
>
> Josef Zelenka
>
> Cloudevelops

Since that is a small cluster, I hope you don't have a lot of instances
running... You can add an "admin socket" option to the client part of
the configuration and then read performance counters through it. IIRC
that reports total bytes and total ops, but it should be simple to
sample it twice and calculate the difference. Note that this generates
one socket per mounted volume (hence the "I hope you don't have many").
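For what it's worth, here is roughly what that looks like. The ceph.conf
snippet is the form the Ceph docs suggest for RBD clients; the concrete
socket paths below are made up, so adjust them to whatever actually
appears under /var/run/ceph on your compute nodes:

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

Once the clients have been restarted/reattached, you can query a socket
directly:

    ceph --admin-daemon /var/run/ceph/ceph-client.cinder.1234.5678.asok perf dump

And a minimal Python sketch for the "calculate the difference" part. It
assumes the counters live in the "objecter" section under "op_r"/"op_w",
which is what I recall seeing on Luminous clients - counter names can
vary by version, so check your own perf dump output first:

    #!/usr/bin/env python
    # Sample "perf dump" twice from one client admin socket and print
    # the average read/write ops per second over the interval.
    import json
    import subprocess
    import time

    SOCK = "/var/run/ceph/ceph-client.cinder.1234.5678.asok"  # made up
    INTERVAL = 10.0  # seconds between the two samples

    def objecter_counters(sock):
        # "ceph --admin-daemon <sock> perf dump" prints JSON on stdout
        out = subprocess.check_output(
            ["ceph", "--admin-daemon", sock, "perf", "dump"])
        return json.loads(out.decode("utf-8"))["objecter"]

    a = objecter_counters(SOCK)
    time.sleep(INTERVAL)
    b = objecter_counters(SOCK)

    # op_r/op_w are cumulative totals, so the delta over the interval
    # gives the average op rate for this one client.
    print("read  ops/s: %.1f" % ((b["op_r"] - a["op_r"]) / INTERVAL))
    print("write ops/s: %.1f" % ((b["op_w"] - a["op_w"]) / INTERVAL))

Each socket belongs to one client process (i.e. one qemu instance), so
if you run that against each socket in turn, the volume producing the
10k-25k reads should stand out.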