Ceph and OpenStack throttling experience

Hi

We're running Ceph Nautilus 14.2.21 (moving to the latest Octopus in a few weeks) as the volume and instance backend for our OpenStack VMs. Our clusters run somewhere between 500 and 1000 OSDs on SAS HDDs, with NVMe devices as journal and DB devices.

Currently we do not cap our VMs on IOPS or throughput. We regularly get slow ops warnings (once or twice per day) and wonder whether there are other users with roughly the same setup who do throttle their OpenStack VMs.

- What kind of numbers are used in the field for IOPS and throughput limits? (The first snippet after these questions shows the kind of limits we have in mind.)

- As a side question, is there an easy way to get rid of the slow ops warning besides restarting the involved OSD? Otherwise the warning seems to stay forever. (The second snippet below shows the kind of inspection we mean before resorting to a restart.)
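
For the first question, the limits we have in mind would be set either as a Cinder QoS spec or as Nova flavor extra specs, along these lines. The numbers are placeholders we have not settled on, and "sas-tier", "sas-hdd" and "m1.medium" are just example names, not our real volume types or flavors:

    # Per-volume-type limit via a Cinder QoS spec, enforced by libvirt on the front end
    openstack volume qos create sas-tier --consumer front-end \
        --property total_iops_sec=500 \
        --property total_bytes_sec=104857600
    openstack volume qos associate sas-tier sas-hdd

    # Per-flavor limit for root/ephemeral disks via Nova extra specs
    openstack flavor set m1.medium \
        --property quota:disk_total_iops_sec=500 \
        --property quota:disk_total_bytes_sec=104857600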

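For the slow ops question, the inspection we mean before resorting to a restart is roughly the following (osd.12 is only an example id, and the ceph daemon commands have to be run on that OSD's host):

    ceph health detail                      # lists the OSDs currently reporting slow ops
    ceph daemon osd.12 dump_ops_in_flight   # ops currently in flight on that OSD
    ceph daemon osd.12 dump_blocked_ops     # ops currently blocked on that OSD
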
Regards

Marcel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


