Terrible IOPS performance

Hi folks,

I have a three-node cluster on a 10G network with very little traffic. I have a six-OSD flash-only pool with two devices — a 1TB NVMe drive and a 256GB SATA SSD — on each node, and here’s how it benchmarks:
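(For anyone wanting to reproduce numbers like these, a common way to measure small-block IOPS on a pool is the built-in `rados bench` tool. A sketch, assuming a hypothetical pool name `flashpool`; substitute your own pool:)

```shell
# 10-second write benchmark with 4 KiB objects and 16 concurrent ops,
# keeping the objects so a read benchmark can follow.
rados bench -p flashpool 10 write -b 4096 -t 16 --no-cleanup

# Optional follow-up: random-read benchmark against the same objects.
rados bench -p flashpool 10 rand -t 16

# Remove the benchmark objects when done.
rados -p flashpool cleanup
```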

Oof. How can I troubleshoot this? Anthony mentioned that I might be able to run more than one OSD on the NVMe. How is that done, and can I do it "on the fly" with the cluster already up and running like this? And will more OSDs actually give me better IOPS?
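(For context, the multiple-OSDs-per-device setup usually goes through `ceph-volume`'s batch mode. A sketch only, assuming the NVMe is `/dev/nvme0n1` and currently holds no data; an in-use OSD would first have to be drained, destroyed, and zapped, so it is not quite an on-the-fly change:)

```shell
# Sketch: carve one NVMe device into two OSDs.
# The device must be empty -- drain and zap any existing OSD on it first.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```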

Thanks,
Jarett
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



