Ceph, SSDs and the HBA queue depth parameter


Hi,

We’re testing an all-Intel-SSD Ceph cluster on Mimic with BlueStore, and I’m currently trying to squeeze some better performance out of it. We know that on older storage solutions, increasing the HBA queue depth can sometimes speed up I/O. Is this also the case for Ceph? Is queue depth even something worth thinking about given that we’re running an all-SSD setup?

Best regards,

Jean-Philippe Méthot
OpenStack system administrator
PlanetHoster inc.




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
