Re: New best practices for osds???

On 17/7/19 1:12 am, Stolte, Felix wrote:
> Hi guys,
>
> our Ceph cluster is performing well below what the disks we are using should deliver. We could narrow it down to the storage controller (LSI SAS3008 HBA) in combination with a SAS expander. Yesterday we had a meeting with our hardware reseller and sales representatives of the hardware manufacturer to resolve the issue.
>
> They told us that "best practice" for Ceph would be to deploy each disk as a single-disk RAID 0 volume behind a RAID controller with a big writeback cache.
>
> Since this "best practice" is new to me, I would like to hear your opinion on this topic.

It has been my understanding from day one that the best practice for Ceph
is one OSD per disk in JBOD mode (no RAID of any kind), with the path from
disk to controller kept as close to 1:1 as possible.

With spinning rust you can get away with a SAS expander, since a single
drive cannot saturate the link, but SSDs really need to be 1:1.
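To put some rough numbers on the expander question (back-of-the-envelope
only; the throughput figures below are assumptions about typical drives,
not measurements from your hardware):

# Rough check of when a SAS expander uplink becomes the bottleneck.
# All figures are assumed typical values, not measurements.

SAS3_LANE_GBPS = 12          # raw line rate of one SAS3 lane (Gb/s)
USABLE_FRACTION = 0.8        # assume ~20% lost to encoding/protocol overhead
WIDE_PORT_LANES = 4          # typical 4-lane uplink from HBA to expander

HDD_MBPS = 200               # assumed sequential throughput of one 7.2k HDD
SSD_MBPS = 500               # assumed sequential throughput of one SATA SSD

uplink_mbps = SAS3_LANE_GBPS * 1000 / 8 * USABLE_FRACTION * WIDE_PORT_LANES

for name, per_drive in (("HDD", HDD_MBPS), ("SSD", SSD_MBPS)):
    drives = uplink_mbps / per_drive
    print(f"{name}: ~{uplink_mbps:.0f} MB/s uplink / {per_drive} MB/s per drive"
          f" => ~{drives:.0f} drives before the uplink saturates")

With those assumptions a 24-bay HDD chassis sits comfortably behind a
single 4-lane uplink, while fewer than a dozen SATA SSDs would already
saturate it, which is why the 1:1 advice matters for flash.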

Mike



