Optimal OSD Configuration for 45 drives?

Hi Matt,

I'd recommend setting the RAID controller to JBOD mode and letting Ceph
handle the drives directly. Since Ceph already handles replication and
distribution of data, there's no real need for RAID behind the OSDs. In
some cases it even hurts performance overall, and it will definitely
slow things down a lot during rebuilds. Each hard drive is controlled
by its own OSD process, so 1 hard drive = 1 OSD.
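To make the 1:1 mapping a bit more concrete, here's a rough sketch of
how the drive/journal pairing tends to work out. This is just an
illustration, not anything from the docs; the device names, drive
counts, and the use of ceph-disk are assumptions you'd adjust for your
own hardware:

# Illustration only: one OSD per data drive, journals spread across
# SSDs at no more than 5 OSDs per SSD. Device names are made up.
data_drives  = ["/dev/sd%s" % c for c in "cdefghij"]  # 8 example spinners
journal_ssds = ["/dev/sda", "/dev/sdb"]               # 2 example journal SSDs
OSDS_PER_SSD = 5   # losing a journal SSD takes out every OSD it serves

for i, drive in enumerate(data_drives):
    ssd  = journal_ssds[i // OSDS_PER_SSD]
    part = (i % OSDS_PER_SSD) + 1
    # Assumes each SSD is pre-partitioned with one journal partition per OSD.
    print("ceph-disk prepare %s %s%d" % (drive, ssd, part))

Each printed command pairs one spinner with its own journal partition,
which is exactly the 1 drive = 1 OSD layout described above.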

You might find the following useful:
http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
and http://ceph.com/docs/master/start/hardware-recommendations/


More generally I'd recommend:

    * Get SSDs for the journals. I'd avoid using the same SSD for more
than 5 OSDs, since if the journal dies, you lose the OSDs it's servicing
as well.
    * Use a separate network for replication: bonded 1Gb links, or 10Gb
if it doesn't blow your budget. It's really easy to saturate a single
1Gb link with Ceph (see the rough numbers after this list).
    * Last I looked, the recommendations were at least 2GB of RAM and
1GHz of one core per OSD. This becomes more important when the cluster
is rebalancing data after an OSD is added or fails.
    * Get a separate host for the monitors, or use a couple VMs. They
don't need to be really powerful, but keeping them separate from the OSD
hosts means the two aren't competing for resources if either is under load.
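On the network point specifically, a quick back-of-the-envelope
calculation shows why a single 1Gb link falls over so fast. The
per-disk throughput figure here is my own rough assumption (~100MB/s
sustained per SATA spinner), not a measured number:

osds_per_host = 34        # spinners behind one node (see sizing below)
disk_mb_s     = 100.0     # assumed sustained MB/s per SATA spinner
aggregate     = osds_per_host * disk_mb_s

for gbps in (1, 2, 10):   # single 1Gb, bonded 2x1Gb, single 10Gb
    link_mb_s = gbps * 1000 / 8.0     # rough Gb/s -> MB/s conversion
    print("%2dGb link: %5.0f MB/s vs %5.0f MB/s of raw disk (%.0fx over)"
          % (gbps, link_mb_s, aggregate, aggregate / link_mb_s))

Even bonded 1Gb links aren't anywhere near the raw disk bandwidth of a
full chassis, which is why 10Gb keeps coming up as the recommendation.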

So for a 45-bay chassis, assuming no internal bays for the OS drives,
I'd recommend something like 2 OS drives in RAID1, 34 OSDs, 9 SSDs for
journals, two 2.8GHz 6-core CPUs, and 96GB of RAM.
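That layout is just the rules of thumb above applied to 45 bays; the
arithmetic is easy to sanity-check (the per-OSD figures are rough
guidelines, not hard requirements):

bays         = 45
os_drives    = 2          # RAID1 pair for the OS
journal_ssds = 9
osds         = bays - os_drives - journal_ssds    # 34 data drives

print("OSDs: %d" % osds)
print("OSDs per journal SSD: %.1f (keep it at 5 or under)"
      % (float(osds) / journal_ssds))
print("RAM: %dGB minimum at 2GB per OSD; 96GB leaves rebalancing headroom"
      % (osds * 2))
print("CPU: ~%dGHz wanted at 1GHz per OSD; two 2.8GHz 6-cores give %.1fGHz"
      % (osds, 2 * 6 * 2.8))

That works out to 34 OSDs sharing 9 journal SSDs (about 3.8 each), 68GB
of RAM as the floor (hence rounding up to 96GB), and roughly 34GHz of
CPU, which two 2.8GHz 6-core chips just about cover.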

Hope this helps!

-Steve

On 07/24/2014 11:31 PM, Matt Harlum wrote:
> Hi,
>
> I've purchased a couple of 45Drives enclosures and would like to figure out the best way to configure these for Ceph.
>
> Mainly I was wondering if it's better to set up multiple RAID groups and put an OSD on each, rather than one OSD for each of the 45 drives in the chassis?
>
> Regards,
> Matt
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma310 at lehigh.edu


