Re: CEPH hardware recommendations and cluster design questions

On Wed, 4 Mar 2015, Adrian Sevcenco wrote:
> Hi! I've seen the documentation at
> http://ceph.com/docs/master/start/hardware-recommendations/ but those
> minimum requirements, without accompanying recommendations, don't tell
> me much ...
> 
> So, from what I've seen, for mon and mds any cheap 6-core, 16+ GB RAM
> AMD box would do ... what puzzles me is that "per daemon" construct ...
> Why would I need to have multiple daemons? With separate servers
> (3 mon + 1 mds - I understood that this is the requirement) I imagine
> that each will run a single type of daemon. Did I miss something?
> (Besides that, maybe there is a relation between daemons and block
> devices, and for each block device there should be a daemon?)

There is normally a ceph-osd daemon per disk.
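To make that concrete, provisioning each data disk as its own OSD might 
look something like this with ceph-deploy (hostname and device names below 
are made up; double-check the exact syntax against your ceph-deploy 
version):

    # One ceph-osd daemon per data disk on host osdserver1.
    # sdb/sdc/sdd are placeholders for your actual devices.
    for disk in sdb sdc sdd; do
        ceph-deploy osd create osdserver1:${disk}
    done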

> For mon and mds: would it help the clients if these are on 10 GbE?

For the MDS, latency is important, so possibly!
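If you do go 10 GbE, one common pattern is to split client-facing and 
replication traffic onto separate networks in ceph.conf (the subnets here 
are just illustrative):

    [global]
    # Client-facing traffic (clients <-> mon/mds/osd)
    public network = 10.0.0.0/24
    # OSD replication and recovery traffic
    cluster network = 10.0.1.0/24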
 
> For osd: I plan to use a 36-disk server as an OSD server (ZFS RAIDZ3
> across all disks + 2 SSDs mirrored for ZIL and L2ARC) - that would give
> me ~132 TB. How much RAM would I really need? (128 GB would be way too
> much, I think.)
> (That RAIDZ3 across 36 disks is just a thought - I also have choices
> like: 2 x 18 RAIDZ2; 34 disks RAIDZ3 + 2 hot spares.)

Usually Ceph is deployed without RAID underneath.  You can use it, 
though--Ceph doesn't really care.  Performance just tends to be lower 
compared to running one ceph-osd daemon per disk.
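On the RAM question: the hardware-recommendations doc you linked suggests 
roughly 1 GB of RAM per 1 TB of OSD storage per daemon, plus headroom for 
recovery.  A back-of-the-envelope sketch, assuming 36 OSDs on ~4 TB disks 
(numbers are illustrative):

    # ~1 GB RAM per TB of OSD storage, per the hardware-recommendations doc
    osds=36; tb_per_disk=4
    echo "~$((osds * tb_per_disk)) GB RAM for the whole box"   # ~144 GB

So 128 GB is not far off for a box that dense.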

Note that there is some support for ZFS but it is not tested by us at all, 
so you'll be mostly on your own.  I know a few users have had success here 
but I have no idea how busy their clusters are.  Be careful!

> Regarding journal and scrubbing: by using ZFS, I would think that I can
> safely skip the Ceph ones ... is this OK?

You still want Ceph scrubbing, as it verifies that the replicas don't get 
out of sync.  Maybe you could forgo deep scrubbing, but it may make more 
sense to disable ZFS scrubbing and let Ceph drive it, so you get things 
verified through the whole stack...
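If you want to experiment with that, deep scrubbing can be toggled 
cluster-wide with a flag, or its interval stretched in ceph.conf:

    # Disable / re-enable deep scrubs cluster-wide:
    ceph osd set nodeep-scrub
    ceph osd unset nodeep-scrub

    # Or lengthen the interval in ceph.conf (seconds; default is one week):
    # [osd]
    # osd deep scrub interval = 2419200   # four weeks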

sage


> 
> Do you have any other advice or recommendations for me? (The read:write
> ratio will be 10:1.)
> 
> Thank you!!
> Adrian
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



