Re: CEPH hardware recommendations and cluster design questions

Thank you all for all the good advice and the much-needed documentation.
I have a lot to digest :)

Adrian

On 03/04/2015 08:17 PM, Stephen Mercier wrote:
> To expand upon this, the very nature and existence of Ceph is to replace
> RAID. The FS itself replicates data and handles the HA functionality
> that you're looking for. If you're going to build a single server with
> all those disks, backed by a ZFS RAID setup, you're going to be much
> better served by an iSCSI setup. The idea of Ceph is that it takes the
> place of all the ZFS bells and whistles. A Ceph cluster that only has
> one OSD backed by that huge ZFS setup becomes just a wire protocol for
> speaking to the server. The magic in Ceph comes from the replication and
> distribution of the data across many OSDs, hopefully living in many
> hosts.
> 
> My own setup, for instance, uses 96 OSDs spread across 4 hosts (I know,
> I know, guys - CPU is a big deal with SSDs, so 24 per host is a tall
> order; we didn't know that when we built it, and it's been working OK so
> far), distributed between 2 cabinets in 2 separate cooling/power/data
> zones in our datacenter. My CRUSH map is currently set up for 3 copies
> of all data, laid out so that at least one copy is located in each
> cabinet, and the cabinet that gets the 2 copies also makes sure that
> each copy is on a different host. No RAID needed, because Ceph makes
> sure that I have a "safe" number of copies of the data, in a
> distribution layout that allows us to sleep at night.
> 
> In my opinion, Ceph is much more pleasant, powerful, and versatile to
> deal with than both hardware RAID and ZFS (both of which we also have
> deployed from previous iterations of our infrastructure). Now, you could
> always create small zRAID pools using ZFS and then give an OSD to each
> of those, if you wanted an additional layer of safety. Heck, you could
> even have hardware RAID behind the zRAID, for yet another layer. Where
> YOU need to make the decision is the trade-off between HA
> functionality/peace of mind, performance, and usability/maintainability.
> 
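A CRUSH rule for the layout described above could look roughly like the
sketch below - a minimal sketch only, assuming the two cabinets are
modelled as "rack" buckets under the default root; the rule name and
ruleset number are placeholders:

    rule replicated_per_cabinet {
            ruleset 1                           # placeholder ruleset id
            type replicated
            min_size 2
            max_size 3
            step take default                   # start at the root of the hierarchy
            step choose firstn 2 type rack      # pick 2 cabinets ("rack" buckets here)
            step chooseleaf firstn 2 type host  # up to 2 distinct hosts per cabinet
            step emit
    }

With a pool size of 3, the first cabinet chosen gets two copies on two
different hosts and the second cabinet gets the third copy; the rule is
then attached to a pool with something like
"ceph osd pool set <pool> crush_ruleset 1".
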
> Would be happy to answer any questions you still have...
> 
> Cheers,
> -- 
> Stephen Mercier
> Senior Systems Architect
> Attainia, Inc.
> Phone: 866-288-2464 ext. 727
> Email: stephen.mercier@xxxxxxxxxxxx
> Web: www.attainia.com
> 
> Capital equipment lifecycle planning & budgeting solutions for healthcare
> 
> On Mar 4, 2015, at 10:42 AM, Alexandre DERUMIER wrote:
> 
>> Hi, for hardware, Inktank has good guides here:
>>
>> http://www.inktank.com/resource/inktank-hardware-selection-guide/
>> http://www.inktank.com/resource/inktank-hardware-configuration-guide/
>>
>> Ceph works well with multiple OSD daemons (1 OSD per disk),
>> so you should not use RAID.
>>
>> (XFS is the recommended filesystem for OSD daemons.)
>>
>> You don't need spare disks either, just enough free disk space to
>> handle a disk failure.
>> (Data is replicated and rebalanced onto the other disks/OSDs in case
>> of a disk failure.)
>>
>>
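As a concrete illustration of the one-OSD-per-disk layout above (a sketch
only - device names and the SSD journal partition are placeholders, and
ceph-disk was the usual provisioning tool at the time):

    # One OSD per raw disk, each formatted as XFS, with no RAID underneath.
    # The optional second argument puts that OSD's journal on an SSD partition.
    ceph-disk prepare --fs-type xfs /dev/sdb /dev/sdy1
    ceph-disk activate /dev/sdb1

    # Repeat per data disk; each disk becomes its own OSD daemon.
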
>> ----- Original message -----
>> From: "Adrian Sevcenco" <Adrian.Sevcenco@xxxxxxx>
>> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>> Sent: Wednesday, 4 March 2015 18:30:31
>> Subject: CEPH hardware recommendations and cluster design questions
>>
>> Hi! I've seen the documentation at
>> http://ceph.com/docs/master/start/hardware-recommendations/ but those
>> minimum requirements, without some recommendations, don't tell me much ...
>>
>> So, from what I've seen, for mon and mds any cheap 6-core, 16+ GB RAM
>> AMD box would do ... what puzzles me is that "per daemon" construct ...
>> Why would I need to run multiple daemons? With separate servers
>> (3 mon + 1 mds - I understood that this is the requirement) I imagine
>> that each will run a single type of daemon ... did I miss something?
>> (Besides that, maybe there is a relation between daemons and block
>> devices, and each block device should get its own daemon?)
>>
>> For mon and mds: would it help the clients if these are on 10 GbE?
>>
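On the 10 GbE question, note that Ceph distinguishes a public network
(used by clients, mons and mds) from an optional cluster network (used
for OSD replication and recovery traffic); a minimal ceph.conf sketch,
with placeholder subnets:

    [global]
        public network  = 192.168.10.0/24   # client / mon / mds traffic
        cluster network = 192.168.20.0/24   # OSD replication and recovery traffic
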
>> For osd: I plan to use a 36-disk server as the OSD server (ZFS RAIDZ3
>> over all disks + 2 SSDs mirrored for ZIL and L2ARC) - that would give
>> me ~132 TB. How much RAM would I really need? (128 GB would be way too
>> much, I think.)
>> (That RAIDZ3 over 36 disks is just a thought - I also have choices
>> like: 2 x 18 RAIDZ2, or 34 disks RAIDZ3 + 2 hot spares.)
>>
>> Regarding the journal and scrubbing: by using ZFS, I would think that
>> I can safely skip the Ceph ones ... is this OK?
>>
>> Do you have any other advice or recommendations for me? (The
>> read:write ratio will be 10:1.)
>>
>> Thank you!!
>> Adrian
>>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
