Re: Ceph New Cluster Configuration Recommendations

Thanks! I forgot to mention that we are using a D2200sb Storage Blade
for the disks inside the enclosure.

German Anders

--- Original message ---
Subject: Re: Ceph New Cluster Configuration Recommendations
From: Alfredo Deza <alfredo.deza@xxxxxxxxxxx>
To: German Anders <ganders@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx <ceph-users@xxxxxxxxxxxxxx>
Date: Wednesday, 18/12/2013 13:59

On Wed, Dec 18, 2013 at 11:46 AM, German Anders <ganders@xxxxxxxxxxxx> wrote:
   Hi all,

        I'm new to the Ceph community. I found some used hardware in our
datacenter and I want to create a new Ceph cluster, a little bit more
powerful (I know the disks are not great; each node has 10 x 72GB SAS
disks and 2 x 500GB SATA disks, so not much space), but this is for
testing the solution. The hardware is the following:

OS running: Ubuntu 12.10 Server 64-bit (on 2 x 140GB w/RAID-1)

ceph-node01 (mon)      10.77.0.101   ProLiant BL460c G7   32GB   8 x 2 GHz
ceph-node02 (mon)      10.77.0.102   ProLiant BL460c G7   64GB   8 x 2 GHz
ceph-node03 (mon)      10.77.0.103   ProLiant BL460c G6   32GB   8 x 2 GHz
ceph-node04            10.77.0.104   ProLiant BL460c G7   32GB   8 x 2 GHz
ceph-node05 (deploy)   10.77.0.105   ProLiant BL460c G6   32GB   8 x 2 GHz

All the blades are inside the same enclosure, so connectivity will go
through the same switch (a single 1Gb cable). I know this is NOT good at
all, but again, this is for testing and it's what we have for the moment.

Could someone give me some hints on configuring this hardware in the
best possible way? I mean, where to put the journals and how many OSDs
per node. I thought of using 2 journal disks in a 1:5 ratio, so 1
journal disk for every 5 OSDs. Would it be better to use 2 x 72GB SAS
disks as the journals? Any other recommendations? Also, when running the
"ceph-deploy new" command to create the cluster, how can I specify the
name of the cluster?
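
For the journal part, a minimal sketch of how a 1:5 layout like that
could be written, assuming ceph-deploy's HOST:DISK[:JOURNAL] syntax for
"osd prepare"; the device names below are placeholders, not the actual
layout the D2200sb presents:

    # Sketch only: five OSDs on one node sharing one journal disk.
    # ceph-disk creates a separate journal partition per OSD on /dev/sdl;
    # repeat with the second journal disk for the remaining OSDs.
    ceph-deploy osd prepare \
        ceph-node04:sdb:/dev/sdl \
        ceph-node04:sdc:/dev/sdl \
        ceph-node04:sdd:/dev/sdl \
        ceph-node04:sde:/dev/sdl \
        ceph-node04:sdf:/dev/sdl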

ceph-deploy takes a few "global" arguments, one of them being the
cluster name (see `ceph-deploy --help`).

Keep in mind that because it is a global flag, all subsequent commands
will need to have that flag passed in. This is basically because the
naming conventions accommodate the name of the cluster, which defaults
to "ceph" if not specified.
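
For example, a short sketch using the --cluster global flag from
`ceph-deploy --help`; the cluster name "test" is a placeholder:

    # Create a cluster named "test" instead of the default "ceph",
    # with the three (mon) nodes from the list above as initial monitors.
    ceph-deploy --cluster test new ceph-node01 ceph-node02 ceph-node03

    # Because --cluster is global, every subsequent command needs it too:
    ceph-deploy --cluster test mon create-initial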



Again, sorry if these are very newbie questions, but as I said, I'm new to Ceph.

Thanks in advance,

Best regards,


German Anders











