Re: Ceph Production Environment Setup?

Thanks. I have upgraded all the systems to quad-core machines with
32 GB RAM, although I still have 16 hard drives on each of the
storage nodes.

16 hard drives means I should have 16 OSD daemons, but I don't know
what the OSD configuration should look like in ceph.conf.

I mounted the disks under the OSD data directories according to
http://ceph.com/docs/master/rados/deployment/mkcephfs/
The mounts look like this (an /etc/fstab sketch to make them persistent follows the list):
/dev/sda1 on /var/lib/ceph/osd/ceph-0
/dev/sdb1 on /var/lib/ceph/osd/ceph-1
/dev/sdc1 on /var/lib/ceph/osd/ceph-2
/dev/sdd1 on /var/lib/ceph/osd/ceph-3
/dev/sde1 on /var/lib/ceph/osd/ceph-4
/dev/sdf1 on /var/lib/ceph/osd/ceph-5
/dev/sdg1 on /var/lib/ceph/osd/ceph-6
/dev/sdh1 on /var/lib/ceph/osd/ceph-7
/dev/sdi1 on /var/lib/ceph/osd/ceph-8
/dev/sdj1 on /var/lib/ceph/osd/ceph-9
/dev/sdk1 on /var/lib/ceph/osd/ceph-10
/dev/sdl1 on /var/lib/ceph/osd/ceph-11
/dev/sdm1 on /var/lib/ceph/osd/ceph-12
/dev/sdn1 on /var/lib/ceph/osd/ceph-13
/dev/sdo1 on /var/lib/ceph/osd/ceph-14
/dev/sdp1 on /var/lib/ceph/osd/ceph-15
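
To keep these mounts across reboots I would add matching /etc/fstab
entries. This is only a minimal sketch, assuming the partitions are
formatted as XFS; the mount options are an example, adjust them to
whatever filesystem and options you actually use:

/dev/sda1   /var/lib/ceph/osd/ceph-0    xfs   noatime,inode64   0  0
/dev/sdb1   /var/lib/ceph/osd/ceph-1    xfs   noatime,inode64   0  0
# ... one line per drive, through ...
/dev/sdp1   /var/lib/ceph/osd/ceph-15   xfs   noatime,inode64   0  0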

But I don't know what the OSD configuration should look like. I see
the following in the Ceph reference:
http://ceph.com/docs/master/rados/deployment/mkcephfs/

"For each [osd.n] section of your configuration file, specify the
storage device. For example:
[osd.1]
        devs = /dev/sda
[osd.2]
        devs = /dev/sdb "

I guess this is a configuration for a single hard drive. What should
the OSD config look like with 16 drives in one host?
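
Is the answer simply one [osd.n] section per drive, each with a host
entry so mkcephfs knows which node the OSD belongs to? A rough sketch
of what I have in mind (the hostname "serverd" is just a placeholder
for one of my storage nodes, and the devs entries are assumed to match
the partitions mounted above):

[osd.0]
        host = serverd
        devs = /dev/sda1
[osd.1]
        host = serverd
        devs = /dev/sdb1
# ... same pattern for the remaining drives ...
[osd.15]
        host = serverd
        devs = /dev/sdp1

(The docs example points devs at whole disks like /dev/sda; since I
have already partitioned and mounted /dev/sda1 etc., I have used the
partitions here. Is that correct?)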



Regards,
Femi.


On Tue, Jan 29, 2013 at 1:39 PM, Martin B Nielsen <martin@xxxxxxxxxxx> wrote:
> There is also the hardware recommendation page in the ceph docs (
> http://ceph.com/docs/master/install/hardware-recommendations/ )
>
> Basically they recommend something like ~1 GHz of CPU (or 1 core) per
> OSD, and 500 MB-1 GB of RAM per OSD daemon. Also, most run with 1 OSD
> daemon per disk (so with 16 disks per node you'll completely overwhelm
> your Atom CPU).
>
> Overall, while the cluster is chugging along happily the hardware
> requirements are relatively modest; as soon as it starts recovering
> you'll see high CPU/memory usage.
>
> Cheers,
> Martin
>
>
> On Tue, Jan 29, 2013 at 3:56 AM, femi anjorin <femi.anjorin@xxxxxxxxx> wrote:
>>
>> Please can anyone advise on how exactly a Ceph production
>> environment should look, and what the configuration files should
>> be? My hardware includes the following:
>>
>> Server A, B, C configuration
>> CPU - Intel(R) Core(TM)2 Quad  CPU   Q9550  @ 2.83GHz
>> RAM - 16GB
>> Hard drive -  500GB
>> SSD - 120GB
>>
>> Server D,E,F,G,H,J configuration
>> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
>> RAM - 4 GB
>> Boot drive - 320 GB
>> SSD - 120 GB
>> Storage drives - 16 x 2 TB
>>
>> I am thinking of this configuration, but I am not sure:
>> Server A - MDS and MON
>> Server B - MON
>> Server C - MON
>> Server D, E,F,G,H,J - OSD
>>
>> Regards.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

