Re: Creating first Ceph cluster

Thanks Alfredo.  I will use ceph-volume.

On Thu, Apr 19, 2018 at 4:24 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
On Thu, Apr 19, 2018 at 11:10 AM, Shantur Rathore
<shantur.rathore@xxxxxxxxx> wrote:
> Hi,
>
> I am building my first Ceph cluster from hardware left over from a previous
> project. I have been reading a lot of Ceph documentation but need some help
> to make sure I am going the right way.
> To set the stage, below is what I have:
>
> Rack-1
>
> 1 x HP DL360 G9 with
>    - 256 GB Memory
>    - 5 x 300GB HDD
>    - 2 x SAS HBA
>    - 4 x 10GbE Networking Card
>
> 1 x SuperMicro chassis with 17 x HP Enterprise 400GB SSD and 17 x HP
> Enterprise 1.7TB HDD
> The chassis and the HP server are connected via 2 x SAS HBA for redundancy.
>
>
> Rack-2 (Same as Rack-1)
>
> 1 x HP DL360 G9 with
>    - 256 GB Memory
>    - 5 x 300GB HDD
>    - 2 x SAS HBA
>    - 4 x 10GbE Networking Card
>
> 1 x SuperMicro chassis with 17 x HP Enterprise 400GB SSD and 17 x HP
> Enterprise 1.7TB HDD
> The chassis and the HP server are connected via 2 x SAS HBA for redundancy.
>
>
> Rack-3
>
> 5 x HP DL360 G8 with
>    - 128 GB Memory
>    - 2 x 400GB HP Enterprise SSD
>    - 3 x 1.7TB Enterprise HDD
>
> Requirements
> - To serve storage to around 200 VMware VMs via iSCSI. The VMs use their
> disks moderately.
> - To serve storage to some Docker containers using the Ceph volume driver
> - To serve storage to some legacy apps using NFS
>
> Plan
>
> - Create a Ceph cluster with all machines
> - Use BlueStore as the OSD backend (3 x SSD for DB and WAL in the SuperMicro
> chassis and 1 x SSD for DB and WAL in the Rack 3 G8s)
> - Use the remaining SSDs (14 x in the SuperMicro and 1 x in the Rack 3 G8s)
> for a RADOS cache tier
> - Update the CRUSH map to make the rack the minimum failure domain, so that
> data is replicated across racks and storage keeps working if one of the
> hosts dies
> - A single bonded network (4 x 10GbE) connected to ToR switches
> - The same public and cluster network
>
> Questions
>
> - First of all, is this kind of setup workable?
> - I have seen that Ceph uses /dev/sdX names in the guides. Is that a good
> approach, considering that disks die and can come up with a different
> /dev/sdX identifier on reboot?

In the case of ceph-volume, these will not matter, since it uses LVM
behind the scenes, and LVM takes care of figuring out whether /dev/sda1
is now really /dev/sdb1 after a reboot.
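
A rough sketch of that workflow, where /dev/sdb and the ssd_vg/db_lv
volume are only placeholders for your own layout (check
"ceph-volume lvm create --help" on your version for the exact flags):

  # BlueStore OSD on a whole device; ceph-volume puts an LVM volume on it,
  # so the OSD is found again via its VG/LV and LVM tags, not via /dev/sdX
  ceph-volume lvm create --bluestore --data /dev/sdb

  # Same, but with the DB (and WAL) on a logical volume carved from an SSD
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db ssd_vg/db_lv

  # Show the OSDs ceph-volume knows about, including the stored LVM metadata
  ceph-volume lvm list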

If using ceph-disk, however, the detection is done a bit differently:
it reads partition labels and depends on UDEV triggers, which can
sometimes be troublesome, especially on reboot. When detection via UDEV
succeeds, the non-persistent names still would not matter much.
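
Just to illustrate the difference (/dev/sdb1 and the OSD volume below are
only examples):

  # ceph-disk keys off GPT partition labels/type GUIDs plus UDEV rules;
  # blkid shows the partition label/UUID it depends on
  blkid /dev/sdb1

  # ceph-volume records its metadata as LVM tags instead, and those follow
  # the logical volume no matter which /dev/sdX name it gets after a reboot
  lvs -o lv_name,vg_name,lv_tags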

> - What should be the approximate size of the WAL and DB partitions for my
> kind of setup?
> - Can I install Ceph in a VM and use other VMs on these hosts? Is Ceph too
> CPU-demanding?
>
> Thanks,
> Shantur
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
