Ceph OSDs of different sizes for an OpenStack cluster: asking for advice

Hi,

I need some guidance from you folks...

I am going to deploy a Ceph cluster in HCI (hyper-converged) mode for an OpenStack platform.
My hardware will be:
- 3 control nodes
- 27 OSD nodes: each node has 3 x 3.8 TB NVMe + 1 x 1.9 TB NVMe disks (all of
these disks will be used as OSDs)

In my OpenStack deployment I will be creating several kinds of pools: RBD, CephFS and RGW.

I am planning to create two CRUSH rules that select OSDs by disk size, and then
divide my pools between the two rules:
- RBD pools on the 3.8 TB disks, since I need more space there.
- CephFS and RGW pools on the 1.9 TB disks.
A sketch of how I would set this up is below.
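
Roughly what I have in mind, assuming custom device classes are used to tell the
two disk sizes apart (both would otherwise land in the default "nvme" class).
The class names, rule names, OSD ids and pool names below are only examples, not
my final naming:

  # Re-class the OSDs backed by the 3.8 TB disks (repeated for each such OSD)
  ceph osd crush rm-device-class osd.0
  ceph osd crush set-device-class nvme-large osd.0

  # Re-class the OSDs backed by the 1.9 TB disks
  ceph osd crush rm-device-class osd.3
  ceph osd crush set-device-class nvme-small osd.3

  # One replicated rule per device class, with host as the failure domain
  ceph osd crush rule create-replicated rbd_rule   default host nvme-large
  ceph osd crush rule create-replicated small_rule default host nvme-small

  # Point each pool at the matching rule
  ceph osd pool set volumes crush_rule rbd_rule
  ceph osd pool set cephfs_data crush_rule small_rule
  ceph osd pool set default.rgw.buckets.data crush_rule small_rule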

Is this a good configuration?

Regards
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


