How to deploy OSDs with cephadm in a particular scenario?

Dear Ceph Developers:

At present, we are deploying Ceph with cephadm.
We are not sure how to handle the following scenario.

Disk information:
data_devices: /dev/sdb, /dev/sdc
db_devices: /dev/sdm, /dev/sdn

Desired result:
osd_service1:
  osd1: /dev/sdb is used for data, /dev/sdm for db.
  osd2: /dev/sdc is used for data, /dev/sdn for db.
osd_service2:
  osd3: /dev/sdm is used for data.
  osd4: /dev/sdn is used for data.
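
For reference, osd_service1 on its own maps to an ordinary OSD service spec, roughly like the rough sketch below (the host name storage-node-1 and the file name osd_spec.yaml are only placeholders, and we have not verified this on the exact hardware):

    cat > osd_spec.yaml << 'EOF'
    service_type: osd
    service_id: osd_service1
    placement:
      hosts:
        - storage-node-1        # placeholder host name
    spec:
      data_devices:
        paths:
          - /dev/sdb
          - /dev/sdc
      db_devices:
        paths:
          - /dev/sdm
          - /dev/sdn
    EOF
    ceph orch apply -i osd_spec.yaml

The open question is how to also consume the remaining space on /dev/sdm and /dev/sdn as data devices for osd_service2.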

We are considering two options.
1) Partition /dev/sdm and /dev/sdn into two partitions each.
osd1: /dev/sdb + /dev/sdm1
osd2: /dev/sdc + /dev/sdn1
osd3: /dev/sdm2
osd4: /dev/sdn2
However, 'ceph-volume lvm batch' does not support partitions, so a cephadm OSD service spec cannot cover this case. The option looks difficult to implement even if we were willing to change the cephadm code; a sketch of the manual workflow follows below.
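
To make concrete what the manual (non-spec) workflow would have to look like, here is a rough, untested sketch; the partition sizes are arbitrary examples:

    # create a db partition and a data partition on each db device (example sizes)
    sgdisk -n 1:0:+300G -c 1:ceph-db /dev/sdm
    sgdisk -N 2 -c 2:ceph-data /dev/sdm

    # prepare each OSD individually with ceph-volume instead of 'lvm batch',
    # e.g. from inside 'cephadm shell' on the host
    ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/sdm1
    ceph-volume lvm prepare --bluestore --data /dev/sdm2

Even then, these OSDs would live outside any service spec, and we are not sure whether cephadm would adopt them cleanly.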

2) Create multiple LVs on /dev/sdm and /dev/sdn.
osd1: /dev/sdb + ceph-lvs1 on /dev/sdm (i.e. /dev/sdm carries an LV named ceph-lvs1)
osd2: /dev/sdc + ceph-lvs1 on /dev/sdn
osd3: ceph-lvs2 on /dev/sdm
osd4: ceph-lvs2 on /dev/sdn
osd1 and osd2 are easy to implement with cephadm, but osd3 and osd4 are difficult; a sketch of the LV layout we have in mind follows below.
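
To illustrate, the layout on /dev/sdm would be created roughly as follows (the VG name ceph-sdm and the 50/50 split are only examples); the last command is the part we do not know how to express through a cephadm service spec:

    # carve /dev/sdm into two LVs: one for osd1's db, one for osd3's data
    vgcreate ceph-sdm /dev/sdm
    lvcreate -l 50%VG -n ceph-lvs1 ceph-sdm
    lvcreate -l 100%FREE -n ceph-lvs2 ceph-sdm

    # osd1: data on /dev/sdb, db on the pre-created LV
    ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db ceph-sdm/ceph-lvs1
    # osd3: data directly on the second LV -- the part we cannot express in a spec
    ceph-volume lvm prepare --bluestore --data ceph-sdm/ceph-lvs2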

Do you have any suggestions for this scenario?

Thanks very much.


 
