Re: NVMe's

> How did they do it?

You can create partitions / LVs by hand and build OSDs on them, or you can use

ceph-volume lvm batch --osds-per-device
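
For example, to carve each NVMe device into two OSDs (the device paths here are illustrative; add --report first for a dry run of what would be created):

   ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1
   ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1

Or by hand with LVM, with VG/LV names of your choosing (ceph-nvme0/osd0 etc. are just examples):

   vgcreate ceph-nvme0 /dev/nvme0n1
   lvcreate -l 50%VG -n osd0 ceph-nvme0
   lvcreate -l 100%FREE -n osd1 ceph-nvme0
   ceph-volume lvm create --data ceph-nvme0/osd0
   ceph-volume lvm create --data ceph-nvme0/osd1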

> I have an idea to create a new bucket type under host, and put the two LVs from each Ceph OSD VG into that new bucket. The rules stay the same (different host), so redundancy won't be affected.

CRUSH lets you do that, but to what end?  It would show a bit more clearly which OSDs share a device when you run `ceph osd tree`, and maybe offer some operational convenience with `ceph osd ls-tree`, but for placement anti-affinity it wouldn't get you anything you don't already have.
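
If you did want to set it up anyway, the rough procedure would be to decompile the CRUSH map, slot a new type in between osd and host, recompile, and then move the OSDs into per-device buckets. A sketch, where the type name "nvme", the bucket names, and the weight are all illustrative choices of mine, not anything Ceph mandates:

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # Edit the "# types" section of crushmap.txt, renumbering to insert a new type:
   #   type 0 osd
   #   type 1 nvme      <-- new level between osd and host
   #   type 2 host
   #   ... (bump the remaining type numbers accordingly)
   crushtool -c crushmap.txt -o crushmap-new.bin
   ceph osd setcrushmap -i crushmap-new.bin
   # Create one bucket per physical device and hang it under its host:
   ceph osd crush add-bucket node1-nvme0 nvme
   ceph osd crush move node1-nvme0 host=node1
   # Re-place each OSD under the device bucket it lives on (0.9 is an example weight):
   ceph osd crush set osd.0 0.9 root=default host=node1 nvme=node1-nvme0

But again: your rule still chooses across hosts, so this buys you labeling, not extra safety.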




