ceph-disk vs. ceph-volume: both error prone

Dear list,

for a few days now we have been dissecting ceph-disk and ceph-volume to
find out what the appropriate way of creating partitions for Ceph is.

For years I have found ceph-disk (and especially ceph-deploy) very
error-prone, and we at ungleich are considering rewriting both into a
ceph-block-do-what-I-want tool.

Considering only bluestore, I see that ceph-disk creates two partitions:

Device      Start        End    Sectors   Size Type
/dev/sde1    2048     206847     204800   100M Ceph OSD
/dev/sde2  206848 2049966046 2049759199 977.4G unknown
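My current understanding (please correct me if this is wrong) is that
the layout itself is just GPT with two Ceph-specific partition type
GUIDs, so it should be reproducible with sgdisk roughly as sketched
below. The type GUIDs are the ones I believe ceph-disk uses (the PTYPE
constants in the ceph-disk source); treat them as assumptions to be
verified, this is only a sketch:

#!/usr/bin/env python3
# Rough sketch of the GPT layout that "ceph-disk prepare --bluestore"
# appears to create. The type GUIDs are assumptions on my side.

import subprocess
import uuid

DEV = "/dev/sde"                                          # adjust to your device
CEPH_OSD_DATA = "4fbd7e29-9d25-41b8-afd0-062c0ceff05d"    # shows as "Ceph OSD" in fdisk
CEPH_BLOCK    = "cafecafe-9b03-4f30-b4c6-b4b80ceff106"    # shows as "unknown" in fdisk

data_uuid = str(uuid.uuid4())
block_uuid = str(uuid.uuid4())

def sgdisk(*args):
    subprocess.check_call(["sgdisk", *args, DEV])

# 100 MB metadata partition (gets an XFS filesystem later)
sgdisk("--new=1:0:+100M",
       "--typecode=1:{}".format(CEPH_OSD_DATA),
       "--partition-guid=1:{}".format(data_uuid),
       "--change-name=1:ceph data")

# rest of the disk as the raw bluestore block device (no filesystem)
sgdisk("--largest-new=2",
       "--typecode=2:{}".format(CEPH_BLOCK),
       "--partition-guid=2:{}".format(block_uuid),
       "--change-name=2:ceph block")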

Does somebody know what exactly belongs on the XFS-formatted first
partition, and how the data/wal/db device sde2 is formatted?

What I would really like to know is how we can best extract this
information so that we no longer depend on ceph-{disk,volume}.
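For what it's worth, this is how I am currently poking at an existing
ceph-disk OSD. My assumption (again, please correct me) is that the
small XFS partition only carries a handful of metadata files plus the
'block' symlink, while sde2 has no filesystem at all and bluestore
simply writes its own label at the start of the raw device
(ceph-bluestore-tool show-label can decode it). A minimal sketch, with
paths that are assumptions on my side:

#!/usr/bin/env python3
# Sketch: inspect what ceph-disk left behind. Assumes the 100M partition
# is already mounted (e.g. under /var/lib/ceph/osd/ceph-0). Verify the
# block device label with "ceph-bluestore-tool show-label --dev /dev/sde2".

import os

OSD_DIR = "/var/lib/ceph/osd/ceph-0"     # mount point of /dev/sde1 (assumption)
BLOCK_DEV = "/dev/sde2"

# metadata files I expect on the xfs partition
for name in ("fsid", "ceph_fsid", "whoami", "type", "keyring", "block"):
    path = os.path.join(OSD_DIR, name)
    if os.path.islink(path):
        print("{} -> {}".format(name, os.readlink(path)))
    elif os.path.exists(path):
        with open(path) as f:
            print("{}: {}".format(name, f.read().strip()))

# the data partition is raw; bluestore writes a label into the first block
with open(BLOCK_DEV, "rb") as f:
    print(repr(f.read(64)))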

Any pointer to the on-disk format would be much appreciated!

Best,

Nico




--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch


