Re: ceph-disk vs. ceph-volume: both error prone

On 09/02/2018 21:56, Alfredo Deza wrote:
On Fri, Feb 9, 2018 at 10:48 AM, Nico Schottelius
<nico.schottelius@xxxxxxxxxxx> wrote:

Dear list,

for a few days we have been dissecting ceph-disk and ceph-volume to find out
what the appropriate way of creating partitions for Ceph is.

ceph-volume does not create partitions for ceph
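For context: the lvm subcommand works on LVM logical volumes rather than GPT
partitions. A minimal bluestore run (the device name here is only an example)
looks roughly like this:

  # ceph-volume turns the device into a VG/LV, then prepares and activates a bluestore OSD on it
  ceph-volume lvm create --bluestore --data /dev/sde

  # or, split into the two phases it exposes:
  ceph-volume lvm prepare --bluestore --data /dev/sde
  ceph-volume lvm activate --all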


For years I have found ceph-disk (and especially ceph-deploy) very
error prone, and we at ungleich are considering rewriting both into a
ceph-block-do-what-I-want-tool.

This is not very simple; that is why there are tools that
do it for you.


Considering only bluestore, I see that ceph-disk creates two partitions:

Device      Start        End    Sectors   Size Type
/dev/sde1    2048     206847     204800   100M Ceph OSD
/dev/sde2  206848 2049966046 2049759199 977.4G unknown

Does somebody know what exactly belongs on the XFS-formatted first
partition, and how the data/wal/db device sde2 is formatted?

If you must, I would encourage you to try out ceph-disk with full
verbosity and dissect all the system calls it makes, which will show how
the partitions are formatted.
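As a concrete starting point (a sketch; the device and OSD id are only
examples, and details vary by release):

  # prepare with full verbosity to see every sgdisk/mkfs/mount call it makes
  ceph-disk -v prepare --bluestore /dev/sde

  # afterwards the small XFS partition is mounted at /var/lib/ceph/osd/<cluster>-<id>;
  # it holds the OSD metadata files (fsid, keyring, type, whoami, ...) plus a 'block'
  # symlink that points at the big bluestore partition (sde2 in the listing above)
  ls -l /var/lib/ceph/osd/ceph-0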


What I would really like to know is how we can best extract this
information so that we no longer depend on ceph-{disk,volume}.

Initially you mentioned partitions, but you want to avoid ceph-disk
and ceph-volume wholesale? That is going to take a lot more effort.
These tools not only "prepare" devices
for Ceph consumption, they also "activate" them when a system boots,
talk to the cluster to register the OSDs, etc. It isn't just
partitioning (for ceph-disk).
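Roughly, for ceph-disk (a sketch of the two phases, not the whole flow):

  # "prepare": partition the disk, create the data filesystem, set the Ceph partition type GUIDs
  ceph-disk prepare --bluestore /dev/sde
  # "activate": mount the data partition, register the OSD with the cluster, start the daemon
  ceph-disk activate /dev/sde1
  # at boot the same activation is normally triggered via udev rather than by hand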

I personally find it very annoying that ceph-disk tries to be friends with all the init tools that ship with the various Linux distributions, let alone all the udev machinery that starts working on disks as soon as they are introduced to the system.
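If you want to see what that udev machinery actually does on a packaged
install (the rules path differs per distro; /usr/lib/udev/rules.d on some):

  # the rules that fire ceph-disk when a partition with a Ceph type GUID appears
  cat /lib/udev/rules.d/95-ceph-osd.rules
  # watch the events (and the triggers they cause) while a disk is (re)introduced
  udevadm monitor --environment --udev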

And for FreeBSD I'm not suggesting using any of that, since it does not fit the FreeBSD paradigm: things like this are not really started automagically there.

So if it is only about creating the Ceph infrastructure on disk, things are relatively easy.

The actual work on the partitions is done with ceph-osd --mkfs, and there is little magic about it. A few more options then tell BlueStore where its parts go if you want something other than the standard location.
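For bluestore that boils down to something along these lines (a sketch; the
id, uuid and paths are placeholders, and ceph-disk/ceph-volume pass a few
more options than shown):

  # the data directory must already exist and contain (or point to) the block device;
  # ceph-disk arranges that by mounting the small XFS partition there and creating a 'block' symlink
  ceph-osd --cluster ceph -i 0 --mkfs \
      --osd-objectstore bluestore \
      --osd-data /var/lib/ceph/osd/ceph-0 \
      --osd-uuid <osd fsid>
  # db/wal placement is steered with bluestore_block_db_path / bluestore_block_wal_path
  # (or block.db / block.wal symlinks in the data directory)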

Also, a large part of ceph-disk is complicated/obfuscated by the desire to run on encrypted disks and/or multipath disk providers... Running it with verbose on gives a bit of info, but the Python code is convoluted and complex until you have it figured out. Then it starts to become simpler, but never easy. ;-)

Writing a script that does what ceph-disk does? Take a look at src/vstart in the source tree. That script builds a full cluster during testing and is far more legible. I did just that for my FreeBSD multi-server cluster tests, and it is not complex at all.
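For instance, from a build directory (a sketch; these are the flags I remember
from a Luminous-era tree):

  # spin up a throw-away local cluster with one mon, one mgr and three bluestore OSDs
  MON=1 MGR=1 OSD=3 MDS=0 ../src/vstart.sh -n -d --bluestore
  # and tear it down again
  ../src/stop.sh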

Just my 2cts,
--WjW
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


