Re: Reintegrating "ceph-disk list" feature for supporting the existing clusters

On Tue, Jul 3, 2018 at 10:57 AM, Erwan Velu <evelu@xxxxxxxxxx> wrote:
> I forgot to add Sebastien Han in CC, as he helped me with the writing of this.
>
> ----- Original Message -----
> From: "Erwan Velu" <evelu@xxxxxxxxxx>
> To: "Ceph Development" <ceph-devel@xxxxxxxxxxxxxxx>
> Sent: Tuesday, July 3, 2018 16:53:43
> Subject: Reintegrating "ceph-disk list" feature for supporting the existing clusters
>
> Hi Fellows,
>
> PR https://github.com/ceph/ceph/pull/22343/ removed ceph-disk from Ceph as of the upcoming Nautilus release.
>
> This tool is useful for ceph-ansible (and potentially other deployment tools) to detect disks that were prepared by ceph-disk. Also:
>
> * it ensures that the expected configuration requested by the user is already deployed
> * it detects forgotten disks from an existing cluster or previous deployments.
>
> Currently, not having the "ceph-disk list" feature means we would have to reimplement that functionality inside ceph-ansible, which would be a long and error-prone task.

This is actually not that difficult, and it is what ceph-volume does
to scan ceph-disk OSDs:

* look whether the device has partitions, and if it does, check whether any of them carries the 'ceph data' partition label
* pass the 'ceph data' partition to `ceph-volume simple scan --stdout`
to report all the important bits from the OSD

In this example, I deployed a ceph-disk OSD on /dev/sdc, which
produced a few partitions, using my example workflow from above:

# check for 'ceph data'
$ lsblk -P -p -o PARTLABEL,NAME /dev/sdc
PARTLABEL="" NAME="/dev/sdc"
PARTLABEL="ceph data" NAME="/dev/sdc1"
PARTLABEL="ceph block" NAME="/dev/sdc2"

# pass /dev/sdc1 to scan
$ ceph-volume simple scan --stdout /dev/sdc1
Running command: /sbin/cryptsetup status /dev/sdc1
 stderr: Device sdc1 not found
{
    "active": "ok",
    "block": {
        "path": "/dev/disk/by-partuuid/f6ed2c14-1562-4967-b8c2-7cd457a624fb",
        "uuid": "f6ed2c14-1562-4967-b8c2-7cd457a624fb"
    },
    "block_uuid": "f6ed2c14-1562-4967-b8c2-7cd457a624fb",
    "bluefs": 1,
    "ceph_fsid": "a25d19a6-7d57-4eda-b006-78e35d2c4d9f",
    "cluster_name": "ceph",
    "data": {
        "path": "/dev/sdc1",
        "uuid": "02eba152-409c-4115-8402-680186f3d136"
    },
    "fsid": "02eba152-409c-4115-8402-680186f3d136",
    "keyring": "AQBFmjtb6DGrLRAA6vDIixGJHCc0RMDim6nVqA==",
    "kv_backend": "rocksdb",
    "magic": "ceph osd volume v026",
    "mkfs_done": "yes",
    "ready": "ready",
    "systemd": "",
    "type": "bluestore",
    "whoami": 13
}
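
If a deployment tool like ceph-ansible wants to automate those two steps, it
is only a few lines of Python. Here is a minimal sketch (this is not existing
ceph or ceph-ansible code, and the helper names are made up) that chains lsblk
and `ceph-volume simple scan --stdout`, and tolerates any log lines that may
be printed before the JSON report:

import json
import shlex
import subprocess

def ceph_data_partitions(device):
    # List the partitions of `device` and keep those whose partition
    # label is 'ceph data' (same check as the lsblk call above).
    out = subprocess.check_output(
        ['lsblk', '-P', '-p', '-o', 'PARTLABEL,NAME', device], text=True)
    parts = []
    for line in out.splitlines():
        fields = dict(tok.split('=', 1) for tok in shlex.split(line))
        if fields.get('PARTLABEL') == 'ceph data':
            parts.append(fields['NAME'])
    return parts

def scan_ceph_disk_osds(device):
    # Ask ceph-volume for the metadata of every 'ceph data' partition.
    osds = []
    for part in ceph_data_partitions(device):
        raw = subprocess.check_output(
            ['ceph-volume', 'simple', 'scan', '--stdout', part],
            stderr=subprocess.DEVNULL, text=True)
        # Be tolerant of any log lines printed before the JSON report.
        osds.append(json.loads(raw[raw.index('{'):]))
    return osds

if __name__ == '__main__':
    # Would print the same JSON as the /dev/sdc1 example above.
    print(json.dumps(scan_ceph_disk_osds('/dev/sdc'), indent=4))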

> Every use case or functionality missed by that port could result in an incident for our user base.
>
> Instead of asking the ceph-volume developers to re-integrate this ceph-disk functionality into ceph-volume, we propose to re-integrate a sub-portion of ceph-disk. So basically:
> * re-add src/ceph-disk/main.py to the project (it only has Python dependencies)

This is not that easy, and yes, it has dependencies (ceph-detect-init
is one of them), which means the single main.py file would not work
without them.

> * strip the tooling to only keep the "list" feature

Again, this is not that easy: the ceph-disk code is very intermingled
and tightly coupled.

> * renaming the tool to avoid confusion is also an option
> * add it back to the rpm & deb

This is also a lot of very complicated work. To add it back, there are
lots of places where it needs to be wired in; it isn't just a matter of
adding a single line to the spec file. For example, our packaging rules
require executables to have a man page, so we would also have to add
back the ceph-disk man page, satisfy the related packaging rules, and
alter the CMakeLists.txt files that include it.

> * don't reintegrate documentation, systemd, udev rules as we don't need them

But we do need them (see example above about packaging rules)

> * add a disclaimer banner explaining the new purpose of the tool during each execution so users won't be confused
>
> That would guarantee the same behavior we had before the removal and avoid a rewriting effort trying to match all the features.
> The current ceph-volume implementation doesn't guarantee the coverage we had in ceph-disk for these legacy setups.

And I would disagree here, because ceph-volume does work with
ceph-disk OSDs, for bluestore and filestore, and for plain and LUKS
dmcrypt setups.

>
> Erwan,
> --


