Re: ceph-volume and multi-PV Volume groups

On Mon, Apr 29, 2019 at 5:07 PM Jan Fajerski <jfajerski@xxxxxxxx> wrote:
> Hi all,
> I'd like to request feedback regarding http://tracker.ceph.com/issues/37502.
> This regards the ceph-volume lvm batch subcommand and its handling of multi-PV
> Volume Groups. The details are in the tracker ticket; the gist is that in
> certain circumstances ceph-volume creates LVM setups where a single bad drive
> used for db/wal can bring down unrelated OSDs (OSDs that have their LVs on
> completely separate drives) and thus impact the cluster's fault tolerance.
>
> I'm aware that one could work around this by creating the LVM setup that I want.

The much simpler workaround is to run `lvm batch` once for each db
device (so run it twice in your 2 ssd + 10 hdd example), something
like the sketch below. We're pretty content with that workaround in
our prod env.
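For example, something like this (a rough sketch only; the device
names are made up, and I'm assuming the usual mixed-device behaviour
where batch puts block.db on the non-rotational device in each run):

  ceph-volume lvm batch --bluestore /dev/sd{a..e} /dev/nvme0n1
  ceph-volume lvm batch --bluestore /dev/sd{f..j} /dev/nvme1n1

That way each ssd ends up in its own vg, so a dead ssd only takes out
the five OSDs that actually have their db on it.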

That said, I agree that the default should be to create a vg for each
db device (see the sketch below), because in addition to your tooling
point below, (a) dbs on a multi-dev linear vg are total nonsense imho,
nobody wants that in prod, and (b) the tool seems arbitrarily
inconsistent -- it creates a unique vg per data dev but one vg for all
db devs! (One vg for all data devs is clearly bad, as is one vg for
all db devices, imho.)
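
Roughly the layout I'd expect instead (just a sketch, with made-up
names and sizes):

  # one vg per db device, one db lv per OSD carved out of it
  vgcreate ceph-block-dbs-0 /dev/nvme0n1
  lvcreate -L 60G -n osd-block-db-0 ceph-block-dbs-0
  lvcreate -L 60G -n osd-block-db-1 ceph-block-dbs-0
  ...
  vgcreate ceph-block-dbs-1 /dev/nvme1n1
  lvcreate -L 60G -n osd-block-db-5 ceph-block-dbs-1
  ...

so losing one nvme never touches LVs that live on the other one.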

my 2 rappen,

dan

> I think this is a bad approach though since every deployment tool has to
> implement its own LVM handling code. Imho the right place for this is exactly
> ceph-volume.
>
> Jan
>
> --
> Jan Fajerski
> Engineer Enterprise Storage
> SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
> HRB 21284 (AG Nürnberg)



