Re: ceph-volume simple disk scenario without LVM for OSD on PVC

I don't think rehearsing the list of LVM features every time is a valid
argument, since we are not even using 10% of what LVM is capable of,
and everything we do with it could be done with plain partitions.
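(To make that concrete, here is a rough sketch of what partition-based
provisioning could look like; the device name is a placeholder and this
is not how ceph-volume works today:

    sgdisk --zap-all /dev/sdb                               # wipe any old partition table
    sgdisk --new=1:0:0 --change-name=1:ceph-data /dev/sdb   # one whole-disk partition
    partprobe /dev/sdb                                      # ask the kernel to re-read it

No PV/VG/LV layering to discover or tear down afterwards.)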
The only thing I see is a new layer that adds complexity to the setup,
and a tool that (as Alfredo said, "The amount of internals ceph-volume
has to deal specifically with LVM is enormous") spends most of its time
figuring out how things are layered.
I guess I'd be more willing to accept LVM if someone could give me a
single LVM feature that we desperately need and that nothing else
provides.
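(For reference, the features usually cited in LVM's favor -- Lars lists
some below -- are along these lines; the VG/LV names are hypothetical:

    pvmove /dev/sdb1 /dev/sdc1                # transparently migrate extents to another PV
    lvextend -L +10G /dev/ceph-vg/osd-block   # grow the LV backing an OSD in place

We don't actually exercise any of this from ceph-volume.)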

As of today, I don't think we have demonstrated the value of using LVM
over raw block devices/partitions because, again, all the races we
found with ceph-disk were ultimately fixed, and they did not apply to
containers anyway!

With the adoption and growth of containers, all the host-side logic
built on udev rules and systemd is impossible to replicate without
over-engineering our containerized environment.
The simple fact that devices are held active by LVM is a nightmare to
work with when running on PVCs in the cloud, and we are seeing higher
demand for running Rook in the cloud.
Also, the amount of time we have spent tuning LVM flags for containers
is ridiculous.
Device mobility across hosts is something we had with ceph-disk; we
lost it with LVM unless we run manual commands (to activate/deactivate
the VG), and now we need it back to run portable OSDs in the cloud...
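To make those manual commands concrete, moving an LVM-backed OSD disk
between hosts means something like this (the VG name is hypothetical):

    vgchange -an ceph-vg   # on the old host: deactivate so LVM releases the device
    vgscan                 # on the new host: rescan newly attached devices
    vgchange -ay ceph-vg   # reactivate so the OSD's LV shows up again

A partition-based disk would just be detach and re-attach.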

Additionally, we live with too many dependencies on the host (LVM
packages, LVM systemd units), which sounds a bit silly when running on
k8s, since we should (in theory) have no interaction with the host
configuration at all.
Sorry, this has turned into a ceph-disk vs. ceph-volume discussion again...

Thanks!
--
Sébastien Han
Senior Principal Software Engineer, Storage Architect

"Always give 100%. Unless you're giving blood."

On Wed, Dec 4, 2019 at 12:13 PM Lars Marowsky-Bree <lmb@xxxxxxxx> wrote:
>
> On 2019-12-03T19:58:55, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>
> > An LVM-less approach is appealing.
>
> I'm not sure.
>
> LVM provides many management functions - such as being able to
> transparently re-map/move data from one device to another, or increasing
> LV sizes, say - that otherwise would need to be reimplemented by the OSD
> processes.
>
> > Something that explicitly does bluestore only and does not support dmcrypt
> > could be pretty straightforward, though...
>
> Yes, but given the current deployment ratios, I'd bet this would also
> not be applicable to the majority of deployments. Yes, it'd get the very
> very simple ones off the ground, but we'd still have to solve the actual
> problems. (Plus then maintain the "simple" way on top, and how to go
> from there to the more complex one, feature-disparities, DriveGroups for
> each, etc etc)
>
>
> --
> SUSE Software Solutions Germany GmbH, MD: Felix Imendörffer, HRB 36809 (AG Nürnberg)
> "Architects should open possibilities and not determine everything." (Ueli Zbinden)