Re: LVM+bluestore via ceph-volume vs bluestore via ceph-disk


 



On Thu, Feb 1, 2018 at 12:44 AM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> I recently became aware that LVM has become a component of the preferred OSD
> provision process when using ceph-volume. We'd already started our migration
> to bluestore before ceph-disk's deprecation was announced and decided to
> stick with the process with which we started.
>
> I'm concerned my decision may become negative in the future. Are there any
> plans for future features in Ceph to be dependent on LVM?

We reverted the deprecation warning for ceph-disk; the revert will ship
in the next Luminous release (12.2.3). With the introduction of
ceph-volume, LVM is a requirement for its default provisioning
workflow, but not for Ceph as a whole.

It isn't absolutely necessary to use LVM with your OSDs, however.
ceph-volume is able to manage activation/startup of OSDs that were
created differently. With just a few entries in a JSON configuration
file, it can take over legacy-deployed OSDs (created via ceph-disk or
other means) as well as manually created OSDs.
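
As a rough sketch, the 'simple' subcommand handles this. The OSD path,
id, and fsid below are placeholders, and exact flags may vary by
release, so check ceph-volume --help on your version:

    # Record an existing ceph-disk OSD's layout as a JSON file
    # under /etc/ceph/osd/
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0

    # Enable a systemd unit for it based on that JSON; the id and
    # fsid here are placeholders, use the values from the generated
    # file
    ceph-volume simple activate 0 <osd-fsid>

Once scanned, activation no longer goes through udev at all.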

>
> I'm specifically concerned about a dependency for CephFS snapshots once they
> are announced as stable.

I am not sure how CephFS would be affected.

>
> Aside from disk enumeration, what is driving the preference for LVM?
>
The previous tool depended on partitioning and udev to handle devices,
and it simply would not work if a device could not be partitioned.

The udev approach meant that devices were discovered asynchronously,
and a lock had to be placed to wait for the other devices belonging to
a given OSD. In several cases OSDs would simply not come up, the
problem could not be reproduced, fixes were very hard to implement,
and it also took a long time to go from system boot to a fully
operational status.

ceph-volume does away with all of this and starts OSDs as soon as they
are available. Building on LVM also opened the door to other
technologies, dmcache for example, and to more flexible handling of
device space (you can now run several OSDs from a single device). None
of this was possible with ceph-disk.
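
For the several-OSDs-per-device case, something along these lines
works (the device and VG/LV names are made up for illustration):

    # Split one device into two logical volumes and deploy a
    # bluestore OSD on each (names are placeholders)
    pvcreate /dev/nvme0n1
    vgcreate ceph-vg /dev/nvme0n1
    lvcreate -l 50%VG -n osd-a ceph-vg
    lvcreate -l 100%FREE -n osd-b ceph-vg
    ceph-volume lvm create --bluestore --data ceph-vg/osd-a
    ceph-volume lvm create --bluestore --data ceph-vg/osd-b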


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


