Re: ceph-volume: hard to replace ceph-disk now and not well implemented

Thanks, it sounds like preparing the device partitions and LVs is best
done in the orchestration tools.
In that case, I think the logic of creating an LV with lvm-cache is
also something users should do themselves, not ceph-volume, right?
Supporting it in ceph-volume would likewise be oversimplifying. For
simplicity, we just need to create the LV with lvmcache and pass it to
ceph-volume.
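For the lvm-cache case, the preparation can stay entirely outside ceph-volume. A rough sketch, assuming a VG named ceph-vg on the slow disk and a fast device /dev/nvme0n1 (all names and sizes here are placeholders):

```shell
# Create the origin LV that will hold the OSD data (assumed VG "ceph-vg").
lvcreate -l 100%FREE -n osd-data ceph-vg
# Add the fast device to the VG and carve out a cache pool on it.
vgextend ceph-vg /dev/nvme0n1
lvcreate --type cache-pool -L 50G -n osd-cache ceph-vg /dev/nvme0n1
# Attach the cache pool to the origin LV.
lvconvert --type cache --cachepool ceph-vg/osd-cache ceph-vg/osd-data
# Hand the cached LV to ceph-volume as a plain LV.
ceph-volume lvm prepare --bluestore --data ceph-vg/osd-data
```

ceph-volume only sees an ordinary LV here, which is exactly the division of labor discussed above.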
Regards
Ning Yao


2018-03-31 20:48 GMT+08:00 Alfredo Deza <adeza@xxxxxxxxxx>:
> On Sat, Mar 31, 2018 at 2:31 AM, Ning Yao <zay11022@xxxxxxxxx> wrote:
>> Hi, Sage
>>
>> Although we have ceph-volume now, ceph-volume cannot work as expected
>> in many scenarios. For example:
>>
>> when we use:
>>
>> ceph-volume lvm prepare --filestore --data /dev/sda --journal /dev/sdl1  or
>> ceph-volume lvm prepare --bluestore --data /dev/sda --block.db /dev/sdl1
>>
>> 1. we need to create a Physical Volume for /dev/sda before it can be
>> used, and ceph-volume does not help us do that
>
> Not sure what you mean by this. In that scenario, ceph-volume will use
> /dev/sda and convert it to a single logical volume for "data". This is
> even
> how our tests work. If you are hitting errors, please give us some
> examples on how to replicate.
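For reference, the whole-device path described above is just this (a sketch; the device name is illustrative):

```shell
# In this mode ceph-volume turns the raw device into a PV/VG/LV itself.
ceph-volume lvm prepare --bluestore --data /dev/sda
# Inspect what was created:
ceph-volume lvm list
```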
>
>> 2. the --journal or --block.db parameters must point to an existing
>> partition such as /dev/sdl1; they cannot be a raw, unpartitioned
>> device (/dev/sdl) the way ceph-disk allows, automatically creating
>> the partition with sgdisk.
>
> This is partially true. It can be a partition *or* a logical volume.
> ceph-volume will not create partitions for a user because
> this has been one of the most problematic pain points in ceph-disk.
> Creating a partition is simple, but programmatically understanding
> how to do partitions in any scenario that users throw at us is asking
> to go back to ceph-disk issues.
>
> ceph-volume is currently tremendously robust and I am hesitant to add
> this kind of helpfulness that has brought us issues before.
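When a partition is wanted for the db or journal, creating it up front is simple enough to do by hand. A sketch, with the disk /dev/sdl and the size as placeholders:

```shell
# Create a 30G partition (number 1) on /dev/sdl for block.db.
sgdisk --new=1:0:+30G --change-name=1:'ceph block.db' /dev/sdl
# Ask the kernel to re-read the partition table.
partprobe /dev/sdl
# Then point ceph-volume at the partition explicitly:
ceph-volume lvm prepare --bluestore --data /dev/sda --block.db /dev/sdl1
```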
>
>> 3. we must explicitly use --journal, --block.db, or --block.wal to
>> tell ceph-volume where to put the data. It cannot automatically
>> partition the disk or create multiple logical volumes for different
>> purposes, such as:
>> lvcreate --size 100M VG -n osd-block-meta-{uuid}
>> lvcreate --size {bluestore_block_db_size} VG -n osd-block-db-{uuid}
>> lvcreate --size {bluestore_block_wal_size} VG -n osd-block-wal-{uuid}
>> lvcreate -l 100%FREE VG -n osd-block-{uuid}   (use the remaining space for data)
>>
>
> Again, this is oversimplifying. Creating an LV can be done in dozens
> of ways, not just this one. The code to support this deployment
> complexity would not be a good return on investment.
>
> You can just specify the data device and ceph-volume will create a
> full bluestore OSD for you, and yes, it will want to know what your
> wal or db device is if you want to use one.
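Pre-creating the multi-LV layout from point 3 outside ceph-volume and passing each LV explicitly would look roughly like this (the VG name and sizes are placeholders):

```shell
# Assumed VG "ceph-vg"; place db/wal LVs on the fast disk, data on the slow one.
vgcreate ceph-vg /dev/sda /dev/sdl
lvcreate -L 10G -n osd-block-db ceph-vg /dev/sdl
lvcreate -L 2G -n osd-block-wal ceph-vg /dev/sdl
lvcreate -l 100%FREE -n osd-block ceph-vg /dev/sda
# ceph-volume accepts vg/lv for each role:
ceph-volume lvm prepare --bluestore \
    --data ceph-vg/osd-block \
    --block.db ceph-vg/osd-block-db \
    --block.wal ceph-vg/osd-block-wal
```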
>
>
>> It seems hard now to switch from ceph-disk to ceph-volume, and even
>> if we do, we need to do much extra work in ceph-ansible to prepare
>> all the things above. So is there any plan to enhance this
>> functionality, or a design spec for ceph-volume? We may help on this.
>
> I would suggest investing in a configuration management system that
> can help automate all the use cases you need. ceph-ansible can do a
> bit of this already for you, and we are planning
> on adding more support to help out creating LVs for a user.
>
> *However*, even on ceph-ansible, I am going to push towards a
> simplistic approach, and not support all the various different ways
> you can create a logical volume. This will no doubt leave
> a lot of use cases out of luck, but it is a good way to provide
> robustness on the tooling on the things we know will work right.
>
> If using ceph-ansible, it isn't hard to set your machines up in
> advance with the logical volume layout you have used as examples here.
>>
>>
>> Regards
>> Ning Yao
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html


