Re: BlueStore upgrade steps broken

Correct, I meant a partition of a block device.  By 'raw partition' I just meant that there wasn't any LVM layer like in your example.  Thanks, Alfredo.

On Fri, Aug 17, 2018 at 3:05 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
On Fri, Aug 17, 2018 at 2:55 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
> Does the block and/or wal partition need to be an LV?  I just passed
> ceph-volume the raw partition and it seems to be working fine.

A raw device is only allowed for data; a partition (or an LV) is
allowed for block.wal/block.db

Not sure if by "raw partition" you mean an actual partition or a raw device
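
So for example, assuming /dev/sdb1 is an existing partition (one that
has a PARTUUID), something like this should work; the device names
here are only examples:

    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1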

>
> On Fri, Aug 17, 2018 at 2:54 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>>
>> On Fri, Aug 17, 2018 at 10:24 AM, Robert Stanford
>> <rstanford8896@xxxxxxxxx> wrote:
>> >
>> >  I was using the ceph-volume create command, which I understand combines
>> > the
>> > prepare and activate functions.
>> >
>> > ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db
>> > /dev/sdb --block.wal /dev/sdb
>> >
>> >  That is the command syntax I've found on the web.  Is it wrong?
>>
>> It is very wrong :(
>>
>> If this was coming from our docs, it needs to be fixed because it will
>> never work.
>>
>> If you really want to place both block.db and block.wal on /dev/sdb,
>> you will need to create one LV for each. ceph-volume will not do this
>> for you.
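>>
>> A minimal sketch of that step, assuming sdb is otherwise empty (the
>> LV sizes are placeholders, size them for your workload):
>>
>>     pvcreate /dev/sdb
>>     vgcreate sdb-vg /dev/sdb
>>     lvcreate -L 30G -n block-lv sdb-vg
>>     lvcreate -L 2G -n wal-lv sdb-vg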
>>
>> And then you can pass those newly created LVs like:
>>
>>     ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc
>> --block.db sdb-vg/block-lv --block.wal sdb-vg/wal-lv
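>>
>> Once that completes, running 'ceph-volume lvm list' should show the
>> block.db and block.wal devices associated with the new OSD.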
>>
>>
>>
>> >
>> >  Thanks
>> > R
>> >
>> > On Fri, Aug 17, 2018 at 5:55 AM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>> >>
>> >> On Thu, Aug 16, 2018 at 9:00 PM, Robert Stanford
>> >> <rstanford8896@xxxxxxxxx> wrote:
>> >> >
>> >> >  I am following the steps to replace my filestore journal with a
>> >> > bluestore journal
>> >> >
>> >> > (http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/).
>> >> > It
>> >> > is broken at ceph-volume lvm create.  Here is my error:
>> >> >
>> >> > --> Zapping successful for: /dev/sdc
>> >> > Preparing sdc
>> >> > Running command: /bin/ceph-authtool --gen-print-key
>> >> > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
>> >> > --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
>> >> > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
>> >> > --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
>> >> > ff523216-350d-4ca0-9022-0c17662c2c3b 10
>> >> > Running command: vgcreate --force --yes
>> >> > ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a /dev/sdc
>> >> >  stdout: Physical volume "/dev/sdc" successfully created.
>> >> >  stdout: Volume group "ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a"
>> >> > successfully created
>> >> > Running command: lvcreate --yes -l 100%FREE -n
>> >> > osd-block-ff523216-350d-4ca0-9022-0c17662c2c3b
>> >> > ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a
>> >> >  stdout: Logical volume
>> >> > "osd-block-ff523216-350d-4ca0-9022-0c17662c2c3b"
>> >> > created.
>> >> > --> blkid could not detect a PARTUUID for device: sdb
>> >> > --> Was unable to complete a new OSD, will rollback changes
>> >> > --> OSD will be destroyed, keeping the ID because it was provided
>> >> > with
>> >> > --osd-id
>> >> > Running command: ceph osd destroy osd.10 --yes-i-really-mean-it
>> >> >  stderr: destroyed osd.10
>> >> > -->  RuntimeError: unable to use device
>> >> >
>> >> >  Note that sdb is the SSD journal device.  It was zapped beforehand.
>> >>
>> >> I can't see the actual command you used, but I am guessing it was
>> >> something like:
>> >>
>> >> ceph-volume lvm prepare --filestore --data /dev/sdb --journal /dev/sdb
>> >>
>> >> Which is not possible. There are a few ways you can do this (see:
>> >> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#filestore )
>> >>
>> >> With a raw device and a pre-created partition (must have a PARTUUID):
>> >>
>> >>     ceph-volume lvm prepare --data /dev/sdb --journal /dev/sdc1
>> >>
>> >> With LVs:
>> >>
>> >>     ceph-volume lvm prepare --data vg/my-data --journal vg/my-journal
>> >>
>> >> With an LV for data and a partition:
>> >>
>> >>     ceph-volume lvm prepare --data vg/my-data --journal /dev/sdc1
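>> >>
>> >> The "blkid could not detect a PARTUUID" error above suggests the
>> >> journal was passed as a whole device rather than a partition. A
>> >> minimal sketch of creating a suitable GPT partition (sgdisk ships
>> >> with gdisk; the size is just an example):
>> >>
>> >>     sgdisk --new=1:0:+10G --change-name=1:'ceph journal' /dev/sdb
>> >>     blkid /dev/sdb1   # should now report a PARTUUID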
>> >>
>> >> >
>> >> >  What is going wrong, and how can I fix it?
>> >> >
>> >> >  Thank you
>> >> >  R
>> >> >
>> >> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
