Re: Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

On Mon, Feb 18, 2019 at 2:46 AM Rainer Krienke <krienke@xxxxxxxxxxxxxx> wrote:
>
> Hello,
>
> Thanks for your answer, but zapping the disk did not make any
> difference; I still get the same error. Looking at the debug output, I
> found this error message, which is probably the root of the trouble:
>
> # ceph-volume lvm prepare --bluestore --data /dev/sdg
> ....
> stderr: 2019-02-18 08:29:25.544 7fdaa50ed240 -1
> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid

This "unparsable uuid" line is (unfortunately) expected from
bluestore, and will show up when the OSD is being created for the
first time.

The error messaging was improved a bit (see
https://tracker.ceph.com/issues/22285 and PR
https://github.com/ceph/ceph/pull/20090).
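
For what it's worth, the reason the line shows up on a brand-new OSD
is that mkfs first tries to read an existing fsid from the data dir
before generating one. Here is a minimal sketch of that logic, not the
actual BlueStore source; parse_uuid is a hypothetical stand-in for
ceph's uuid parser:

#include <fstream>
#include <iostream>
#include <string>

// Hypothetical stand-in for ceph's uuid parser: accepts only a
// plausible 36-character uuid string and rejects anything else.
static bool parse_uuid(const std::string& s) {
    return s.size() == 36;
}

// On a fresh OSD the data dir has no fsid yet, so the read comes
// back empty, parsing fails, and the "unparsable uuid" line is
// logged before a new fsid is generated and written.
static bool read_fsid_sketch(const std::string& osd_path) {
    std::ifstream f(osd_path + "/fsid");
    std::string s;
    std::getline(f, s);
    if (!parse_uuid(s)) {
        std::cerr << "bluestore(" << osd_path
                  << ") _read_fsid unparsable uuid\n";
        return false;  // harmless on first creation
    }
    return true;
}

So on its own that message is noise; the real failure is further down.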

>
> I found the bugreport below that seems to be exactly that problem I have:
> http://tracker.ceph.com/issues/15386

This doesn't look like the same thing; you are hitting an assert:

 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
function 'virtual int KernelDevice::read(uint64_t, uint64_t,
ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)

That looks like a valid issue to me; you might want to create a
new ticket at

https://tracker.ceph.com/projects/bluestore/issues/new
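
For context, that assert means the read got back fewer bytes than it
asked for. A simplified sketch, assuming the KernelDevice read path
boils down to a pread(2) on the block device (illustrative only, not
the actual ceph code):

#include <cassert>
#include <cstdint>
#include <unistd.h>

// If the kernel returns a short read, or an error (r < 0), the cast
// makes r != len and the assert fires, aborting mkfs. That points at
// the device or kernel I/O layer rather than at ceph-volume itself.
static void read_block_sketch(int fd, uint64_t off, uint64_t len,
                              char* buf) {
    ssize_t r = pread(fd, buf, len, off);
    assert((uint64_t)r == len);  // FAILED assert((uint64_t)r == len)
}

Checking dmesg around the time of the failure might help narrow down
where the short read comes from, and would be useful in the ticket.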


>
> However, there seems to be no solution so far.
>
> Does anyone have more information on how to get around this problem?
>
> Thanks
> Rainer
>
> On 15.02.19 at 18:12, David Turner wrote:
> > I have found that running a zap before all prepare/create commands with
> > ceph-volume helps things run more smoothly.  Zap is specifically there to
> > clear everything off a disk so that it is ready to be used as an
> > OSD.  Your wipefs command is still fine, but then I would lvm zap the
> > disk before continuing.  I would run the commands like this [1].  I also
> > prefer the single lvm create command as opposed to lvm prepare plus lvm
> > activate.  Try that out and see if you still run into the problems
> > creating the BlueStore filesystem.
> >
> > [1] ceph-volume lvm zap /dev/sdg
> > ceph-volume lvm prepare --bluestore --data /dev/sdg
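
As a side note, the single-command variant David prefers would look
like this (create is roughly prepare followed by activate):

ceph-volume lvm zap /dev/sdg
ceph-volume lvm create --bluestore --data /dev/sdg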
> >
> > On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke
> > <krienke@xxxxxxxxxxxxxx> wrote:
> >
> >     Hi,
> >
> >     I am quite new to ceph and am just trying to set up a ceph cluster.
> >     Initially I used ceph-deploy for this, but when I try to create a
> >     BlueStore OSD, ceph-deploy fails. Next I tried the direct way on one
> >     of the OSD nodes, using ceph-volume to create the OSD, but this also
> >     fails. Below you can see what ceph-volume says.
> >
> >     I ensured that there was no leftover LVM VG or LV on the disk sdg
> >     before I started the OSD creation for this disk. The very same error
> >     also happens on other disks, not just /dev/sdg. All the disks are
> >     4 TB in size, the Linux system is Ubuntu 18.04, and ceph is
> >     installed in version 13.2.4-1bionic from this repo:
> >     https://download.ceph.com/debian-mimic.
> >
> >     There are a VG and two LVs on the system for the Ubuntu system itself,
> >     which is installed on two separate disks configured as software RAID1
> >     with LVM on top of the RAID. But I cannot imagine that this would do
> >     any harm to ceph's OSD creation.
> >
> >     Does anyone have an idea what might be wrong?
> >
> >     Thanks for any hints
> >     Rainer
> >
> >     root@ceph1:~# wipefs -fa /dev/sdg
> >     root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
> >     Running command: /usr/bin/ceph-authtool --gen-print-key
> >     Running command: /usr/bin/ceph --cluster ceph --name
> >     client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> >     -i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
> >     Running command: /sbin/vgcreate --force --yes
> >     ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
> >      stdout: Physical volume "/dev/sdg" successfully created.
> >      stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
> >     successfully created
> >     Running command: /sbin/lvcreate --yes -l 100%FREE -n
> >     osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> >     ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
> >      stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
> >     created.
> >     Running command: /usr/bin/ceph-authtool --gen-print-key
> >     Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> >     --> Absolute path not found for executable: restorecon
> >     --> Ensure $PATH environment variable contains common executable
> >     locations
> >     Running command: /bin/chown -h ceph:ceph
> >     /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> >     Running command: /bin/chown -R ceph:ceph /dev/dm-8
> >     Running command: /bin/ln -s
> >     /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> >     /var/lib/ceph/osd/ceph-0/block
> >     Running command: /usr/bin/ceph --cluster ceph --name
> >     client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> >     mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
> >      stderr: got monmap epoch 1
> >     Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
> >     --create-keyring --name osd.0 --add-key
> >     AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
> >      stdout: creating /var/lib/ceph/osd/ceph-0/keyring
> >     added entity osd.0 auth auth(auid = 18446744073709551615
> >     key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
> >     Running command: /bin/chown -R ceph:ceph
> >     /var/lib/ceph/osd/ceph-0/keyring
> >     Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> >     Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> >     bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
> >     --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
> >     14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
> >      stderr: 2019-02-14 13:45:54.788 7f3fcecb3240 -1
> >     bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
> >      stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
> >     function 'virtual int KernelDevice::read(uint64_t, uint64_t,
> >     ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
> >     2019-02-14 13:45:54.841130
> >      stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
> >     FAILED assert((uint64_t)r == len)
> >      stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
> >     mimic (stable)
>
> --
> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
> 56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
> Web: http://userpages.uni-koblenz.de/~krienke
> PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


