Re: Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

I have found that running a zap before any prepare/create command with ceph-volume makes things run more smoothly. Zap exists specifically to clear everything from a disk and make it ready to be used as an OSD. Your wipefs command is still fine, but I would then lvm zap the disk before continuing. I would run the commands as in [1]. I also prefer the single lvm create command over separate lvm prepare and lvm activate steps, so the example uses create. Try that out and see whether you still run into problems creating the BlueStore filesystem.

[1] ceph-volume lvm zap /dev/sdg
ceph-volume lvm create --bluestore --data /dev/sdg
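
If the create succeeds, a quick sanity check looks roughly like this (a sketch, assuming the default cluster name "ceph" and that sdg is the only new device):

ceph-volume lvm list /dev/sdg   # shows the block LV plus the osd id and fsid for the device
ceph osd tree                   # the new OSD should appear here and come up shortly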

On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke <krienke@xxxxxxxxxxxxxx> wrote:
Hi,

I am quite new to ceph and am just trying to set up a ceph cluster. Initially
I used ceph-deploy for this, but when I tried to create a BlueStore OSD,
ceph-deploy failed. Next I tried the direct way on one of the OSD nodes,
using ceph-volume to create the OSD, but this also fails. Below you can
see what ceph-volume says.

I ensured that there was no leftover LVM VG or LV on the disk sdg
before I started the OSD creation for this disk. The very same error
also happens on other disks, not just /dev/sdg. All the disks are 4 TB
in size, the Linux system is Ubuntu 18.04, and ceph is installed in
version 13.2.4-1bionic from this repo:
https://download.ceph.com/debian-mimic.

There is a VG and two LVs on the system for the Ubuntu system itself,
which is installed on two separate disks configured as software RAID1
with LVM on top of the RAID. But I cannot imagine that this would do any
harm to ceph's OSD creation. (See the checks I sketch below.)
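
For reference, this is roughly how such a setup can be double-checked
before creating the OSD (a sketch of the checks, output omitted):

pvs && vgs && lvs      # only the RAID1 OS VG and its two LVs should be listed
wipefs -n /dev/sdg     # -n (no-act) only reports signatures, it does not erase
lsblk /dev/sdg         # the disk should show no partitions or holders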

Does anyone have an idea what might be wrong?

Thanks for hints
Rainer

root@ceph1:~# wipefs -fa /dev/sdg
root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
Running command: /sbin/vgcreate --force --yes
ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
 stdout: Physical volume "/dev/sdg" successfully created.
 stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
successfully created
Running command: /sbin/lvcreate --yes -l 100%FREE -n
osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
 stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph
/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
Running command: /bin/chown -R ceph:ceph /dev/dm-8
Running command: /bin/ln -s
/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
/var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
--create-keyring --name osd.0 --add-key
AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth auth(auid = 18446744073709551615
key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
--keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
 stderr: 2019-02-14 13:45:54.788 7f3fcecb3240 -1
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
function 'virtual int KernelDevice::read(uint64_t, uint64_t,
ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x102) [0x7f3fc60d33e2]
 stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 10: (main()+0x4222) [0x561370cf7462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 12: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
 stderr: 2019-02-14 13:45:54.840 7f3fcecb3240 -1
/build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function
'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*,
IOContext*, bool)' thread 7f3fcecb3240 time 2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x102) [0x7f3fc60d33e2]
 stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 10: (main()+0x4222) [0x561370cf7462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 12: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
 stderr: -15> 2019-02-14 13:45:54.788 7f3fcecb3240 -1
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: 0> 2019-02-14 13:45:54.840 7f3fcecb3240 -1
/build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In function
'virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*,
IOContext*, bool)' thread 7f3fcecb3240 time 2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x102) [0x7f3fc60d33e2]
 stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 10: (main()+0x4222) [0x561370cf7462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 12: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7f3fcecb3240 thread_name:ceph-osd
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (()+0x92aa40) [0x561371357a40]
 stderr: 2: (()+0x12890) [0x7f3fc47d7890]
 stderr: 3: (gsignal()+0xc7) [0x7f3fc36b2e97]
 stderr: 4: (abort()+0x141) [0x7f3fc36b4801]
 stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x250) [0x7f3fc60d3530]
 stderr: 6: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 7: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 8: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 10: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 12: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 14: (main()+0x4222) [0x561370cf7462]
 stderr: 15: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 16: (_start()+0x2a) [0x561370dc095a]
 stderr: 2019-02-14 13:45:54.840 7f3fcecb3240 -1 *** Caught signal
(Aborted) **
 stderr: in thread 7f3fcecb3240 thread_name:ceph-osd
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (()+0x92aa40) [0x561371357a40]
 stderr: 2: (()+0x12890) [0x7f3fc47d7890]
 stderr: 3: (gsignal()+0xc7) [0x7f3fc36b2e97]
 stderr: 4: (abort()+0x141) [0x7f3fc36b4801]
 stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x250) [0x7f3fc60d3530]
 stderr: 6: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 7: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 8: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 10: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 12: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 14: (main()+0x4222) [0x561370cf7462]
 stderr: 15: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 16: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
 stderr: 0> 2019-02-14 13:45:54.840 7f3fcecb3240 -1 *** Caught signal
(Aborted) **
 stderr: in thread 7f3fcecb3240 thread_name:ceph-osd
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (()+0x92aa40) [0x561371357a40]
 stderr: 2: (()+0x12890) [0x7f3fc47d7890]
 stderr: 3: (gsignal()+0xc7) [0x7f3fc36b2e97]
 stderr: 4: (abort()+0x141) [0x7f3fc36b4801]
 stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x250) [0x7f3fc60d3530]
 stderr: 6: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 7: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 8: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 9: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 10: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 11: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 12: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 14: (main()+0x4222) [0x561370cf7462]
 stderr: 15: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 16: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
osd purge-new osd.0 --yes-i-really-mean-it
 stderr: purged osd.0
-->  RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd
--cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap
/var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data
/var/lib/ceph/osd/ceph-0/ --osd-uuid
14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
Web: http://userpages.uni-koblenz.de/~krienke
PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
