Re: ceph-volume error messages

On Mon, Dec 11, 2017 at 2:26 AM, Martin, Jeremy <jmartin@xxxxxxxx> wrote:
> Hello,
>
>
>
> We are currently doing some evaluations on a few storage technologies, and
> ceph has made it onto our short list, but the issue is we haven’t been able
> to evaluate it because I can’t seem to get it to deploy.
>
>
>
> Before I spend the time spreading it across some hardware and purchasing the
> product, I thought I would try it across a few VMs (ten, to be accurate:
> three monitors, one admin, and six storage nodes), as this reflects the
> configuration of the end hardware for this deployment.  The configuration
> went smoothly and without issue until we got to the OSD provisioning.  All
> the nodes were done the same way: ceph-deploy install ceph-admin ceph-mon1 …
> ceph-osd1 … --release=luminous.
>
>
>
> We created storage on the first osd easy enough: sudo ceph-volume lvm
> prepare --bluestore --data ceph-osd1-sata/store followed by: sudo
> ceph-volume lvm activate --bluestore 0 3af51a23-087c-4e6c-ace9-fbe6c7eb23be
>
>

This is problematic because ceph-deploy didn't have ceph-volume
capabilities until recently, and it required manually adding the
bootstrap key on remote nodes (which it looks like you figured out).
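
For reference, a minimal sketch of putting the bootstrap key in place by
hand (assuming a node with client.admin credentials, the default cluster
name "ceph", and your hostnames; adjust paths as needed):

    # export the bootstrap-osd key from the cluster
    ceph auth get client.bootstrap-osd -o ceph.keyring

    # copy it to the OSD node, into the path ceph-volume expects
    scp ceph.keyring ceph-osd2:/tmp/
    ssh ceph-osd2 'sudo install -o ceph -g ceph -m 600 \
        /tmp/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring'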

The current master branch of ceph-deploy can work with ceph-volume, but
it needs slightly different input (the API had to change).
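
If you try that route, the new-style invocation looks roughly like this
(the device, vg/lv, and hostname here are placeholders based on your
setup, so treat it as a sketch rather than the exact command set):

    # whole device; ceph-deploy drives ceph-volume under the hood
    ceph-deploy osd create --data /dev/sdb ceph-osd2

    # or point it at an existing logical volume (vg/lv)
    ceph-deploy osd create --data ceph-osd2-sata/data ceph-osd2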

I would advise trying out ceph-ansible as well, since it also has support
for ceph-volume (it does require the vg/lv to exist beforehand, though).
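
As a rough outline of what that looks like (reusing your vg/lv names;
this is an untested sketch of the lvm scenario, not a complete
group_vars/osds.yml):

    # create the vg/lv up front on each OSD node, e.g.:
    #   vgcreate ceph-osd2-sata /dev/sdb
    #   lvcreate -n data -l 100%FREE ceph-osd2-sata

    osd_scenario: lvm
    lvm_volumes:
      - data: data
        data_vg: ceph-osd2-sata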
>
> All was good at this point, the cluster reports ok; then we went to
> provision the second osd and received a bunch of error messages, so,
> thinking I messed something up, I reformatted the node and redeployed.
> Then I tried these commands and received the same errors (the version is
> 12.2.2, for reference, on CentOS 7 with current updates).
>
>
>
> [cephuser@ceph-osd2 ~]$ sudo ceph-volume lvm prepare --bluestore --data
> ceph-osd2-sata/data
>
>
>
> stderr: 2017-12-11 01:47:11.535543 7fdb35985700 -1 auth: unable to find a
> keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or
> directory
>
> stderr: 2017-12-11 01:47:11.535554 7fdb35985700 -1 monclient: ERROR: missing
> keyring, cannot use cephx for authentication
>
> stderr: 2017-12-11 01:47:11.535555 7fdb35985700  0 librados:
> client.bootstrap-osd initialization error (2) No such file or directory
>
> stderr: [errno 2] error connecting to the cluster
>
> -->  RuntimeError: Unable to create a new OSD id
>
>
>
>
>
> So the big question here is why did osd1 receive a key in the keyring file
> when none of the other five OSDs did?  So for kicks and giggles I figured I
> would just copy the key over from osd1; that seemed to take care of that
> error message, but I got a bunch more:
>
>
>
>
>
> [cephuser@ceph-osd2 ~]$ sudo scp ceph-osd1:/var/lib/ceph/bootstrap-osd/*
> /var/lib/ceph/bootstrap-osd
>
> [cephuser@ceph-osd2 ~]$ sudo ceph-volume lvm prepare --bluestore --data
> ceph-osd2-sata/data
>
>
>
> Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
>
> Running command: chown -R ceph:ceph /dev/dm-3
>
> Running command: sudo ln -s /dev/ceph-osd2-sata/data
> /var/lib/ceph/osd/ceph-1/block
>
> Running command: sudo ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/osd/ceph-1/activate.monmap
>
> stderr: got monmap epoch 1
>
> Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring
> --create-keyring --name osd.1 --add-key
> AQBrKi5ae84UFhAAjyVdMkhsoTYy74Ml0eIobQ==
>
> stdout: creating /var/lib/ceph/osd/ceph-1/keyring
>
> added entity osd.1 auth auth(auid = 18446744073709551615
> key=AQBrKi5ae84UFhAAjyVdMkhsoTYy74Ml0eIobQ== with 0 caps)
>
> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
>
> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
>
> Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore
> --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --key
> **************************************** --osd-data
> /var/lib/ceph/osd/ceph-1/ --osd-uuid 1b9b50c8-8daa-4a23-8400-73a006bbc8fa
> --setuser ceph --setgroup ceph
>
> stderr: 2017-12-11 01:49:29.748536 7f5f0d034d00 -1
> bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
> label at offset 102: buffer::malformed_input: void
> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
> end of struct encoding
>
> stderr: 2017-12-11 01:49:29.749271 7f5f0d034d00 -1
> bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
> label at offset 102: buffer::malformed_input: void
> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
> end of struct encoding
>
> stderr: 2017-12-11 01:49:29.749631 7f5f0d034d00 -1
> bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode
> label at offset 102: buffer::malformed_input: void
> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
> end of struct encoding
>
> 2017-12-11 01:49:29.749747 7f5f0d034d00 -1
> bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
>
> stderr: 2017-12-11 01:49:31.757501 7f5f0d034d00 -1 key
> AQBrKi5ae84UFhAAjyVdMkhsoTYy74Ml0eIobQ==

Although that last output looks like a bunch of errors, those are
normal and expected from bluestore when an initial deploy is done.

I think we need better wording in ceph-volume: a final informational
message saying that the process completed successfully.
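
In the meantime, a quick way to confirm the prepare/activate actually
worked is to ask ceph-volume and the cluster directly (lvm list should
be available in 12.2.2; if not, ceph osd tree alone will tell you):

    # show the OSDs ceph-volume knows about on this node
    sudo ceph-volume lvm list

    # after activate, the new OSD should appear and come up/in
    sudo ceph osd tree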
>
>
>
>
>
> So the question now becomes: what am I missing?  Any ideas or pointers
> would be great.
>
>
>
> Jeremy
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



