Re: rbd mapping fails - maybe solved

Thanks Sage,

I just tried various versions from gitbuilder and finally found one that worked ;-)

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main
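
A rough sketch of how that repository line gets used, for anyone following along (the apt commands are standard; only the URL comes from the line above, and the gitbuilder packages may warn about an untrusted signing key unless the Ceph autobuild key is imported):

echo "deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main" \
  | sudo tee /etc/apt/sources.list.d/ceph-gitbuilder.list   # add the gitbuilder repo
sudo apt-get update                                         # refresh package lists
sudo apt-get install --only-upgrade ceph ceph-common        # pull in the dumpling build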

Looks like it works perfectly; at first glance the performance is much better than with cuttlefish.

Do you need any tests from me for my problem with 0.67.2-16-gd41cf86?
I could do so on Monday.

I didn't run udevadm settle or cat /proc/partitions, but I checked
/dev/rbd* -> not present
and
tree /dev/disk
which also showed no hint of a new device other than my hard disk partitions.
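
In command form, those checks were roughly the following (rbd showmapped is added here as one more way to confirm, assuming the rbd CLI is available on the client):

ls /dev/rbd*        # no device nodes present
tree /dev/disk      # no new block device beyond the hard disk partitions
rbd showmapped      # lists images the kernel client currently has mapped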

Since the dumpling version now seems to work, I would otherwise keep using that
to get more familiar with Ceph.

Bernhard


*Ecologic Institute* Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH


On Aug 30, 2013, at 5:05 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:

Hi Bernhard,

On Fri, 30 Aug 2013, Bernhard Glomm wrote:
Hi all,

Due to a problem with ceph-deploy I currently use

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ raring main
(ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))

Now the initialization of the cluster works like a charm,
ceph health is okay,

Great; this will get backported to dumpling shortly and will be included
in the 0.67.3 release.

just the mapping of the created rbd is failing.

---------------------
root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool --yes-i-really-really-mean-it
pool 'kvm-pool' deleted
root@ping[/1]:~ # ceph osd lspools

0 data,1 metadata,2 rbd,
root@ping[/1]:~ #
root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
pool 'kvm-pool' created
root@ping[/1]:~ # ceph osd lspools
0 data,1 metadata,2 rbd,4 kvm-pool,
root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
set pool 4 min_size to 2
root@ping[/1]:~ # ceph osd dump | grep 'rep size'
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 owner 0
pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 1000 pgp_num 1000 last_change 33 owner 0
root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd ls kvm-pool
atom03.cimg
atom04.cimg
root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
rbd image 'atom03.cimg':
        size 4000 MB in 1000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.114d.2ae8944a
        format: 1
root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
rbd image 'atom04.cimg':
        size 4000 MB in 1000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.127d.74b0dc51
        format: 1
root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
rbd: '/sbin/udevadm settle' failed! (256)
root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin --keyring /etc/ceph/ceph.client.admin.keyring
^Crbd: '/sbin/udevadm settle' failed! (2)
root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring /etc/ceph/ceph.client.admin.keyring
rbd: '/sbin/udevadm settle' failed! (256)
---------------------

What happens if you run '/sbin/udevadm settle' from the command line?

Also, this is the very last step before rbd exits (normally with success), so
my guess is that the rbd mapping actually succeeded; try cat /proc/partitions
or ls /dev/rbd.
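
A minimal sketch of those checks (the grep pattern assumes the kernel registers the devices as rbd0, rbd1, ...):

/sbin/udevadm settle; echo $?    # exit code 0 means the udev event queue drained cleanly
grep rbd /proc/partitions        # does the kernel list an rbd block device?
ls /dev/rbd*                     # device nodes created by udev, if the mapping succeeded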

sage


Am I missing something?
I think this set of commands worked perfectly with cuttlefish.

TIA

Bernhard

--

____________________________________________________________________________
Bernhard Glomm
IT Administration

Phone:
+49 (30) 86880 134
Fax:
+49 (30) 86880 100
Skype:
bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

____________________________________________________________________________



