Hi Greg,
I apologize for the lack of detail. To sum up, I first check that my image
exists:
$ rbd ls
img0
img1
Then I try to mount it:
$ sudo rbd map img0
rbd: add failed: (22) Invalid argument
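Right after the failed attempt, dmesg shows the same complaint as in my first
message (output trimmed to the relevant line):
$ dmesg | tail
libceph: bad option at 'rw'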
When I try the exact same command from the box with version 0.61.9, it
succeeds:
$ rbd ls
img0
img1
$ sudo rbd map img0
$ rbd showmapped
id pool image snap device
0 rbd img0 - /dev/rbd0
I have tried changing the data pool, the image format, and the image size. I
checked that the image was not locked and not mounted anywhere else, and that
the rbd kernel module was properly loaded. I even tried from another box
running 0.71, but I got the same error.
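For reference, the checks looked more or less like this (nothing suspicious in
any of the output):
$ rbd info img0          # image exists, shows format and size
$ rbd lock list img0     # no locks
$ rbd showmapped         # not currently mapped on this box
$ lsmod | grep rbd       # rbd and libceph modules are loaded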
I would love to do more troubleshooting myself, but the "Invalid
argument" error message does not give me much to start with. Any hint?
Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)
On 11/01/2013 06:10 PM, Gregory Farnum wrote:
I think this will be easier to help with if you provide the exact
command you're running. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Nov 1, 2013 at 3:07 AM, nicolasc <nicolas.canceill@xxxxxxxxxxx> wrote:
Hi every one,
I finally and happily managed to get my Ceph cluster (3 monitors among 8
nodes, each with 9 OSDs) running on version 0.71, but the "rbd map" command
is showing some weird behaviour.
I can list pools, create images and snapshots, alleluia!
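For instance, all of this goes through without complaint (names and sizes are
just examples):
$ ceph osd lspools
$ rbd create img0 --size 10240
$ rbd snap create img0@snap0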
However, mapping to a device with "rbd map" is not working. When I try this
from one of my nodes, the kernel says:
libceph: bad option at 'rw'
Which "rbd" translates into:
add failed: (22) Invalid argument
Any idea what that could indicate?
I am using a basic config: no authentication, default crushmap (I just
changed some weights), and basic network config (public net, cluster net). I
have tried both image formats, different sizes and pools.
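In case it matters, the relevant part of my ceph.conf boils down to something
like this (the addresses below are placeholders, not the real ones):
$ cat /etc/ceph/ceph.conf
[global]
auth cluster required = none
auth service required = none
auth client required = none
public network = 192.0.2.0/24
cluster network = 198.51.100.0/24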
Moreover, I have a client running rbd from Ceph version 0.61.9, and from
there everything works fine with "rbd map" on the same image. Both nodes
(Ceph 0.61.9 and 0.71) are running Linux kernel 3.2 for Debian.
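For what it's worth, the versions were double-checked directly on each box:
$ ceph --version
$ uname -r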
Hope you can provide some hints. Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com