Re: Size of RBD images

>-----Original Message-----
>From: Gruher, Joseph R
>Sent: Tuesday, November 19, 2013 12:24 PM
>To: 'Wolfgang Hennerbichler'; Bernhard Glomm
>Cc: ceph-users@xxxxxxxxxxxxxx
>Subject: RE:  Size of RBD images
>
>So is there any size limit on RBD images?  I had a failure this morning mapping a
>1TB RBD.  Deleting it now (why does it take so long to delete if it was never even
>mapped, much less written to?) and will retry with smaller images.  See the
>output below.  This is 0.72 on Ubuntu 13.04 with the 3.12 kernel.
>
>ceph@joceph-client01:~$ rbd info testrbd
>rbd image 'testrbd':
>        size 1024 GB in 262144 objects
>        order 22 (4096 kB objects)
>        block_name_prefix: rb.0.1770.6b8b4567
>        format: 1
>
>ceph@joceph-client01:~$ rbd map testrbd -p testpool01
>rbd: add failed: (13) Permission denied
>
>ceph@joceph-client01:~$ sudo rbd map testrbd -p testpool01
>rbd: add failed: (2) No such file or directory
>
>ceph@joceph-client01:/etc/ceph$ rados df
>pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
>data            -                          0            0            0            0            0            0            0            0            0
>metadata        -                          0            0            0            0            0            0            0            0            0
>rbd             -                          1            2            0            0            0           10            7            8            8
>testpool01      -                          0            0            0            0            0            0            0            0            0
>testpool02      -                          0            0            0            0            0            0            0            0            0
>testpool03      -                          0            0            0            0            0            0            0            0            0
>testpool04      -                          0            0            0            0            0            0            0            0            0
>  total used      2328785160            2
>  total avail     9218978040
>  total space    11547763200
>
>ceph@joceph-client01:/etc/ceph$ sudo modprobe rbd
>
>ceph@joceph-client01:/etc/ceph$ sudo rbd map testrbd --pool testpool01
>rbd: add failed: (2) No such file or directory
>
>ceph@joceph-client01:/etc/ceph$ rbd info testrbd
>rbd image 'testrbd':
>        size 1024 GB in 262144 objects
>        order 22 (4096 kB objects)
>        block_name_prefix: rb.0.1770.6b8b4567
>        format: 1
>

I think I figured out where I went wrong here.  I had thought that if you didn't specify the pool on the 'rbd create' command line you could later map the image into any pool.  In retrospect that doesn't make a lot of sense, and it appears that if you don't specify a pool at the create step the image just goes into the default 'rbd' pool.  See the example below.

ceph@joceph-client01:/etc/ceph$ sudo rbd create --size 1048576 testimage5 --pool testpool01
ceph@joceph-client01:/etc/ceph$ sudo rbd map testimage5 --pool testpool01

ceph@joceph-client01:/etc/ceph$ sudo rbd create --size 1048576 testimage6
ceph@joceph-client01:/etc/ceph$ sudo rbd map testimage6 --pool testpool01
rbd: add failed: (2) No such file or directory

ceph@joceph-client01:/etc/ceph$ sudo rbd map testimage6 --pool rbd
ceph@joceph-client01:/etc/ceph$
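
For anyone who hits the same error: the quickest way to see which pool an image actually landed in is to list each pool before mapping.  The commands below are only a sketch based on the session above (testimage7 is a made-up name for illustration):

# an image created without --pool should show up under the default 'rbd' pool
rbd ls --pool rbd
rbd ls --pool testpool01

# creating and mapping with the same explicit --pool avoids the ENOENT on map
sudo rbd create --size 1048576 testimage7 --pool testpool01
sudo rbd map testimage7 --pool testpool01

So the "(2) No such file or directory" just means the image doesn't exist in the pool named on the map command line, rather than anything to do with the image size.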
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



