Re: different size of rbd

On Mon, Aug 6, 2018 at 3:24 AM Dai Xiang <xiang.dai@xxxxxxxxxxx> wrote:
>
> On Thu, Aug 02, 2018 at 01:04:46PM +0200, Ilya Dryomov wrote:
> > On Thu, Aug 2, 2018 at 12:49 PM <xiang.dai@xxxxxxxxxxx> wrote:
> > >
> > > I created an rbd image named dx-app with size 500G and mapped it as rbd0.
> > >
> > > But I find that the size reported differs between commands:
> > >
> > > [root@dx-app docker]# rbd info dx-app
> > > rbd image 'dx-app':
> > >     size 32000 GB in 8192000 objects  <====
> > >     order 22 (4096 kB objects)
> > >     block_name_prefix: rbd_data.1206643c9869
> > >     format: 2
> > >     features: layering
> > >     flags:
> > >     create_timestamp: Thu Aug  2 18:18:20 2018
> > >
> > > [root@dx-app docker]# lsblk
> > > NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> > > vda                               253:0    0    20G  0 disk
> > > └─vda1                            253:1    0    20G  0 part /
> > > vdb                               253:16   0   200G  0 disk
> > > └─vg--test--data-lv--data 252:0    0 199.9G  0 lvm  /test/data
> > > vdc                               253:32   0   200G  0 disk
> > > vdd                               253:48   0   200G  0 disk /pkgs
> > > vde                               253:64   0   200G  0 disk
> > > rbd0                              251:0    0  31.3T  0 disk /test/docker  <====
> > >
> > > [root@dx-app docker]# df -Th
> > > Filesystem                                  Type      Size  Used Avail Use% Mounted on
> > > /dev/vda1                                   xfs        20G   14G  6.5G  68% /
> > > devtmpfs                                    devtmpfs  7.8G     0  7.8G   0% /dev
> > > tmpfs                                       tmpfs     7.8G   12K  7.8G   1% /dev/shm
> > > tmpfs                                       tmpfs     7.8G  3.7M  7.8G   1% /run
> > > tmpfs                                       tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
> > > /dev/vde                                    xfs       200G   33M  200G   1% /test/software
> > > /dev/vdd                                    xfs       200G  117G   84G  59% /pkgs
> > > /dev/mapper/vg--test--data-lv--data xfs       200G  334M  200G   1% /test/data
> > > tmpfs                                       tmpfs     1.6G     0  1.6G   0% /run/user/0
> > > /dev/rbd0                                   xfs       500G   34M  500G   1% /test/docker  <====
> > >
> > > Which is true?
> >
> > Did you run "rbd create", "rbd map", "mkfs.xfs" and "mount" by
> > yourself?  If not, how was that mount created?
>
> Yes, I ran `rbd create`, `rbd map`, `mkfs.xfs` and `mount` myself.
>
> I think the size difference is because I ran `rbd resize 102400T` and
> then cancelled it.
>
> But the result is not what we want, right?

"rbd resize" resizes only the rbd image itself.  The filesystem needs
to be resized separately.  So if you created the filesystem and _then_
grew the image with "rbd resize", both are true: the old size for XFS
and the new size for the image.
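
For example (a rough sketch, not specific to your setup: assuming the
image is still mapped as /dev/rbd0, the XFS filesystem is mounted at
/test/docker, and a hypothetical target size of 1T), growing both in
step would look like:

    # rbd resize --size 1T dx-app      # grow the rbd image to the new size
    # xfs_growfs /test/docker          # grow XFS to fill the resized device

xfs_growfs takes the mount point rather than the block device, and it
can only grow the filesystem; XFS does not support shrinking, so the
image must never be shrunk below the size the filesystem currently
occupies.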

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



