I created an RBD image named dx-app with a size of 500G and mapped it as rbd0.
However, different commands report different sizes:
[root@dx-app docker]# rbd info dx-app
rbd image 'dx-app':
size 32000 GB in 8192000 objects <====
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1206643c9869
format: 2
features: layering
flags:
create_timestamp: Thu Aug 2 18:18:20 2018
[root@dx-app docker]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
vdb 253:16 0 200G 0 disk
└─vg--test--data-lv--data 252:0 0 199.9G 0 lvm /test/data
vdc 253:32 0 200G 0 disk
vdd 253:48 0 200G 0 disk /pkgs
vde 253:64 0 200G 0 disk
rbd0 251:0 0 31.3T 0 disk /test/docker <====
[root@dx-app docker]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda1 xfs 20G 14G 6.5G 68% /
devtmpfs devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 7.8G 12K 7.8G 1% /dev/shm
tmpfs tmpfs 7.8G 3.7M 7.8G 1% /run
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/vde xfs 200G 33M 200G 1% /test/software
/dev/vdd xfs 200G 117G 84G 59% /pkgs
/dev/mapper/vg--test--data-lv--data xfs 200G 334M 200G 1% /test/data
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/rbd0 xfs 500G 34M 500G 1% /test/docker <====
Which size is correct?
Is this normal?
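For what it's worth, the first two outputs are internally consistent. A quick sanity check, using only the figures shown above (8192000 objects, order 22 = 4 MiB objects):

```shell
# Sanity-check the sizes reported above, using the figures from `rbd info`.
objects=8192000          # object count from `rbd info`
object_mib=4             # order 22 => 4096 kB (4 MiB) objects
size_gib=$((objects * object_mib / 1024))
echo "${size_gib} GiB"   # prints "32000 GiB", matching `rbd info`
# 32000 GiB / 1024 = 31.25 TiB, which lsblk rounds to 31.3T
```

So `rbd info` and `lsblk` agree on the block device size (32000 GiB ≈ 31.3T). If these figures are right, the remaining gap is that `df` reports the size of the XFS filesystem on the device (500G), not the size of the device itself.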
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com