copying files from one pool to another results in more free space?

Hi All
	I'm observing some weird behavior in the amount of space Ceph reports 
while copying files from an rbd image in one pool to an rbd image in another.  
The AVAIL number reported by 'ceph df' goes up as the copy proceeds rather 
than down!
	The output of 'ceph df' shows AVAIL at 9219G initially, and then 9927G 
after the copy has been running for a while.  (See below.)
	Ceph is somehow reclaiming ~700GB of space when it should only be losing 
space!  I like it!  :)
	Some details:
	I ran fstrim on both the source and destination mount points.
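	(For reference, that was just plain fstrim on each mounted filesystem; 
the mount points below are placeholders, not my real paths.)

# fstrim -v /mnt/source
# fstrim -v /mnt/destination
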
	The data is being copied from tibs/tibs-ecpool to 3-replica/3-replica-ec. 
tibs is a 3-replica pool backed by tibs-ecpool, which is a k=2, m=2 
erasure-coded pool.  3-replica/3-replica-ec is the same arrangement, but with 
fewer PGs.
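	(For anyone unfamiliar with this kind of pairing: one common way to front 
an erasure-coded pool with a replicated pool is a cache tier.  A rough, purely 
illustrative sketch (PG counts are made up, and these are not necessarily my 
exact commands, just the general shape):

# ceph osd erasure-code-profile set k2m2 k=2 m=2
# ceph osd pool create tibs-ecpool 256 256 erasure k2m2
# ceph osd pool create tibs 256 256 replicated
# ceph osd tier add tibs-ecpool tibs
# ceph osd tier cache-mode tibs writeback
# ceph osd tier set-overlay tibs-ecpool tibs
# ceph osd pool set tibs hit_set_type bloom
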
	No data is being deleted.
	I see that USED in 3-replica/3-replica-ec is going up as expected.
	But USED in both tibs and tibs-ecpool is going down.  That appears to be 
where the space is being reclaimed from.
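	(As a rough cross-check of where the bytes are going, per-image 
allocation in each pool can be compared before and after the copy; the pool 
name below is a placeholder.)

# rbd du -p <pool-name>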

	Question: why does reading the data cause it to take up less space?

Thanks!
Chad.

# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    22908G     9219G       13689G         59.76
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                 13        281         0         1990G           3
    tibs                22     72724M      0.31         1990G      612278
    tibs-ecpool         23      4555G     19.88         2985G     1166644
    cephfs_data         27          8         0         2985G           2
    cephfs_metadata     28     34800k         0         1990G          28
    3-replica           31       745G      3.25         1990G     5516809
    3-replica-ec        32       942G      4.12         2985G      241626

---------------------------- copy progresses ---------------------------------

# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED 
    22908G     9927G       12980G         56.66 
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS 
    rbd                 13        281         0         2284G           3 
    tibs                22     68456M      0.29         2284G       88561 
    tibs-ecpool         23      4227G     18.45         3427G     1082734 
    cephfs_data         27          8         0         3427G           2 
    cephfs_metadata     28     34832k         0         2284G          28 
    3-replica           31       745G      3.25         2284G     2676207 
    3-replica-ec        32       945G      4.13         3427G      242194 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


