Re: Bug #630

Hi,

On Thu, 2011-02-24 at 15:13 +0800, Henry Chang wrote:
> Hi,
> 
> I want to confirm if the following problem is related to Bug #630.
> 
> Step 1. On client A, create a 10G file with dd.
> "ceph -w" shows the total data size increased by 10G.
> 
> Step 2. On client B, touch the file.
> 
> Step 3. On client A, remove the file.
> "ceph -w" shows the total data size is NOT decreased.
> 
> Step 4. Umount ceph on client B.
> "ceph -w" shows the total data size decreased by 10G.

I tried your test and I'm seeing the same behaviour. I wrote a 1GB.bin
and a 4MB.bin and cloned the linux-2.6 tree; after removing them:

pg v7776: 12336 pgs: 12336 active+clean; 767 MB data, 2925 MB used, 544 GB / 558 GB avail

But there is nothing left on the filesystem.
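
For anyone who wants to reproduce this, a rough sketch of the steps
(the mountpoints and the clone URL are just examples, adjust them for
your own setup):

# On client A (Ceph mounted at /mnt/ceph, assumed path): write some data.
dd if=/dev/zero of=/mnt/ceph/1GB.bin bs=1M count=1024
dd if=/dev/zero of=/mnt/ceph/4MB.bin bs=1M count=4
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git /mnt/ceph/linux-2.6

# On client B: touch the big file so a second client holds caps on it.
touch /mnt/ceph/1GB.bin

# Back on client A: remove everything and watch the cluster stats.
rm -rf /mnt/ceph/1GB.bin /mnt/ceph/4MB.bin /mnt/ceph/linux-2.6
ceph -s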

Looking closer, I saw that the pool "data" still contained 16937 objects
with a total size of 662876021 bytes.

The "metadata" pool still contains 957 objects, total size is 132289047
bytes.
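
For reference, those per-pool numbers can be pulled with the rados tool
(the exact columns differ a bit between versions):

# Summary of objects and KB used per pool.
rados df

# Or count the objects in a pool directly.
rados -p data ls | wc -l
rados -p metadata ls | wc -l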

132289047 + 662876021 = 795165068 bytes, which is roughly 758 MB.

That is almost the same as what "ceph -s" is reporting.
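
Quick sanity check of that sum with the shell:

# Sum the two pool sizes in bytes and convert to MB.
echo $(( (132289047 + 662876021) / 1024 / 1024 ))
# prints 758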

I removed those files about 2 hours ago; since then there have been no
I/Os on the filesystem.

My replication is set to 3, btw.
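
If you want to verify the replication level on your side, it is listed
per pool in the osd dump (the field shows up as "rep size" or "size"
depending on the version):

ceph osd dump -o - | grep -i pool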

Wido


