Re: one question about cephfs disk usage

On Tue, Aug 2, 2011 at 12:30 AM, AnnyRen <annyren6@xxxxxxxxx> wrote:
> Total data size in cephfs is 15.5 GB.
>
> I run "ceph -s" to check the file system usage, it shows:
>
> root@MON1:~# ceph -s
> 2011-08-02 15:12:03.476151    pg v946: 1782 pgs: 1782 active+clean;
> 15995 MB data, 33880 MB used, 38859 GB / 40974 GB avail
> 2011-08-02 15:12:03.480076   mds e10: 1/1/1 up {0=a=up:active}
> 2011-08-02 15:12:03.480113   osd e7: 9 osds: 9 up, 9 in
> 2011-08-02 15:12:03.480183   log 2011-08-02 14:12:32.391116 mon0
> 192.168.10.1:6789/0 13 : [INF] mds0 192.168.10.2:6800/1716 up:act
> 2011-08-02 15:12:03.480263   mon e1: 1 mons at {a=192.168.10.1:6789/0}
>
>
> But when I remove all the files in the ceph FS using
>
> root@MON1:~# rm -rf /mnt/ceph/0802_*
>
>
> and then check the ceph data usage again with ceph -s:
>
> root@MON1:~# ceph -s
> 2011-08-02 15:20:05.265826    pg v958: 1782 pgs: 1782 active+clean;
> 6779 MB data, 15428 MB used, 38877 GB / 40974 GB avail
> 2011-08-02 15:20:05.269863   mds e10: 1/1/1 up {0=a=up:active}
> 2011-08-02 15:20:05.269900   osd e7: 9 osds: 9 up, 9 in
> 2011-08-02 15:20:05.269965   log 2011-08-02 14:12:32.391116 mon0
> 192.168.10.1:6789/0 13 : [INF] mds0 192.168.10.2:6800/1716 up:act
> 2011-08-02 15:20:05.270045   mon e1: 1 mons at {a=192.168.10.1:6789/0}
>
> To make sure that all files really are deleted:
>
> root@MON1:~# ls -lh /mnt/ceph/
> total 0
>
>
> Does anyone know why it still shows {6779 MB data, 15428 MB used}?
> What does "6779 MB data" mean?
>
> Thanks for your help. :-)
>
> Best Regards,
> Anny

When you delete a file, the data isn't actually cleared out right
away, for a couple of reasons[1]. Instead, the file is marked as
deleted on the MDS, and the MDS goes through and removes the objects
storing it as time allows. If you clear out the whole FS this
naturally takes a while, since it requires a number of messages
proportional to the amount of data in the cluster. If you look at
your usage again you'll probably see it's lower now. (As for what the
numbers mean: "data" is the logical size of the objects stored in the
cluster, while "used" is the raw disk space consumed across all the
OSDs, so it also includes the replicas.)
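If you want to watch the cleanup happen, something like the following
(purely illustrative; the grep just picks out the usage line) should
show the "data" figure shrinking over the next few minutes:

root@MON1:~# watch -n 30 'ceph -s 2>&1 | grep "MB data"'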
Some of the space is also used by the MDS journals (generally 100 MB
per MDS), and depending on how your storage is set up you might also
be seeing OSD journals counted in that used space (along with any
other files you have on the same partition as your OSD data store).
That should explain why you've got a bit of extra used space beyond
what's needed just to replicate the FS data. :)
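You can check that for yourself with plain df/du against the
partition backing one of your OSDs -- the hostname and paths below are
just placeholders for wherever your "osd data" and "osd journal"
settings in ceph.conf actually point:

root@OSD1:~# df -h /data/osd0
root@OSD1:~# du -sh /data/osd0/journal

Anything else living on that partition gets counted in the "used"
figure that ceph -s reports.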
-Greg
[1] Two important reasons. The first is that there might be other
references to the objects in question due to snapshots, in which case
you don't want to erase the data -- especially since the client doing
the erasing might not know about those snapshots.
The second is that deleting the data requires sending out a message
for each object -- i.e., every file gets at least one object, and a
file larger than 4MB gets one object per 4MB of data (and it's
replicated, so multiply the work by two or three for everything!). On
a large tree this can take a while, and you don't want the client
spending its time and bandwidth on work that's useless from the
user's point of view.
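As a rough back-of-the-envelope for your cluster: 15995 MB of data at
4MB per object is on the order of 4000 objects, and with 2-3x
replication that's roughly 8000-12000 object removals to chew through
in the background -- which is why the numbers drain gradually instead
of dropping to zero the instant rm returns.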

