On Mon, Aug 3, 2015 at 5:50 AM, Goncalo Borges <goncalo@xxxxxxxxxxxxxxxxxxx> wrote:
Dear CephFS gurus...
I forgot to mention in my previous email that I do understand that deletions may take a while to complete, since they are performed in the background by the MDS.
The deletion delay should be unrelated: although purging file data happens in a queue, the actual unlink of the file from its original location is immediate.
However, please note that in my example I am not only doing deletions but also creating and updating files, which, AFAIU, should have a near-immediate effect (say, within a couple of seconds) on the reported sizes. That is not what I am experiencing: sometimes the sizes seem never to be updated until a new operation is triggered.
I'm not sure we've really defined what is supposed to trigger recursive statistics (rstat) updates yet. If you're up for playing with this a bit more, it would be useful to check whether A) unmounting a client or B) executing "ceph daemon mds.<id> flush journal" causes the stats to be updated immediately. I'm not suggesting that you should actually have to do those things, but it will give a clearer sense of exactly where we should be updating things more proactively.
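For concreteness, a minimal sequence for (B) might look like the following (assuming an MDS id of "a" and a ceph-fuse mount at /cephfs; both are placeholders for your actual setup):

# on the MDS host: force journaled metadata out to the backing store
ceph daemon mds.a flush journal
# then re-read the recursive stats from the client mount
getfattr -n ceph.dir.rbytes /cephfs/objectsize4M_stripeunit512K_stripecount8/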
Cheers,
John
Cheers
Goncalo
On 08/03/2015 01:20 PM, Goncalo Borges wrote:
Dear CephFS gurus...
I am testing CephFS and, up to now, I am pretty happy with how it is performing. Thank you for this nice piece of software.
However, I do have a couple of doubts. Just for reference:
- The OS / kernel version I am using in my client is:
# cat /etc/redhat-release
Scientific Linux release 6.6 (Carbon)
# uname -a
Linux xxx.my.domain 3.10.84-1.el6.elrepo.x86_64 #1 SMP Sat Jul 11 11:33:48 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
- Both my client and the ceph storage cluster are using ceph and cephfs 0.94.2:
# rpm -qa | grep ceph | sort
ceph-0.94.2-0.el6.x86_64
ceph-common-0.94.2-0.el6.x86_64
ceph-fuse-0.94.2-0.el6.x86_64
libcephfs1-0.94.2-0.el6.x86_64
python-cephfs-0.94.2-0.el6.x86_64
- My client and servers are in the same data center, served by a 10GbE connection.
My question regards the space usage reported by CephFS. I am seeing problems with the reported used space, and I am not really sure what triggers an update, or whether the update actually happens at all. Here is a use case I've experienced. Please note that I do have a couple of TB of test data in the filesystem, which represents about 15% of my full system; I wonder whether an empty system would show the same problems.
1) Check the occupation in a given directory
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="65"
ceph.dir.files="65"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755813888"
ceph.dir.rctime="1438568092.09360459179"
ceph.dir.rentries="66"
ceph.dir.rfiles="65"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
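As an aside (not part of the original test), 549755813888 bytes is exactly 512 GiB, and a single attribute can be queried directly instead of dumping them all:

# echo $((549755813888 / 1024 / 1024 / 1024))
512
# getfattr -n ceph.dir.rbytes /cephfs/objectsize4M_stripeunit512K_stripecount8/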
2) Create a 2-character file ("b" plus "\n") in the directory. Those 2 characters should occupy 2 bytes:
# echo "b" > /cephfs/objectsize4M_stripeunit512K_stripecount8/b.txt
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="66"
ceph.dir.files="66"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755813890"
ceph.dir.rctime="1438568742.09305192950"
ceph.dir.rentries="67"
ceph.dir.rfiles="66"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
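--> Note that this first update is immediate and exact: 549755813890 - 549755813888 = 2, i.e. precisely the 2 bytes just written.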
3) Delete the b.txt file.
--> Please note that the space is not recovered!!!
# rm /cephfs/objectsize4M_stripeunit512K_stripecount8/b.txt
rm: remove regular file `/cephfs/objectsize4M_stripeunit512K_stripecount8/b.txt'? y
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="65"
ceph.dir.files="65"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755813890"
ceph.dir.rctime="1438568933.09123552306"
ceph.dir.rentries="66"
ceph.dir.rfiles="65"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
4) I've created a new file, which I called bb.txt, with many more characters inside.
--> Please note that the space reported by CephFS is still the same as in step 3), even after multiple updates to the new bb.txt file
# i=0; while [ $i -lt 1 ]; do echo "1" >> /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt; i=`expr $i + 1` ; done
(...)
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="66"
ceph.dir.files="66"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755813890"
ceph.dir.rctime="1438569099.09854895368"
ceph.dir.rentries="67"
ceph.dir.rfiles="66"
# i=0; while [ $i -lt 20480 ]; do echo "1" >> /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt; i=`expr $i + 1` ; done
# ls -l /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt
-rw-r--r-- 1 root root 49156 Aug 3 02:36 /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt
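(The loops shown here account for 2 + 20480 * 2 = 40962 bytes; the remaining 8194 bytes of the 49156 presumably come from the elided runs above.)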
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="66"
ceph.dir.files="66"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755813890"
ceph.dir.rctime="1438569099.09854895368"
ceph.dir.rentries="67"
ceph.dir.rfiles="66"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
5) I've created a new file, which I called c.txt.
--> Please note that, after a while (once c.txt had been written), the reported space is finally updated
# i=0; while [ $i -lt 20480 ]; do echo "1" >> /cephfs/objectsize4M_stripeunit512K_stripecount8/c.txt; i=`expr $i + 1` ; done
# ls -l /cephfs/objectsize4M_stripeunit512K_stripecount8/c.txt
-rw-r--r-- 1 root root 40960 Aug 3 02:38 /cephfs/objectsize4M_stripeunit512K_stripecount8/c.txt
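--> 20480 iterations x 2 bytes = 40960 bytes, matching the ls output exactly.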
# getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/objectsize4M_stripeunit512K_stripecount8/
ceph.dir.entries="67"
ceph.dir.files="67"
ceph.dir.layout="stripe_unit=524288 stripe_count=8 object_size=4194304 pool=cephfs_dt"
ceph.dir.rbytes="549755863046"
ceph.dir.rctime="1438569478.09995047310"
ceph.dir.rentries="68"
ceph.dir.rfiles="67"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
549755863046 - 549755813890 = 49156, i.e. exactly the size of bb.txt; the 40960 bytes of c.txt do not appear in rbytes yet:
# ls -l /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt
-rw-r--r-- 1 root root 49156 Aug 3 02:36 /cephfs/objectsize4M_stripeunit512K_stripecount8/bb.txt
# ls -l /cephfs/objectsize4M_stripeunit512K_stripecount8/c.txt
-rw-r--r-- 1 root root 40960 Aug 3 02:38 /cephfs/objectsize4M_stripeunit512K_stripecount8/c.txt
Suggestions? Bug? Comment?
Cheers
Goncalo
--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW 2006
T: +61 2 93511937
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com