Re: CephFS - Problems with the reported used space

On Tue, Aug 4, 2015 at 9:40 AM, Goncalo Borges
<goncalo@xxxxxxxxxxxxxxxxxxx> wrote:
> Hey John...
>
> First of all, thank you for the nice talks you have been giving.
>
> See the feedback on your suggestions below, plus some additional questions.
>
>> However, please note that in my example I am not only doing deletions but
>> also creating and updating files, which, AFAIU, should have an immediate
>> effect (let us say within a couple of seconds) on the system. This is not
>> what I am experiencing; sometimes my perception is that sizes are never
>> updated until a new operation is triggered.
>
>
>
> I'm not sure we've really defined what is supposed to trigger recursive
> statistics (rstat) updates yet: if you're up for playing with this a bit
> more, it would be useful to check if A) unmounting a client or B) executing
> "ceph daemon mds.<id> flush journal" causes the stats to be immediately
> updated.  Not suggesting that you should actually have to do those things,
> but it will give a clearer sense of exactly where we should be updating
> things more proactively.
>
>
> - Remounting the filesystem seems to trigger the update of a directory's size.
> Here is a simple example:
>
> 1) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> (...)
> ceph.dir.rbytes="549763203076"
>
> 2) # echo "d" > /cephfs/objectsize4M_stripeunit512K_stripecount8/d.txt
>
>
> 3) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> ceph.dir.rbytes="549763203076"   (It was like that for several seconds)
>
> 4) # umount /cephfs; mount -t ceph XX.XX.XX.XX:6789:/  /cephfs -o
> name=admin,secretfile=/etc/ceph/admin.secret
>
> 5) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> (...)
> ceph.dir.rbytes="549763203078"
>
> - However, flushing the journal did not have any effect:
>
> 1) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> (...)
> ceph.dir.rbytes="549763203079"
>
>
> 2) # echo "ee" > /cephfs/objectsize4M_stripeunit512K_stripecount8/ee.txt
>
> 3) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> (...)
> ceph.dir.rbytes="549763203079" (It was like that for several seconds)
>
> 4) # ceph daemon mds.rccephmds flush journal (on the MDS node)
> {
>     "message": "",
>     "return_code": 0
> }
>
> 5) # getfattr -d -m ceph.* /cephfs/objectsize4M_stripeunit512K_stripecount8/
> (...)
> ceph.dir.rbytes="549763203079"
>
>
> Now my questions:
>
> 1) I performed all of this testing to understand what the minimum size
> (as reported by df) of a 1-character file would be, and I am still unable to
> find a clear answer. In a regular POSIX filesystem, the size of a 1-character
> (1-byte) file is actually constrained by the filesystem block size: a
> 1-character file would occupy 4 KB in a filesystem configured with a 4 KB
> block size. In Ceph / CephFS I would expect a 1-character file to be
> constrained by the object size x number of replicas. However, I was not able
> to understand the numbers I was getting, which is why I started to dig into
> this topic. Could you also clarify this question?


rbytes is the sum of the sizes of the files under the directory; it has
nothing to do with block size. If there are sparse files, the rbytes of the
root directory can be larger than the used space of the FS.
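
For example, a minimal sketch of the sparse-file case (assuming a CephFS
mount at /cephfs; the directory and file names below are made up for
illustration):

# mkdir /cephfs/sparse_test
# truncate -s 1G /cephfs/sparse_test/hole.img        (a 1 GB hole; no data objects get written)
# getfattr -n ceph.dir.rbytes /cephfs/sparse_test    (once the rstats propagate, rbytes counts the full 1 GB logical size)
# df -h /cephfs                                      (the used space barely changes)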


>
> 2) I have a data and a metadata pool. It is possible to associate a file with
> its objects in the data pool via its inode. However, I have failed to find a
> way to associate a file with its metadata pool object. Is there a way to do
> that?
>

Object names in both the data and metadata pools are in the format <inode
number in hex>.xxxxxxxx. Regular files are stored in the data pool, and the
suffix of the object name is the stripe number (files are striped into 4 MB
objects). Directories are stored in the metadata pool; in most cases the
suffix of the object name is "00000000".
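
As an illustration, here is a hedged sketch of going from a path to the
corresponding object names (the pool names cephfs_data / cephfs_metadata and
the paths are assumptions; adjust them to your cluster):

# ino=$(stat -c %i /cephfs/some_file)                        (inode number of a regular file)
# printf '%x\n' "$ino"                                       (object names use the inode number in hex)
# rados -p cephfs_data ls | grep "^$(printf '%x' "$ino")\."  (lists <hex>.00000000, <hex>.00000001, ... one per 4 MB stripe)
# dirino=$(stat -c %i /cephfs/some_dir)                      (same idea for a directory, but against the metadata pool)
# rados -p cephfs_metadata ls | grep "^$(printf '%x' "$dirino")\."   (usually a single <hex>.00000000 object)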


> Thanks in Advance
> Cheers
> Goncalo
>


