Re: CephFS tar archiving immediately after writing

In searching the code for rbytes, it makes a lot of sense how this is useful for quotas in general.  While nothing references this option by name in the ceph-fuse code, it is in the general client config options as `client_dirsize_rbytes = false`.  Setting that in the config file and remounting ceph-fuse stopped the recursive sizes from being displayed on folders and resolved the errors when tarring a folder immediately after modifying files in it.
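For anyone hitting this later, the change was roughly the following, assuming a standard ceph.conf layout and our /cephfs mount point:

# ceph.conf on the client -- disable recursive directory sizes
[client]
    client_dirsize_rbytes = false

# then remount ceph-fuse so the option takes effect
fusermount -u /cephfs
ceph-fuse /cephfs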

Thank you Greg for your help.

On Fri, Sep 7, 2018 at 2:52 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
There's an option when mounting the FS on the client to not display those (on the kernel client it's "norbytes"; see http://docs.ceph.com/docs/master/man/8/mount.ceph/?highlight=recursive; I didn't poke around to find it for ceph-fuse, but it should be there). Calculating them is not very expensive (or at least, the expense is intrinsic to other necessary functions), so you can't disable it on the server.
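For the kernel client, that would look something like this (monitor host, credentials, and mount point are placeholders):

mount -t ceph mon1:6789:/ /cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,norbytes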
-Greg

On Fri, Sep 7, 2018 at 11:48 AM David Turner <drakonstein@xxxxxxxxx> wrote:
Is it possible to disable this feature?  Very few filesystems calculate the size of a folder's contents.  I know I enjoy it in multiple use cases, but there are some use cases where it is not useful and causes unnecessary lag/processing.  I'm not certain how this is calculated, but I can imagine that use cases with millions of files in CephFS, wasting time calculating a folder size that nobody looks at, are not ideal.

On Fri, Sep 7, 2018 at 2:11 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
Hmm, I *think* this might be something we've seen before: the result of our recursive statistics (i.e., the thing that makes directory sizes reflect the data within them instead of one block size). If that's the case, it should resolve within a few seconds, or maybe tens of seconds under stress.
But there's also some work in progress to force a full flush of those rstats up the tree to enable good differential backups. Not sure what the status of that is.
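You can inspect those rstats directly through CephFS's virtual extended attributes, e.g. (path is just an example):

getfattr -n ceph.dir.rbytes /cephfs/17857283    # recursive bytes under the directory
getfattr -n ceph.dir.rfiles /cephfs/17857283    # recursive file count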
-Greg

On Fri, Sep 7, 2018 at 11:06 AM David Turner <drakonstein@xxxxxxxxx> wrote:
We have an existing workflow that we've moved from one server sharing a local disk via NFS to secondary servers, to all of them mounting CephFS.  The primary server runs a script similar to [1] below, but since we moved it into CephFS, we get [2] this error.  We added the sync in there to try to help, but it had no effect.

Does anyone have a suggestion other than looping over a sleep to wait for the tar to succeed?  Waiting just a few seconds before running tar does work, but during a Ceph recovery situation, I can see that delay needing to be longer and longer.
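For concreteness, the sleep-and-retry loop I'd rather avoid would look something like this (retry count and delay are arbitrary):

# retry tar until the directory's metadata stops changing under it
for attempt in 1 2 3 4 5; do
    tar --ignore-failed-read -cvzf /cephfs/17857283.tgz /cephfs/17857283 && break
    sleep 5   # give the recursive stats time to settle before retrying
done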


[1] #!/bin/bash
# copy the fresh SQL dump into CephFS
cp -R /tmp/17857283/db.sql /cephfs/17857283/
# flush client buffers before archiving (added to try to help; it didn't)
sync
tar --ignore-failed-read -cvzf /cephfs/17857283.tgz /cephfs/17857283

[2] tar: Removing leading `/' from member names
/cephfs/17857283/
/cephfs/17857283/db.sql
tar: /cephfs/17857283: file changed as we read it
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
