On Wed, May 4, 2016 at 10:06 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> On Wed, May 4, 2016 at 2:16 AM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>> On Wed, May 4, 2016 at 4:51 PM, Burkhard Linke
>> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>>> Hi,
>>>
>>> How does CephFS handle locking in case of missing explicit locking control
>>> (e.g. flock / fcntl)? And what's the default of mmap'ed memory access in
>>> that case?
>>>
>>
>> Nothing special. Actually, I have no idea why using flock improves
>> performance. Could you please enable debug logging and send the log to us?
>
> Okay, so it sounds like this isn't so much flock file locking as you
> added a syscall telling it not to worry about synchronization, and now
> you want a way to disable our consistency semantics on <some subset of
> files>. Exactly what change did you make to your application? Can you
> share the key syscall?
>
> Programmatically you can use the lazyIO flags we have, but I can't
> offhand think of anything you can specify per-mount or similar. That's
> an interesting request, hmm.... Zheng, Sage, any thoughts?
> -Greg

We could use a mount option or a config option to enable it. But O_LAZY hasn't been tested.

>
>> Run the following commands while your application (without flock) is
>> running, and send the log to us:
>>
>> ceph daemon client.xxx config set debug_client 20
>> sleep 30
>> ceph daemon client.xxx config set debug_client 0
>>
>>> Regards,
>>> Burkhard
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com