Re: Fwd: NAS solution for CephFS

Hi Jeff,
Another question is about client caching when delegations are disabled.
I set a breakpoint on nfs4_op_read, the OP_READ processing function in
nfs-ganesha, and then read a file. The breakpoint was hit only on the
first read; subsequent reads of the same file never triggered OP_READ
and were served from the client-side cache. Is that right?
I also checked the NFS client code in the Linux kernel. Only when
cache_validity has NFS_INO_INVALID_DATA set does the client invalidate
its page cache and send OP_READ again, like this:
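    /* in nfs_revalidate_mapping(): drop cached pages when the data is flagged invalid */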
    if (nfsi->cache_validity & NFS_INO_INVALID_DATA) {
        ret = nfs_invalidate_mapping(inode, mapping);
    }
Now think about this scenario: client1 connects to ganesha1 and client2
connects to ganesha2. I read /1.txt on client1, so client1 caches the
data, and then I modify the file on client2. At that point, how does
client1 learn that the file was modified, and how does
NFS_INO_INVALID_DATA get set in cache_validity? My current understanding
of the revalidation path is sketched below.
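If I read the code correctly, the mechanism is the change attribute:
without a delegation the client relies on close-to-open consistency, so
it sends a GETATTR on open (and when its attribute cache times out), and
a changed change attribute flags the cached data invalid. Roughly,
paraphrasing nfs_update_inode() in fs/nfs/inode.c (not verbatim;
cached_change_attr here stands for the change attribute the client saw
last):

    /* paraphrase of the check in nfs_update_inode(), fs/nfs/inode.c */
    if ((fattr->valid & NFS_ATTR_FATTR_CHANGE) &&
        cached_change_attr != fattr->change_attr) {
            /* another client changed the file on the server: mark the
             * cached data invalid (among other flags) so the next read
             * goes back to the wire */
            nfsi->cache_validity |= NFS_INO_INVALID_DATA;
    }

So in the two-head scenario above, client1 would only notice the write
from client2 the next time it revalidates (typically at open, or when
the attribute cache expires), not immediately. Is that understanding
correct?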
Thanks,
Marvin

On Thu, Feb 14, 2019 at 7:27 PM Jeff Layton <jlayton@xxxxxxxxxxxxxxx> wrote:
>
> On Thu, 2019-02-14 at 10:35 +0800, Marvin Zhang wrote:
> > On Thu, Feb 14, 2019 at 8:09 AM Jeff Layton <jlayton@xxxxxxxxxxxxxxx> wrote:
> > > > Hi,
> > > > As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to
> > > > configure active/passive NFS-Ganesha to use CephFS. My question is
> > > > whether we can use active/active nfs-ganesha for CephFS.
> > >
> > > (Apologies if you get two copies of this. I sent an earlier one from the
> > > wrong account and it got stuck in moderation)
> > >
> > > You can, with the new rados-cluster recovery backend that went into
> > > ganesha v2.7. See here for a bit more detail:
> > >
> > > https://jtlayton.wordpress.com/2018/12/10/deploying-an-active-active-nfs-cluster-over-cephfs/
> > >
> > > ...also have a look at the ceph.conf file in the ganesha sources.
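> > >
> > > The gist, per that post and the sample config (the pool, namespace and
> > > nodeid values below are only illustrative), is to switch the recovery
> > > backend and point each ganesha head at a shared RADOS pool, roughly:
> > >
> > > NFSv4
> > > {
> > >         RecoveryBackend = rados_cluster;
> > > }
> > >
> > > RADOS_KV
> > > {
> > >         ceph_conf = "/etc/ceph/ceph.conf";
> > >         userid = "admin";
> > >         pool = "nfs-ganesha";
> > >         namespace = "grace";
> > >         # nodeid must be unique per ganesha head
> > >         nodeid = "ganesha-a";
> > > }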
> > >
> > > > In my view, state consistency is the only thing we need to think about.
> > > > 1. Lock support for Active/Active. Even though each nfs-ganesha server
> > > > maintains its own lock state, the real lock/unlock calls go through
> > > > ceph_ll_getlk/ceph_ll_setlk, so the Ceph cluster will handle locking
> > > > safely.
> > > > 2. Delegation support for Active/Active. It's similar to question 1;
> > > > ceph_ll_delegation will handle it safely.
> > > > 3. Nfs-ganesha cache support for Active/Active. As
> > > > https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
> > > > describes, we can configure the cache size as 1.
> > > > 4. Ceph-FSAL cache support for Active/Active. Like any other CephFS
> > > > client, there are no issues with cache consistency.
> > >
> > > The basic idea with the new recovery backend is to have the different
> > > NFS ganesha heads coordinate their recovery grace periods to prevent
> > > stateful conflicts.
> > >
> > > The one thing missing at this point is delegations in an active/active
> > > configuration, but that's mainly because of the synchronous nature of
> > > libcephfs. We have a potential fix for that problem but it requires work
> > > in libcephfs that is not yet done.
> > [marvin] So we should disable delegations on active/active and set the
> > conf like this, right?
> > NFSv4
> > {
> >         Delegations = false;
> > }
>
> Yes.
> --
> Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


