Re: CephFS: FSCache: Multiple "user" mounts: leads to kernel crash always


 



> On 23 Jun 2017, at 20:58, David Howells <dhowells@xxxxxxxxxx> wrote:
> 
> Yan, Zheng <zyan@xxxxxxxxxx> wrote:
> 
>> Is it convenient to fix this at the fscache/cachefiles layer? The problem
>> we are facing is that each fscache instance needs a unique key. But if we
>> create a unique key for each cephfs mount, we can't retain the fscache
>> across mounts.
> 
> I don't know enough about how Ceph is organised to say what the exact problem
> is.

The problem is that a user can mount the same ceph filesystem multiple times, each with
different mount options. The ceph kernel module internally creates a separate MDS client
for each such mount, and each MDS client has its own fscache instance. If we give the same
key to these fscache instances, a kernel oops happens. If we give a unique key to each
fscache instance, it's hard to compose a key that allows retaining the fscache across
mounts.

Maybe we can add a new mount option to cephfs that controls whether the cephfs FSID or
'FSID + session ID' is used as the fscache key, and only allow a single MDS client to use
the cephfs FSID as its key.

Regards
Yan, Zheng   


> 
> But note that there are reasons I don't currently allow multiple live netfs
> inodes to share a cache object, if that's what you're talking about:
> 
> (1) Namespacing.  An index key in one network namespace may be identical to
>     an index key in another namespace - but because they are in different
>     network namespaces, they don't actually refer to the same remote objects.
> 
>     As I'm not entirely clear on how ceph works, here's an AFS example: I can
>     create two containers, each with a dedicated network card connected to a
>     separate network.  I can create separate cells with the same name on each
>     network and servers with identical addresses, volumes with the same names
>     and I will get files with the same FID.  To fscache they are
>     indistinguishable.
> 
> (2) Coherency.  Say I have two superblocks that refer to the same volume on a
>     server, and say I'm looking at the same file in each.  These two
>     instances of the file have separate local inodes and page caches - even
>     though they may refer to the same remote file.  The VFS and VM don't
>     know that they are the same thing.  inotify/fnotify doesn't know that
>     they're the same thing.
> 
>     So, if I write to one instance of the file, this (a) may not be reflected
>     in the other file and (b) when I push it to the server, the client may
>     get an invalidation request back against the other instance.
> 
>     This is a particular problem for NFS where you can make multiple mounts
>     of the same remote path with different network I/O parameters - and each
>     gives you a different superblock.
> 
> (3) Callbacks.  fscache can make requests of the netfs under some
>     circumstances - but if a cache object is connected to multiple objects,
>     where does it direct its request?
> 
> David

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


