sb creation woes (Was: [git pull] new mount API)

Some musings.  Let's take NFS as an example, because a lot of this
seems to revolve around issues with distributed filesystems.

What's the logical way to configure the thing when we have multiple
servers and multiple mounts per server?  I'd think having two levels
would make sense:

 1) configure a connection (IP address, protocol, options)

 2) mount instances of this connection into the tree

This doesn't fit the current /etc/fstab model: the information
currently contained therein would be split into two distinct sources,
e.g.:

/etc/fs-conn:

foo {
    type=nfs,
    vers=4,
    address = nfs.foo.com,
    rsize=8192,
    wsize=1048576,
};

bar {
    type=nfs,
    vers=3,
    address = nfs.bar.com,
};

/etc/fs-mounts:

foo /one /mnt/foo/one nosuid,nodev,ro
foo /two /mnt/foo/two defaults
bar / /mnt/bar noatime


This closely matches what the kernel currently does (each entry in
fs-conn would create a super block, each entry in fs-mounts would
create a mount), except that a certain amount of mind reading by the
kernel is replaced with explicit configuration.
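
For reference, a rough, untested sketch of how one fs-conn entry plus
one of its fs-mounts lines maps onto the new syscalls
(fsopen/fsconfig/fsmount/move_mount): the connection parameters go
into the fs_context, FSCONFIG_CMD_CREATE is the point where the super
block comes into existence, and fsmount()/move_mount() attach the
mount.  Today each mount goes through its own fsopen(), which is
exactly where the mind reading happens.  Error handling is omitted
and the option names are only illustrative; the raw syscall() wrappers
assume the __NR_* numbers are in your headers.

#include <sys/syscall.h>
#include <linux/mount.h>
#include <unistd.h>
#include <fcntl.h>

/* No libc wrappers assumed; go through syscall(2) directly. */
static int fsopen(const char *fstype, unsigned int flags)
{
	return syscall(__NR_fsopen, fstype, flags);
}

static int fsconfig(int fd, unsigned int cmd, const char *key,
		    const void *value, int aux)
{
	return syscall(__NR_fsconfig, fd, cmd, key, value, aux);
}

static int fsmount(int fd, unsigned int flags, unsigned int attrs)
{
	return syscall(__NR_fsmount, fd, flags, attrs);
}

static int move_mount(int from_dfd, const char *from_path,
		      int to_dfd, const char *to_path, unsigned int flags)
{
	return syscall(__NR_move_mount, from_dfd, from_path,
		       to_dfd, to_path, flags);
}

int main(void)
{
	int fsfd, mntfd;

	/* fs-conn "foo": build the fs_context and create the super block */
	fsfd = fsopen("nfs", FSOPEN_CLOEXEC);
	fsconfig(fsfd, FSCONFIG_SET_STRING, "source", "nfs.foo.com:/one", 0);
	fsconfig(fsfd, FSCONFIG_SET_STRING, "vers", "4", 0);
	fsconfig(fsfd, FSCONFIG_SET_STRING, "rsize", "8192", 0);
	fsconfig(fsfd, FSCONFIG_SET_STRING, "wsize", "1048576", 0);
	fsconfig(fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);

	/* fs-mounts "foo /one /mnt/foo/one nosuid,nodev,ro":
	 * create the mount object and splice it into the tree */
	mntfd = fsmount(fsfd, FSMOUNT_CLOEXEC,
			MOUNT_ATTR_RDONLY | MOUNT_ATTR_NOSUID | MOUNT_ATTR_NODEV);
	move_mount(mntfd, "", AT_FDCWD, "/mnt/foo/one",
		   MOVE_MOUNT_F_EMPTY_PATH);

	close(mntfd);
	close(fsfd);
	return 0;
}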

So how can this be expressed with the new API?  Simple: just make the
fs_context nameable and persistent.
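
Nothing like that exists today, of course; purely as illustration
(every command, flag and helper below is made up), naming and later
reusing the context could look something like:

/*
 * HYPOTHETICAL: none of these commands, flags or helpers exist; this
 * only illustrates what "nameable and persistent" might mean.
 */

/* Step 1: configure the "foo" connection once and give it a name */
fsfd = fsopen("nfs", FSOPEN_CLOEXEC);
fsconfig(fsfd, FSCONFIG_SET_STRING, "address", "nfs.foo.com", 0);
fsconfig(fsfd, FSCONFIG_SET_STRING, "vers", "4", 0);
fsconfig(fsfd, FSCONFIG_SET_NAME, "foo", NULL, 0);		/* hypothetical */
fsconfig(fsfd, FSCONFIG_CMD_CREATE_PERSISTENT, NULL, NULL, 0);	/* hypothetical */

/* Step 2, possibly much later and from a different process:
 * pick up the named context and mount an instance of it */
fsfd = fspick_byname("foo");					/* hypothetical */
fsconfig(fsfd, FSCONFIG_SET_STRING, "export", "/one", 0);	/* hypothetical key */
mntfd = fsmount(fsfd, FSMOUNT_CLOEXEC, MOUNT_ATTR_NOSUID | MOUNT_ATTR_NODEV);
move_mount(mntfd, "", AT_FDCWD, "/mnt/foo/one", MOVE_MOUNT_F_EMPTY_PATH);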

Where that name resides (i.e. which namespace) is a good question
that I don't have an answer to.  Perhaps Eric...

Thoughts?

Thanks,
Miklos


