Re: [ANNOUNCE] util-linux-ng v2.17.1

On Fri, 2010-02-26 at 16:16 +0100, Karel Zak wrote:
> On Fri, Feb 26, 2010 at 02:18:50PM +0000, Ricardo M. Correia wrote:
> > On Fri, 2010-02-26 at 14:52 +0100, Karel Zak wrote:
> > > Hi Andreas,
> > >  The TYPE is used by mount(8) or fsck(8) if the fstype is not
> > >  explicitly defined by user.
> > > 
> > >  I don't know if anything depends on the TYPE, but I don't see
> > >  /sbin/mount.zfs, so it seems the zfs-fuse guys use something else.
> > 
> > Right, ZFS filesystems are mounted in zfs-fuse automatically when a ZFS
> > pool is imported into the system or manually with the "zfs" command. The
> > latter calls into the zfs-fuse daemon, which issues a fuse_mount() call.
> > This mimics the behavior in the Solaris ZFS implementation.
> 
>  Hmm.. we have udevd, in an ideal world zfs-fuse would be integrated
>  with udev. 

You mean that udev would create a block device for the logical volume
where the filesystem is mounted?

I think this may not be possible or useful, see below.

> > I would expect the /sbin/mount.zfs command to only work when the
> > mountpoint property of a ZFS filesystem is set to 'legacy', otherwise
> > ZFS will usually mount the filesystem by itself in the proper place
> > (which depends on the mountpoint property and the dataset hierarchy
> > within the pool).
> > 
> > Most importantly, I don't think it would be easy to determine which
> > filesystems are inside of a ZFS pool. This would require traversing the
> > dataset hierarchy within a pool, which is very difficult to implement if
> > you don't use the existing ZFS code, especially when you have
> > RAID-Z/Z2/Z3 pools. We'd be better off using the 'zdb' command (which
> > contains an entire implementation of ZFS's DMU code in userspace).
> 
>  Yes, the same "problem" we have with DM/MD/... the solution is to
>  detect that there is any "volume_member" and then use specific tools
>  (dmsetup, cryptsetup, mdadm, ...) to create a virtual mountable
>  device. 

Unfortunately the storage abstraction on which ZFS filesystems are
created doesn't have the same semantics as logical volumes (in the
DM/LVM sense): it has no fixed size (it grows and shrinks as the
filesystem grows and shrinks), and there is no way to map a logical
offset within the virtual device to a physical offset in the pool
(that mapping is done by block pointers).

The idea is that a ZFS filesystem allocates and deallocates space from a
ZFS pool every time it needs to allocate or free a block in the
filesystem. Each block can have a size from 512 bytes up to 128 KB and
it may be allocated anywhere in the pool, and the way a filesystem
accesses its data is by following block pointers.
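To make the allocation model above concrete, here is a toy sketch (not
real ZFS code; the bump allocator and class names are invented for
illustration): each filesystem keeps only block pointers into a shared
pool, so allocations from different filesystems interleave and a
filesystem's logical block index bears no fixed relation to a physical
pool offset.

```python
# Toy model: a pool hands out variable-size blocks anywhere in its
# address space, and a filesystem reaches its data only by following
# block pointers -- so no fixed linear device mapping exists.

class Pool:
    def __init__(self):
        self.next_off = 0          # bump allocator: blocks land "anywhere"
        self.blocks = {}           # physical offset -> data

    def alloc(self, data):
        assert 512 <= len(data) <= 128 * 1024   # ZFS block size range
        off = self.next_off
        self.blocks[off] = data
        self.next_off += len(data)
        return off                 # this offset plays the "block pointer" role

class Filesystem:
    def __init__(self, pool):
        self.pool = pool
        self.ptrs = []             # logical block index -> block pointer

    def write_block(self, data):
        self.ptrs.append(self.pool.alloc(data))

    def read_block(self, i):
        return self.pool.blocks[self.ptrs[i]]

pool = Pool()
fs_a, fs_b = Filesystem(pool), Filesystem(pool)
fs_a.write_block(b"a" * 512)       # allocations from the two filesystems
fs_b.write_block(b"b" * 4096)      # interleave in the shared pool...
fs_a.write_block(b"c" * 512)
# ...so fs_a's logical block 1 lives at pool offset 4608, not 512:
assert fs_a.ptrs == [0, 4608]
assert fs_a.read_block(1) == b"c" * 512
```

The only address space in which an offset means anything is the pool's
own, which is why a per-filesystem virtual block device has nothing
sensible to export.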

So I think there is really no way of presenting a virtual block device
other than the entire pool, since you couldn't map these virtual
device offsets to anything meaningful (other than offsets within the
entire pool).

> > Not sure if this helps or not for this discussion (more information is
> > never bad, right?) :-)
> 
>  Right. BTW, I assume the same discussion for btrfs ;-)

I have no idea about btrfs.. :)

Thanks,
Ricardo


--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
