Re: Some Union Mount questions

On Fri, May 21, 2010 at 03:06:30PM -0700, David Brown wrote:
> Okay, so it's been a while; I figured I'd better post an update.
> 
> It Works!
> 
> I've gotten it to work using Val's documentation at
> http://valerieaurora.org/union/ with some additions to the
> initramfs-tools configuration to generate the correct initrd:
> 
> # cat etc/initramfs-tools/scripts/nfs-bottom/union
> #!/bin/sh
> 
> # nfs-bottom script: mount a tmpfs union over the read-only NFS root.
> 
> PREREQS=""
> prereqs()
> {
> 	echo "$PREREQS"
> }
> 
> case "$1" in
> 	prereqs)
> 	prereqs
> 	exit 0
> 	;;
> esac
> 
> echo "mounting union tmpfs"
> mount.union -n -o union -t tmpfs none ${rootmnt}
> # cat etc/initramfs-tools/hooks/union
> #!/bin/sh
> 
> PREREQ=""
> 
> prereqs()
> {
> 	echo "$PREREQ"
> }
> 
> case "$1" in
> prereqs)
> 	prereqs
> 	exit 0
> 	;;
> esac
> 
> . /usr/share/initramfs-tools/hook-functions
> 
> # ship /bin/mount in the initramfs under the name mount.union
> copy_exec /bin/mount /bin/mount.union
> #
> 
> Then it works pretty well.  (I really like Debian's initramfs-tools package.)
> 
> I can stand up an 8-node cluster with an NFS read-only root file
> system, and it works!

Great news!  Thanks for sharing your scripts!
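
For anyone else following along: after dropping those two files into place,
you still need to regenerate the initramfs so they actually end up in the
image.  On Debian that's roughly (a minimal sketch, assuming the stock
initramfs-tools workflow; pick the kernel version you netboot):

# rebuild the initrd for the running kernel so the new hook
# and nfs-bottom script are included
update-initramfs -u -k $(uname -r)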

> However, the kernel output is very verbose: lots of debug
> messages about how you are managing the tmpfs part of the union.  Also

There should be a separate debugging patch that you can remove.

> lots of duplicate entries, which seems weird.
> 
> May 21 21:53:24 x6 kernel: [   56.558749] lib: appending to union
> 
> Does this mean you are appending /lib to the union tmpfs over and over?

No, it's not - this is a misleading debugging message.  We do call
append_to_union() each time we look up a unioned directory, even if we
have already constructed the union stack.  It bails out if it finds the
union has already been created.  The next rewrite will not do this.
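
If you want to quantify the noise in the meantime, something like this
(plain shell, nothing union-specific; it assumes the messages are still
in the kernel ring buffer in exactly the form you quoted) will count the
messages per directory:

# strip the dmesg timestamp, keep the directory name before the colon,
# and tally how many times each one was "appended"
dmesg | sed -n 's/^\[[^]]*\] \(.*\): appending to union$/\1/p' \
    | sort | uniq -c | sort -rn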

> So df doesn't show any leaking.  Of the processes running on the system:
> syslog is going over the wire to a central server, there are 7 agettys
> running, udev is kinda freaking out (kernel version difference, I
> guess), and cron is running.  So the image is pretty stripped down.
> This is pretty usual for an HPC compute node.  I'm currently running some
> long tests to determine how fast tmpfs would fill up and what
> would be filling it up.
> 
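
The usual suspects for filling the tmpfs are logs and anything that writes
under /var or /tmp.  A dead-simple way to watch it over a long run (plain
shell; the interval is arbitrary, and the log has to live somewhere other
than the tmpfs itself or it will skew the numbers):

#!/bin/sh
# sample overall tmpfs usage and the usual growth spots once a minute;
# /mnt/remote is a stand-in for any non-tmpfs (e.g. NFS) mount
while sleep 60; do
	date
	df -k /
	du -sk /var/log /tmp 2>/dev/null
done >> /mnt/remote/tmpfs-usage.log
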
> Also, when is it going to be merged into mainline?  And should
> I file a Red Hat Bugzilla request for inclusion in RHEL 6.1/6.2?

I still don't know when it will be in mainline.  I got a code review
from Al and I am rewriting to his specifications (this is the third
version).

> Sadly, this wonderful feature would be more easily adopted here if it
> were included in RHEL 6.1 or 6.2 at some point.

Backporting to RHEL 6.1 or 6.2 is part of the plan at this point.  I'm
not looking forward to it, but that's life at a distro. :)

Thanks for testing!

-VAL