Re: clone of filesystem across network preserving ionodes

> What is the best practice way to get the contents from
> server1 to server2 identical disk array partitions

Here you do not say that «get the contents» has to be over the
network, but the 'Subject:' line seems to indicate that.

> and preserve the ionode information using xfs tools or non-xfs
> tools? xfsdump/xfsrestore appear to not preserve this or I can
> not find the obvious setting.

If «preserve the ionode information» means "keep the same
inumbers for each inode" that seems to me fairly pointless in
almost every situation (except perhaps for forensic purposes, and
that opens a whole series of other issues), so there is no «best
practice way» as there is little or no practice, and management
speak won't change that. More clarity in explaining what you mean
and/or what you want to achieve would be better.

If you want for whatever reason to make a byte-by-byte copy from
block device X (potentially mounted on M) on host S1 to the
corresponding one of «identical disk array partitions» Y on host
S2, which also preserves the inumbers, something fairly common is
to use 'ssh' as a network pipe (run on host S1):

  M="$(grep "^$X " /proc/mounts | cut -d' ' -f2)"
  case "$M" in ?*) xfs_freeze -f "$M";; esac

  sudo sh -c "lzop < '$X'" | dd bs=64k iflag=fullblock \
    | ssh -oCipher=arcfour -oCompression=no "$S2" \
        "dd bs=64k iflag=fullblock | sudo sh -c 'lzop -d > \"$Y\"'"

  case "$M" in ?*) xfs_freeze -u "$M";; esac

This works if 'xfs_freeze' is suitable and if no interruptions
happen; if the transfer is interrupted, put reasonable "skip"
(input side) and "seek" (output side) parameters on the 'dd'
invocations and rerun.
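As a minimal local illustration of that resume technique (ordinary
temporary files standing in for the block devices, no compression in
the pipe, and the "interruption" after 3 blocks chosen arbitrarily):

```shell
# Local sketch of resuming an interrupted 'dd' copy with skip/seek.
# SRC and DST are ordinary files standing in for the block devices.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=64k count=8 2>/dev/null
# First run "interrupted" after 3 blocks:
dd if="$SRC" of="$DST" bs=64k count=3 conv=notrunc 2>/dev/null
# Resume: skip the blocks already read, seek past the blocks already written:
dd if="$SRC" of="$DST" bs=64k skip=3 seek=3 conv=notrunc 2>/dev/null
STATUS=$(cmp -s "$SRC" "$DST" && echo ok)
echo "$STATUS"
rm -f "$SRC" "$DST"
```

The same skip/seek pairing applies on both 'dd' invocations of the
networked pipeline.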

If the target block device is bigger than the source block
device, one can use 'xfs_growfs' after the byte-by-byte copy
(growing filetrees has downsides though).  Unless by «preserve
the ionode information» or «identical disk array partitions» you
mean something different from what they appear to mean to me.

The above just copied for me an 88GiB block device in 1624s
between two ordinary home PCs.
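For scale, that works out to roughly (whole MiB/s, my arithmetic,
not a figure from any timing tool):

```shell
# 88 GiB in 1624 s, expressed as whole MiB/s (88*1024 MiB / 1624 s).
echo "$(( 88 * 1024 / 1624 )) MiB/s"
```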

One could replace 'ssh' with 'nc' or similar network piping
commands for higher speed (at some cost, e.g. no encryption or
authentication). I also tried to
interpolate 'xfs_copy -d -b "$X" /dev/stdout | lzop ...'  but
'xfs_copy' won't do that, not entirely unexpectedly.

If by «preserve the ionode information» you simply mean copy all
file attributes but without keeping the inumber (except for
'atime', but that is usually disabled anyhow, and there is a patch
to restore it), if the source filetree is mounted at M on S1 and the
target is mounted at N (you may opt to mount with 'nobarrier') on
S2, a fairly common way is to use RSYNC with various preserving
options (run on host S1):

  sudo rsync -i -axAHX --del -z \
     -e 'ssh -oCipher=arcfour -oCompression=no -l root' \
     "$M"/ "$S2":"$N"/

The following is another common way, with slightly different
effects on 'atime' and 'ctime' (it requires a patched GNU 'tar',
or 'tar' can be replaced with 'star'), using 'ssh' combined
with 'tar':

  sudo sh -c "cd '$M' && tar -c --one-file-system --selinux --acls --xattrs -f - ." \
  | lzop | dd bs=64k iflag=fullblock \
    | ssh -oCipher=arcfour -oCompression=no "$S2" \
      "dd bs=64k iflag=fullblock | lzop -d \
        | sudo sh -c 'cd \"$N\" && tar -xv --preserve-permissions --atime-preserve -f -'"
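A minimal local illustration of that tar-to-tar pipe (plain GNU
'tar', temporary directories standing in for "$M" and "$N", no
'ssh'/'lzop' in the middle), showing that permissions survive:

```shell
# Local sketch: copy a tree through a tar pipe, preserving permissions.
# SRCD/DSTD are temporary stand-ins for the mounted filetrees.
SRCD=$(mktemp -d); DSTD=$(mktemp -d)
echo hello > "$SRCD/file"
chmod 640 "$SRCD/file"
( cd "$SRCD" && tar -cf - . ) | ( cd "$DSTD" && tar -x --preserve-permissions -f - )
MODE=$(stat -c '%a' "$DSTD/file")
echo "$MODE"
rm -rf "$SRCD" "$DSTD"
```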

> xfs_copy would, per documentation,

Probably 'xfs_copy' is only necessary in the case where you
really want to preserve inums, and the source block device is
larger than the target block device (and obviously the size of
the content is smaller than the size of the target block device).

> but can I get from server1 to server2?

That probably requires finding a target medium (file or block
device) big enough and that can be shared (or transported)
between S1 and S2.

This could involve a filetree on S2 that is exported (for example
by NFSv4) to S1 and large enough to hold the target file(s) of
'xfs_copy'.
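A hypothetical sketch of that arrangement (the paths /srv/scratch
and /mnt/scratch and the export options are made up for
illustration; run the commented commands on the host indicated):

```
# On S2: export a directory big enough for the 'xfs_copy' target:
#   exportfs -o rw,no_root_squash S1:/srv/scratch
# On S1: mount it and write the image across the network:
#   mount -t nfs4 S2:/srv/scratch /mnt/scratch
#   xfs_copy "$X" /mnt/scratch/image.xfs
# Then on S2: write the image file onto the target device:
#   dd if=/srv/scratch/image.xfs of="$Y" bs=64k
```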

PS Whichever way is chosen to do the copy, a second run with the
   RSYNC options '-n -c' might help verify that corruption did
   not happen during the copy.
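As a minimal local sketch of that check (temporary directories
standing in for the two trees on S1 and S2, '-i' added so any
mismatch would be itemized):

```shell
# Local sketch: 'rsync -n -c' dry-run checksum comparison of two trees.
# A and B are temporary stand-ins for "$M" on S1 and "$N" on S2.
A=$(mktemp -d); B=$(mktemp -d)
echo data > "$A/f"
cp "$A/f" "$B/f"
touch -r "$A" "$B"
# -n: dry run, -c: compare by checksum, -i: itemize any differences.
CHANGES=$(rsync -n -c -i -r "$A"/ "$B"/)
echo "${CHANGES:-no differences}"
rm -rf "$A" "$B"
```

An empty itemized list means the trees' contents matched by
checksum; over the network the same options are simply appended to
the RSYNC command given above.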

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
