Re: Cloning CentOS workstations

On Fri, Sep 13, 2013 at 3:51 PM, Glenn Eychaner <geychaner@xxxxxxx> wrote:
> I manage a set of CentOS operations workstations which are all clones of each
> other (3 "live" and 1 "spare" kept powered down); each has a single drive with
> four partitions (/boot, /, /home, swap). I've already set up cron'd rsync jobs
> to copy the operations accounts between the workstations on a daily basis,
> so that when one fails, it is a simple, quick process to swap in the spare,
> restore the accounts from one of the others, and continue operations. This has
> been successfully tested in practice on more than one occasion.

You might want to consider whether anything worth saving really needs
to be stored on the individual workstations.  Could you perhaps mount
the home directories from a reliable server or NAS, or, more
drastically, have one or a few multiuser hosts with most users on a
remote X desktop (freenx/NX has pretty good performance)?  That
doesn't really eliminate the need for backups/spares, but it changes
the scope of things quite a bit.
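
For example, each workstation could mount /home from a server with a
line like this in /etc/fstab (the server name and export path here
are just placeholders, adjust to your setup):

    nfsserver:/export/home  /home  nfs  defaults,hard,intr  0 0

or do the equivalent with autofs so the mounts come and go on demand.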

> However, when I perform system updates (about once a month), I like to create
> a temporary "clone" of the system to an external drive before running the
> update, so that I can simply swap drives or clone back if something goes
> horribly wrong. I have been using "CloneZilla" to do this, but it can take a
> while since it blanks each partition before copying, and requires a system
> shutdown.

Look at 'rear' (Relax-and-Recover, in the EPEL repo) as a possible
alternative.  It will do a tar image backup to an NFS target (with
rsync and some other methods as alternatives) and build a bootable
rescue ISO with a restore script.  The big advantages are that you
don't have to shut down for the backup, and that you get a chance to
edit the disk layout before the restore if you need to.
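
A minimal /etc/rear/local.conf for an NFS target looks something like
this (server name and export path are placeholders; see 'man rear'
for the details):

    OUTPUT=ISO
    BACKUP=NETFS
    BACKUP_URL=nfs://backupserver/export/rear

and then 'rear mkbackup' writes the tar backup and the rescue ISO in
one pass while the system stays up.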

> Question 1: Would it be sufficient to simply use CloneZilla once to initialize
> the backup drive (or do it manually, but CloneZilla makes it easy-peasy), and
> then use "rsync -aHx --delete" (let me know if I missed an important rsync
> option) to update the clone partitions from then on? I am assuming that the
> MBR typically doesn't get rewritten during system updates, though
> "/etc/grub.conf" obviously does get changed.

I'd expect that to work as long as the disk is mounted into a
different system and you aren't running directly from it.  Worst
case, you'd have to boot from a DVD in rescue mode and run
'grub-install' if the clone didn't boot on its own.
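
Something along these lines, with the clone's partitions mounted
under /mnt/clone (paths are just examples; --numeric-ids is worth
adding since uid/gid mappings can differ when the disk is mounted
somewhere else):

    rsync -aHx --delete --numeric-ids /      /mnt/clone/
    rsync -aHx --delete --numeric-ids /boot/ /mnt/clone/boot/
    rsync -aHx --delete --numeric-ids /home/ /mnt/clone/home/

The -x keeps each run on its own filesystem, so the root pass doesn't
descend into /boot, /home, or /mnt/clone itself.  And if the clone
won't boot, from the install DVD in rescue mode it's roughly:

    chroot /mnt/sysimage
    grub-install /dev/sda

(assuming the clone disk shows up as /dev/sda in the rescue
environment).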

> Question 2: Is there a better way to do the above? How do I perform the
> "Voila!" step, i.e. what's the right chainload command for this? Also, the
> chainloaded partitions are logical; is this OK?

The better way is to stop treating the images as magical atomic
things (or at least most of them) and instead isolate and back up the
user data in a way that lets it be dropped onto a freshly installed
generic machine.  You can use automated tools for kickstart boots and
the like, but as a starting point think of it as the minimal CentOS
CD, followed by 'yum install big_list_of_packages', followed by
restoring the user data.
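
A quick way to get that 'big_list_of_packages' from one of the live
boxes is something like this (untested; packages from third-party
repos need those repos configured on the new install first):

    rpm -qa --qf '%{NAME}\n' | sort -u > package-list.txt

and then on the freshly installed machine:

    yum -y install $(cat package-list.txt)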

> I also have a single off-site NAS disk which contains clones of all the
> critical workstations on-site. Most of them are Macs, so I can use
> sparseimages on the NAS for the clones and get easy-peasy incremental
> clones. I also do this for the Linux box (backing it up incrementally to an
> HFS case-sensitive sparseimage via rsync), but it's (obviously) a bit of a
> kludge.
>
> Question 3: Is there a UNIX equivalent to the Mac sparseimage that I should be
> using for this? ("tar -u" can do it (duh), but then the backup file grows
> without bound.)

If you can get things down to backing up at the file level instead of
full images (or do both, keeping a file-level history alongside the
images), look at BackupPC.  It does the backups over rsync and pools
all copies of files with duplicate content, whether they are on
different machines or in previous backups of the same target.  It
takes less disk space to keep a fairly long history online than
anything else, and it is pretty much fully automatic once you set it
up.  And you can give machine 'owners' separate logins to its web
interface so they can do their own restores.
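
The per-host setup is only a few lines of config; from memory (so
check the BackupPC docs), something like this in the host's .pl file:

    $Conf{XferMethod}     = 'rsync';
    $Conf{RsyncShareName} = ['/home', '/etc'];

plus an ssh key so the server can run rsync on the client, typically
as root.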

-- 
   Les Mikesell
     lesmikesell@xxxxxxxxx