On Thu, 2008-06-12 at 15:14 -0400, Ross S. W. Walker wrote:
> Les Mikesell wrote:
>
> > You can't beat dd for getting everything exactly the same regardless
> > of what you changed - or just splitting the mirrors and letting each
> > sync to new partners, but then you have to reinstall grub. I prefer
> > clonezilla for non-raid configurations, but most of the machines I
> > care about are configured with raid1.
>
> Well, actually dd isn't so good in this area. dd will do the whole
> disk no matter how much data is actually stored on it, and for a
> 500GB disk that can take a lot of time. It also doesn't take into
> consideration any disk geometry differences.

True. But with a small amount of scripting (assuming some experience,
like I had when I did this professionally), you can quickly produce a
fairly flexible, automated, fast and reliable process that accomplishes
the task. I'll elaborate a little below, with some rough command
sketches at the end of this message.

> <snip>
> -Ross
> <snip sig stuff>

First, as to speed. Using the bs= parameter (I used one cylinder as my
"standard" unit of transfer), the number of system calls is reduced and
the speed of the hardware becomes the limiting factor. Back when I
tested this (on old, slower, low-single-digit-GB drives, circa
2000-2002), I saw very large speedups, though I don't recall the exact
percentages.

Second, as to "copies the whole disk". Here is where a small amount of
scripting becomes useful. You can copy only specific partitions. If the
whole disk is a single partition, stats gathered by various utilities
can determine the used block counts, and a combination of shrinking the
file system (no shrink tools existed back when I did this) and, if
desired, shrinking the partition can be used to "compact" the source.
Used together with sfdisk, both to gather the existing configuration
and to generate the new one (via scripts), this lets you reduce what is
copied to very nearly just the data actually needed. In the final step,
that is accomplished via the count=, skip= and seek= parameters to dd.
This was implemented in a NAS product as part of the RAS process, which
could not predict the specifications of the HDs that might be replaced
in the field.

Lastly, as to geometry differences: again, sfdisk is your friend.

What I cannot address is how to integrate this with RAID - no
experience there. I presume that someone knowledgeable in that area
could automate that part too.

If this sounds like a lot of work, it's not, really. This part of my
effort was minimal. The large part was creating the boot CD,
interfacing with the custom hardware timeout facility for auto-reboot
to a fallback device, and implementing and testing the software install
and rejoin with the cluster. Oh... and the constantly changing specs
(the list of requirements kept changing as they saw the opportunities I
brought to the table - their *IX experience was quite limited).

HTH

--
Bill
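P.S. Since prose only goes so far, here are the promised rough
sketches. First, the block-size point. This is a minimal illustration,
not my original script (long gone); the device names are hypothetical,
and 8225280 is just one classic CHS cylinder (255 heads x 63 sectors x
512 bytes) - check your own drive's geometry before picking a figure:

    # Naive copy: the 512-byte default block size means one
    # read()/write() pair per sector, so it's system-call bound.
    #dd if=/dev/sda of=/dev/sdb

    # One cylinder per transfer: far fewer system calls, and the
    # drive hardware becomes the limiting factor.
    dd if=/dev/sda of=/dev/sdb bs=8225280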
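For the partition-only copy and the count=/skip=/seek= "compaction",
something along these lines. The numbers are made up for illustration
(a real script would compute them from sfdisk and file-system stats),
and resize2fs is a modern stand-in for the shrink step that didn't
exist when I did this:

    # Shrink the source file system so only ~10G is live data
    # (resize2fs wants a freshly checked fs, hence the fsck).
    e2fsck -f /dev/sda1
    resize2fs /dev/sda1 10G

    # Fake figures - really these get parsed out of 'sfdisk -d':
    START=63            # first sector of the source partition
    COUNT=20971520      # 10G expressed in 512-byte sectors

    # skip= offsets the read, seek= offsets the write, and count=
    # stops after the used area - that is the "compaction".
    dd if=/dev/sda of=/dev/sdb bs=512 skip=$START seek=$START count=$COUNT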
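And the sfdisk piece, which covers both gathering the source layout
and coping with a different-sized target - again with hypothetical
device names, and the editing step in the middle is what my scripts
automated:

    # Dump the source partition table as a replayable script.
    sfdisk -d /dev/sda > layout.txt

    # Adjust the sizes in layout.txt if the target's geometry
    # differs, then replay the table onto the new disk:
    sfdisk /dev/sdb < layout.txt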