Re: Converting a .90 raid superblock to version 1.0 and whether the --size parameter is needed

On Tue, 17 Mar 2015 15:49:36 -0700 Sean Harris <sean@xxxxxxxxxxxxx> wrote:

> Hi Neil and company,
> 
> I have roughly a hundred machines that are still on Debian 5 that are slated for OS upgrades, and a great many of those are running software RAID with the 00.90 version superblock. I even have a few machines on Debian 7 with the .90 superblock because the boot drive for the machine had been replaced at some point.
> 
> I would like to explore the idea of upgrading/converting the raid superblock on many of those machines to version 1.0, mostly to keep the environment as uniform as possible, and standardize training for my SysAdmins on re-assembling failed raid arrays.
> But I am not sure if it is safe and necessary, or even a worthwhile endeavor.

Safe:  there is no intrinsic reason for it not to be safe.  Obviously any
   change brings risks, but they can be managed with care.
Necessary: It is only necessary from the technical perspective of md if you
   want to use some features that are only available with 1.x metadata.
   This includes bad-block logs and improved flexibility for reshaping
   RAID5/6/10 arrays.
Worthwhile: That is up to you.  As you say, consistency can be good and
   better familiarity for sysadmins is not a bad thing.

> 
> I wanted to test this on a few non-production machines, but I am not quite sure if I need to specify the --size parameter in the create statement in the attempt.
> 
> I have a bit of a mixed environment, but very early on, the machines initialized with the .90 superblock were all created in the same way ie:
> 
> /sbin/mdadm --create /dev/md10 --bitmap=internal --auto=yes -l 6 -n 15 /dev/sd{b,e,h,k,n,q,t,w,z,ac,af,ai,al,ao,ar}1
> 

I recommend not doing it this way.  If you use mdadm-3.3 or later you can
  mdadm --assemble /dev/md10 --update=metadata ...list.of.devices..

and it will update the metadata from 0.90 to 1.0 and start the array for you.
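
A full conversion pass might look like the following (a sketch using the device list from the poster's example; mdadm-3.3 or later is required, and the array must be stopped first):

```shell
# Unmount filesystems / deactivate any LVM volumes on the array, then stop it.
mdadm --stop /dev/md10

# Re-assemble, rewriting the 0.90 superblocks as 1.0 along the way.
# Requires mdadm-3.3 or later.
mdadm --assemble /dev/md10 --update=metadata /dev/sd{b,e,h,k,n,q,t,w,z,ac,af,ai,al,ao,ar}1

# Verify the metadata version on the running array and on a member device.
mdadm --detail /dev/md10 | grep Version
mdadm --examine /dev/sdb1 | grep Version
```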


> 
> Most of the machines I am interested in upgrading are running Debian 5, with LVM and ext4 volumes.
> Some machines use drive partitions, but we stopped using them some time ago and went with the simple notion that if a drive brand was marketed as 1TB, we assumed a size of 1,000,000,000,000 bytes and created the array with that size. That way, any drive manufacturer could be used as a replacement spare without worrying about the variation of their respective drive sizes.
> 
> I think it is safe to upgrade the superblock from .90 to 1.0 because from what I have read, it is the same size, and in the same place as the .90 superblock.

Not correct.  The 0.90 superblock is 4K in size and between 64K and 128K from
the end of the device.  The 1.0 superblock is 512 bytes and between 4K and
8K from the end of the device.
So the 1.0 superblock uses less space than the 0.90.  But it does reside
entirely within space that is reserved when the 0.90 metadata is in use.
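
Those ranges come from alignment: each format rounds the device size down to a boundary and reserves one unit below it. A back-of-the-envelope check (shell arithmetic only; the exact rounding here is an assumption consistent with the ranges above, not mdadm's actual code):

```shell
# Superblock placement for a drive sized at exactly 1TB = 10^12 bytes,
# per the description above (assumed rounding, not mdadm source).
dev=1000000000000
k=1024

# 0.90: round the size down to a 64K boundary, reserve 64K below it.
sb090=$(( (dev & ~(64*k - 1)) - 64*k ))
# 1.0: round down to a 4K boundary, reserve 4K below it.
sb10=$(( (dev & ~(4*k - 1)) - 4*k ))

echo "0.90 superblock at byte $sb090, $((dev - sb090)) bytes from the end"
echo "1.0  superblock at byte $sb10, $((dev - sb10)) bytes from the end"
# The 1.0 location falls inside the tail that the 0.90 layout already
# reserves, which is why an in-place conversion is possible.
```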


> But I don't know if having drives with or without partitions would make a difference, or if the conversion would need a specific version of mdadm utilized.
> (My hunch is that any layers on top of the raid array are irrelevant to the superblock conversion).

That's correct.


> 
> All arrays are raid 6 with the same number of devices.
> I think what I need to run is something akin to
> 
> /sbin/mdadm --create /dev/md10 -l6  -c64 --metadata=1.0 --assume-clean -n 15 /dev/sd{b,e,h,k,n,q,t,w,z,ac,af,ai,al,ao,ar}1

That would probably work, but again, "--assemble --update=metadata" is your
friend.


> 
> Where md10 is the array, the raid level is 6, 64k chunk size, assume-clean (for a known good pre-existing array), 15 devices (all specified in order), and a metadata of 1.0.
> 
> But I am not sure if I need to ensure that --size is calculated and included in the conversion, or if the re-creation of the superblock simply takes care of it.

It probably doesn't matter, certainly not with a chunk size of 64K or larger.


> 
> The reason I am unsure on this point is that with version 1.0 superblock arrays, I always include the --size parameter when re-creating the array with a specified list of devices, which is normally the Used_Dev_Size of the array, divided by 2, (since the value reported by mdadm is 512 byte sectors).
> 
> I am including some information from my test pod. In the case below, there are only 9 devices per array. (But all of my production conversions will be with 15 devices).
> 
> So here are my questions:
> Am I correct in presuming the conversion will work and simply put-in-place an upgraded superblock?
> Also, that it is unnecessary to specify --size for the create command (in this case).
> And also, that I need not worry about the version of mdadm used for the conversion? (In my case it will almost exclusively be v2.6.7.2 on Debian 5, or v3.2.5 (18th May 2012) on Debian 7.)
> Other than reading the superblock version after the conversion, is there any other way of verifying the conversion was successful without causing harm?
> (my thoughts were to run FULL raid checks and filesystem checks, and some crc checks to some files).
> 
> Also, assuming I am on the right track and nothing can go wrong, are there any specific recommendations you have for going ahead with the conversions? Or even a recommendation to avoid this endeavor?
> 
> Thanks for your feedback,
> 
> Sean Harris 
> 
> 
> Example from my test machine:
> 
> debian_version
> 5.0.10
> proc/version 
> Linux version 2.6.32-bpo.5-amd64 (Debian 2.6.32-35~bpo50+1) (norbert@xxxxxxxxxxxxx) (gcc version 4.3.2 (Debian 4.3.2-1.1) ) #1 SMP Wed Jul 20 09:10:04 UTC 2011
> mdadm -V
> mdadm - v2.6.7.2 - 14th November 2008

Grab and compile the latest mdadm, and use that to update the metadata.
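
For example (a sketch; the kernel.org URL is the usual upstream location for the mdadm tree):

```shell
# Fetch and build the current mdadm from the upstream git tree.
git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
cd mdadm
make

# Run the freshly built binary in place rather than installing it over
# the distribution's packaged copy:
./mdadm --version
```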

NeilBrown


