Re: Removable mirror disks?

On Oct 13, 2013, at 6:53 AM, Phil Turmel <philip@xxxxxxxxxx> wrote:

> Good morning Ethan,
> 
> On 10/12/2013 02:15 PM, Ethan Tira-Thompson wrote:
>> Hi all,
>> 
>> I’m setting up a raid mirror with two disks.  Ideally, I’d like to do
>> this in such a way that I could stop the array, remove a drive, mount
>> it directly on another machine as read-only (no RAID setup), and then
>> put it back in the RAID and re-assemble as if nothing happened.  (Or
>> I could put a new drive in and keep the old one as a snapshot
>> backup.)  It’s a maintenance option, not something I intend to do a
>> lot.
>> 
>> Can I do this?  I’ve tried creating a raid from the whole block device
>> (e.g. sdb) and then partitioning and formatting within the RAID, as
>> well as the opposite, partitioning the block device and making a raid
>> of the partition.  Neither of these seems happy if I pull a drive and
>> try to use it directly.  Is that due to the mdadm metadata
>> overwriting/offsetting the filesystem?  Would something like DDF
>> containers solve this?  Or if I shrink the filesystem on a partition
>> (leaving unused space on the partition) and then use metadata version
>> 1.0? (not sure I can do that, everything I’ve seen resizes the
>> partition too)
> 
> It is theoretically possible to do this, and even convenient to leave
> the main system running by appropriate use of a write-intent bitmap.
> However, you can't use mdadm to access the pulled drive, as it will bump
> the event count and cause 'split-brain'.
> 
> If you use metadata 0.9 or 1.0, you can mount the underlying device
> directly.  This is hazardous, even with a read-only mount, as
> filesystems generally do a mini-fsck on any mount (journal replay, etc).
> That makes the copy unusable in the original array.  The original array
> has no way to figure out at re-insertion that this type of corruption
> has happened.
> 
> So the practical answer is *no*, once you access the data on the pulled
> drive.

Yup, it looks like it works cleanly with metadata 1.0, and thanks for the heads-up about the possibility of unexpected low-level writes if I use a drive outside the array.  Hopefully I won’t need to use it like this, but if so I’ll treat it as a one-way export.
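
(A side note on Phil’s write-intent bitmap point, for anyone doing the temporary-removal variant: an internal bitmap can be added to a running array with something like the command below, and then a later --re-add of an untouched member only resyncs the blocks that changed while it was out.  /dev/md0 is just the array from my steps below.)
mdadm --grow --bitmap=internal /dev/md0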

For those who find this via a search engine, my general process was:
1. partition drives (1 partition each)
2. mdadm --create /dev/md0 --metadata 1.0 --verbose --level=mirror --raid-devices=2 /dev/sdd1 /dev/sde1
3. mdadm --detail --scan >> /etc/mdadm/mdadm.conf
4. mdadm -As
5. format /dev/md0
6. mount /dev/md0
(example commands for steps 5 and 6 below)
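
Assuming ext4 and /mnt/mirror as the mount point (both just example choices, substitute your own):
mkfs.ext4 /dev/md0
mkdir -p /mnt/mirror
mount /dev/md0 /mnt/mirror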

Drive export test of /dev/sdd1:
1. mdadm --manage /dev/md0 --fail /dev/sdd1
(to ensure we don’t let the drive rejoin the array later)
2. mdadm --manage /dev/md0 --remove /dev/sdd1
(maybe also mdadm --zero-superblock /dev/sdd1)
3. mount /dev/sdd1 (you have to pass -t to specify the filesystem manually, otherwise mount reports "unknown filesystem type 'linux_raid_member'"; see the example below)
4. can now use the drive normally, but don’t expect to put it back in the array without a full reconstruction.
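
For example, assuming the ext4 filesystem from above and a scratch mount point:
mkdir -p /mnt/export
mount -t ext4 -o ro /dev/sdd1 /mnt/export
(ext4 also has a noload mount option that skips the journal replay Phil warned about, but either way I’d still treat the pulled drive as a one-way export.)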

Thanks!
 -Ethan

> 
>> An unrelated question: I’ve heard that some implementations of RAID-1
>> mirroring will load-balance reads between the disks at the process
>> level, but won’t stripe reads within a single thread?  How does Linux
>> RAID handle this?  It seems like the kernel could stripe the read
>> requests even for a single-threaded workload, but maybe there’s some
>> complication in guaranteeing coherency with writes to each drive?
> 
> RAID 1 just passes complete read requests through the block layer to one
> of the underlying devices, and write requests to all of the underlying
> devices.  So the load balancing happens at the level of complete
> requests.  If a process is multi-threaded and submits multiple
> simultaneous requests, those will be load-balanced.
> 
> HTH,
> 
> Phil
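
(One last note for the search-engine crowd, on the read-balancing answer: you can see the per-request balancing by watching both members while generating parallel reads.  iostat comes from the sysstat package, and big1..big4 are just placeholders for some large files on the array.)
In one terminal: iostat -x 2 sdd sde
In another: for i in 1 2 3 4; do dd if=/mnt/mirror/big$i of=/dev/null bs=1M & done; wait
With several readers in flight the reads should spread across both members; a single sequential reader tends to stay on one disk.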




