Re: Split RAID: Proposal for archival RAID using incremental batch checksum

On 17 December 2014 at 03:19, NeilBrown <neilb@xxxxxxx> wrote:
> On Tue, 16 Dec 2014 21:55:15 +0530 Anshuman Aggarwal
> <anshuman.aggarwal@xxxxxxxxx> wrote:
>
>> On 2 December 2014 at 17:26, Anshuman Aggarwal
>> <anshuman.aggarwal@xxxxxxxxx> wrote:
>> > It works! (At least on a sample 5 MB device with 5 x 1MB partitions :-)
>> > will find more space on my drives and do a larger test but don't see
>> > why it shouldn't work)
>> > Here are the following caveats (and questions):
>> > - Neil, like you pointed out, lifting the power-of-2 chunk size
>> > restriction will probably need a code change (in the kernel or only
>> > in the userspace tool?)
>
> In the kernel too.

Is this something that you would consider implementing soon? Is there
a performance impact or some other consideration behind this
limitation? Could you elaborate on why it was there in the first
place?

If this is a case of "patches are welcome", please guide me on where
to start looking/working, even if it's just a rough pointer.
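For what it's worth, my guess (an assumption on my part, not taken
from the md source) is that the power-of-2 requirement lets the
sector-to-chunk mapping be done with shifts instead of a full 64-bit
division, roughly:

#include <stdint.h>

/* Hypothetical illustration, not the actual md code: mapping a
 * sector to its chunk number. A power-of-2 chunk size reduces the
 * division to a shift; anything else needs a 64-bit divide
 * (sector_div in the kernel) on every mapping. */
static uint64_t chunk_of(uint64_t sector, unsigned int chunk_sectors)
{
        if (chunk_sectors && !(chunk_sectors & (chunk_sectors - 1)))
                return sector >> __builtin_ctz(chunk_sectors);
        return sector / chunk_sectors;
}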

>
>> >     - Any performance or other reasons why a terabyte-sized chunk
>> > may not be feasible?
>
> Not that I can think of.
>
>> > - Implications of safe_mode_delay
>> >     - Would the metadata be updated on the block device being
>> > written to and on the parity device as well?
>
> Probably.  Hard to give a specific answer to a vague question.

I should clarify.

For example, in a 5-device RAID4, let's say a block is being written
to device 1, parity is on device 5, and devices 2, 3 and 4 are
sleeping (spun down). If we set safe_mode_delay to 0 and md updates
the parity without involving the blocks on the other three devices,
i.e. it just does a read, compute and write on device 5, will the
metadata be updated on both devices 1 and 5 even though
safe_mode_delay is 0?
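To be explicit about the update path I have in mind, here is a rough
sketch of a read-modify-write parity update in general (my
illustration of the standard technique, not the actual md code):

#include <stddef.h>
#include <stdint.h>

/* Read-modify-write update for single parity:
 * new_parity = old_parity XOR old_data XOR new_data.
 * Only the target data disk (1) and the parity disk (5) are
 * touched, so devices 2, 3 and 4 can stay spun down. */
static void rmw_parity(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data, size_t len)
{
        for (size_t i = 0; i < len; i++)
                parity[i] ^= old_data[i] ^ new_data[i];
}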

>
>> >     - If the drive that fails is the same one being written to,
>> > would that lack of metadata updates to the other devices affect
>> > reconstruction?
>
> Again, to give a precise answer, a detailed question is needed.  Obviously
> any change would have to be made in such a way as to ensure that things
> which needed to work, did work.

Continuing from the previous example, let's say device 1 fails after a
write which updated the metadata only on devices 1 and 5 while 2, 3
and 4 were sleeping. In that case, to serve the data that was on
device 1, md will use devices 2, 3, 4 and 5, but will it then
propagate the newer metadata from device 5 onto 2, 3 and 4? I hope I
am making this clear.
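In other words, my understanding (a sketch of standard single-parity
reconstruction, not the md implementation) is that the failed member
is rebuilt as the XOR of all surviving members:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Rebuild the failed member (device 1) as the XOR of every
 * surviving member: data on 2, 3, 4 plus parity on 5. */
static void reconstruct(uint8_t *out, const uint8_t *const survivors[],
                        size_t nsurvivors, size_t len)
{
        memset(out, 0, len);
        for (size_t m = 0; m < nsurvivors; m++)
                for (size_t i = 0; i < len; i++)
                        out[i] ^= survivors[m][i];
}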

>
>
>> > - Adding new devices (is it possible to move the parity to the disk
>> > being added? How does device addition work for RAID4... is it added
>> > as a zeroed-out device with the parity disk remaining the same?)
>
> RAID5 or RAID6 with ALGORITHM_PARITY_0 puts the parity on the early devices.
> Currently if you add a device to such an array ...... I'm not sure what it
> will do.  It should be possible to make it just write zeros out.
>

Once again, is this something that could make its way onto your
roadmap? If so, great; otherwise, could you steer me towards where in
the md kernel and mdadm sources I should be looking to make these
changes? Thanks again.
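For what it's worth, the reason I believe writing zeros is sufficient:
folding an all-zero member into single parity is a no-op, so the
existing parity stays valid without being recomputed. A tiny sketch of
that invariant (my illustration, not md code):

#include <stddef.h>
#include <stdint.h>

/* Folding a new member into single parity: parity[i] ^= 0 leaves
 * the parity unchanged, so a zeroed-out device can be appended
 * without touching the parity disk. */
static void fold_new_member(uint8_t *parity, const uint8_t *new_member,
                            size_t len)
{
        for (size_t i = 0; i < len; i++)
                parity[i] ^= new_member[i];
}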

>
> NeilBrown
>
>
>> >
>> >
>>
>> Neil, sorry to bump this thread. Could you please look over the
>> questions and address the remaining items that could make this a
>> working solution? Thanks
>