Re: Write intent bitmaps

Neil Brown <neilb@xxxxxxx> writes:

> On Thursday June 18, goswin-v-b@xxxxxx wrote:
>> carlos@xxxxxxxxxxxxxx (Carlos Carvalho) writes:
>> >  >2. On a RAID5 or RAID6 array, how much of a performance hit might I expect?
>> >
>> > Depends on the chunk and where the bitmap is. With an internal one the
>> > default chunk will cause a BIG hit. Fortunately it's very easy to try
>> > different settings with the array live, so you can easily revert when
>> > the world suddenly freezes around you... Our arrays are rather busy,
>> > so performance is important and I gave up on it. If you can put it on
>> > other disks I suppose it's possible to find a chunk size compatible
>> > with performance.
>> 
>> Worst case, every write to the RAID requires a matching write to the
>> bitmap, so your speed will be roughly halved. It is not (much) less
>> than half, though. You might think the seeks to and from the bitmap
>> would slow things down even more, but the worst case is random access,
>> which already means a seek between each write. The bitmap just adds
>> one write and one seek for each write and seek.
>
> I think half-speed would be very very unlikely.  md tries to gather
> bitmap updates so that - where possible - it might update several bits
> all at once.
>
> I have measured a 10% performance drop.  However it is very dependent
> on workload and, as you say, bitmap chunk size.

From my tests with internal bitmaps, half is what you get with the
default size. At least that is what I got with a software RAID over
external RAID enclosures. It might be a side effect of the bitmap
writes not covering a full stripe on the external enclosures, which
then hold the writes back in the hope of getting more data for that
stripe. In any case it was quite unusable.
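Since the bitmap can be dropped and re-added on a live array, the chunk
size is easy to experiment with. A rough sketch of what I mean — the
device name /dev/md0 and the 128 MiB chunk are just examples, and the
commands are echoed rather than executed so nothing is touched:

```shell
# Dry-run sketch: /dev/md0 and the chunk size are assumptions, not a recipe.
MD=/dev/md0
# Remove the current bitmap, then re-add an internal one with a much
# larger chunk, so far fewer bitmap updates hit the member disks.
drop="mdadm --grow $MD --bitmap=none"
readd="mdadm --grow $MD --bitmap=internal --bitmap-chunk=131072"   # chunk in KiB
# Echoed here instead of run; drop the echo (and run as root) to apply.
echo "$drop"
echo "$readd"
```

Reverting is just the same pair of commands with a different chunk, so
you can measure throughput under your real workload between steps.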

>> One benefit of the bitmap during a full resync, though, is (afaik)
>> that it gives a better indication of how much is already done. If the
>> system crashes and reboots, the resync will resume instead of
>> restarting.
>
> When you are rebuilding a drive that had failed, we call that
> "recovery", not "resync".
> With 0.90 metadata, a recovery will always restart at the beginning.
> With 1.x metadata, we checkpoint the recovery so it won't duplicate
> very much work.
>
>
> NeilBrown
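That checkpointing can be watched on a live array. A dry-run sketch —
the device names are assumptions, and the commands are only echoed here:

```shell
# Sketch only: /dev/md0 and the member device /dev/sdb1 are assumed names.
progress="cat /proc/mdstat"                  # shows resync/recovery percentage
bitmap="mdadm --examine-bitmap /dev/sdb1"    # dumps the bitmap superblock on a member
echo "$progress"
echo "$bitmap"
```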

One of these days I have to redo my home raids with newer metadata.
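For the record, a create line with 1.x metadata and an internal bitmap
might look like the sketch below. The level, device names and device
count are made up, and since --create destroys existing data the
command is echoed rather than run:

```shell
# Hypothetical sketch -- every name here is an assumption, and
# --create is destructive, hence the echo.
create="mdadm --create /dev/md0 --metadata=1.2 --level=5 \
  --raid-devices=3 --bitmap=internal /dev/sdb1 /dev/sdc1 /dev/sdd1"
echo "$create"
```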

MfG
        Goswin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
