RE: Adaptive throttling for RAID1 background resync

Roberto, I still think the solution you point out has the potential to throttle foreground IOs issued to MD from the filesystem as well as the MD-initiated background resyncs. So I don't want to limit the IO queues, especially since our foreground workload involves a LOT of small random IO.
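For reference, a minimal sketch of the static knobs discussed further down the thread (sync_speed_min/sync_speed_max). These are per-array sysfs files and take values in KB/s; "md0" is an assumed array name, and the commands are printed rather than executed so nothing is changed by accident:

```shell
# Sketch only: prints the sysfs writes for the per-array static resync
# throttle instead of performing them. "md0" is an assumed array name;
# values are in KB/s (so 20000 is roughly 20 MB/s).
set_resync_limits() {
  # $1 = md device, $2 = min KB/s, $3 = max KB/s
  printf 'echo %s > /sys/block/%s/md/sync_speed_min\n' "$2" "$1"
  printf 'echo %s > /sys/block/%s/md/sync_speed_max\n' "$3" "$1"
}

set_resync_limits md0 20000 200000
```

The system-wide equivalents live in /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max.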

Thanks
~ Hari

-----Original Message-----
From: rspadim@xxxxxxxxx [mailto:rspadim@xxxxxxxxx] On Behalf Of Roberto Spadim
Sent: Friday, March 18, 2011 4:36 PM
To: Hari Subramanian
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: Adaptive throttling for RAID1 background resync

Hmm, isn't it an IO queue size problem (a very big in-RAM queue)?
Maybe making that queue smaller could help?
Resync is essentially "read here, write there": if the writes are the
bottleneck, the reads should stall once the async writes can't proceed
any further (no RAM left).
Am I right? If so, that's why I think the queue is the place to check.
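To make the suggestion concrete, a hedged sketch of the knob this seems to refer to: the per-device request queue depth (nr_requests) on the component disks. "sda"/"sdb" are assumed device names, and the commands are printed in dry-run form, not executed:

```shell
# Sketch only: prints the commands for inspecting and shrinking the
# per-device request queue depth. sda/sdb are assumed component disks
# of the array; drop the surrounding printf to actually apply them.
queue_depth_cmds() {
  # $1 = block device name
  printf 'cat /sys/block/%s/queue/nr_requests\n' "$1"
  printf 'echo 64 > /sys/block/%s/queue/nr_requests\n' "$1"
}

queue_depth_cmds sda
queue_depth_cmds sdb
```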

2011/3/18 Hari Subramanian <hari@xxxxxxxxxx>:
> Roberto, my use case involves foreground IO and background resyncs happening at the same time. So, by throttling at the block or IO queues, I would be limiting my throughput for foreground IOs as well, which is undesirable.
>
> ~ Hari
>
> -----Original Message-----
> From: rspadim@xxxxxxxxx [mailto:rspadim@xxxxxxxxx] On Behalf Of Roberto Spadim
> Sent: Friday, March 18, 2011 4:29 PM
> To: Hari Subramanian
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: Adaptive throttling for RAID1 background resync
>
> Maybe this could be better solved in the kernel's queueing layer... at
> the elevators or the block devices.
>
> 2011/3/18 Hari Subramanian <hari@xxxxxxxxxx>:
>> I am hitting an issue when performing a RAID1 resync from a replica hosted on a fast disk to one on a slow disk. When the resync throughput is set to 20Mbps min and 200Mbps max and we have enough data to resync, I see the kernel running out of memory quickly (within a minute). From the crash dumps, I see a whole lot (12,000+) of biovec-64s active in the slab cache.
>>
>> Our guess is that MD is allowing data to be read from the fast disk at a rate much higher than the slow disk can write it out. This continues for a long time (> 1 minute) in an unbounded fashion, resulting in a buildup of IOs waiting to be written to disk. This eventually causes the machine to panic (we have panic-on-OOM selected).
>>
>> From reading the MD and RAID1 resync code, I don't see anything that would prevent something like this from happening. So, we would like to implement something that adaptively throttles the background resync.
>>
>> Can someone confirm or deny these claims, and also the need for a new solution? Maybe I'm missing something that already exists that would give me the adaptive throttling. We cannot make do with the static throttling (sync_speed_min and sync_speed_max) since that would be too difficult to get right for the varying IO throughputs from the different RAID1 replicas.
>>
>> Thanks
>> ~ Hari
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
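A note on verifying the slab growth described above: the same biovec counts can be watched live in /proc/slabinfo (reading the real file normally requires root), whose data rows start with "name active_objs num_objs objsize ...". A small sketch, run here against a sample line rather than the real file:

```shell
# Sketch only: extracts the active_objs column for a named slab from
# slabinfo-formatted text. A sample line stands in for /proc/slabinfo.
active_objs() {
  # $1 = slab name; stdin = slabinfo-formatted text
  awk -v n="$1" '$1 == n { print $2 }'
}

sample='biovec-64 12345 12800 768 ...'
printf '%s\n' "$sample" | active_objs biovec-64
# On a live box: active_objs biovec-64 < /proc/slabinfo
```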
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

