Re: 2.6.36 io bring the system to its knees

On 7 November 2010 02:12, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Sun, Nov 07, 2010 at 01:10:24AM +1100, dave b wrote:
>> I now personally suspect that this problem is the kernel not keeping
>> track of readers vs. writers properly, or not giving reading processes
>> enough time relative to writing ones, which look like they are
>> blocking the system....
>
> Could be anything from that description....
>
>> If you want to do a simple test, do an unlimited dd (or two dd's of a
>> limited size, say 10GB) and a find /
>> Tell me how it goes :)
>
> The find runs at IO latency speed while the dd processes run at disk
> bandwidth:
>
> Device:    rrqm/s  wrqm/s     r/s      w/s   rMB/s   wMB/s avgrq-sz avgqu-sz  await  svctm  %util
> vda          0.00    0.00    0.00     0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
> vdb          0.00    0.00   58.00  1251.00    0.45  556.54   871.45    26.69  20.39   0.72  94.32
> sda          0.00    0.00    0.00     0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
>
> That looks pretty normal to me for XFS and the noop IO scheduler,
> and there are no signs of latency or interactive problems in
> the system at all. Kill the dd's and:
>
> Device:    rrqm/s  wrqm/s     r/s      w/s   rMB/s   wMB/s avgrq-sz avgqu-sz  await  svctm  %util
> vda          0.00    0.00    0.00     0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
> vdb          0.00    0.00  214.80     0.40    1.68    0.00    15.99     0.33   1.54   1.54  33.12
> sda          0.00    0.00    0.00     0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
>
> And the find runs 3-4x faster, but ~200 iops is about the limit
> I'd expect from 7200rpm SATA drives given a single thread issuing IO
> (i.e. 5ms average seek time).
>
>> (the system will stall)
>
> No, the system doesn't stall at all. It runs just fine. Sure,
> anything that requires IO on the loaded filesystem is _slower_, but
> if you're writing huge files to it that's pretty much expected. The
> root drive (on a different spindle) is still perfectly responsive on
> a cold cache:
>
> $ sudo time find / -xdev > /dev/null
> 0.10user 1.87system 0:03.39elapsed 58%CPU (0avgtext+0avgdata 7008maxresident)k
> 0inputs+0outputs (1major+844minor)pagefaults 0swap
>
> So what you describe is not a systemic problem, but a problem that
> your system configuration triggers. That's why we need to know
> _exactly_ how your storage subsystem is configured....
>
>> http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/4561
>> iirc can reproduce this on plain ext3.
>
> You're pointing to a "fsync-tester" program that exercises a
> well-known problem with ext3 (sync-the-world-on-fsync). Other
> filesystems do not have that design flaw so don't suffer from
> interactivity problems under these workloads. As it is, your above
> dd workload example is not related to this fsync problem, either.
>
> This is what I'm trying to point out - you need to describe in
> significant detail your setup and what your applications are doing
> so we can identify if you are seeing a known problem or not. If you
> are seeing problems as a result of the above ext3 fsync problem,
> then the simple answer is "don't use ext3".

Thank you for your reply.
Well, I am not sure :)
Is the answer "don't use ext3"?
If it is, what should I really be using instead?
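For reference, the test I have been running is roughly the following. This
is a scaled-down sketch (the sizes and the temp-dir path here are examples,
not what I actually used; the real runs were unlimited or ~10GB dd's and a
find of the whole filesystem):

```shell
# Scaled-down sketch of the dd + find test: one dd streaming writes
# while a find walks the filesystem at the same time. Sizes are tiny
# here so the sketch is safe to run anywhere.
TESTDIR=$(mktemp -d)

# Writer: sequential write load on the filesystem under test.
dd if=/dev/zero of="$TESTDIR/bigfile" bs=1M count=64 conv=fsync 2>/dev/null &
DD_PID=$!

# Reader: seek-heavy metadata workload competing for the same disk.
time find / -xdev -maxdepth 2 > /dev/null 2>&1

wait "$DD_PID"
rm -rf "$TESTDIR"
```

With the real sizes, the interesting observation is how long the find (and
the shell generally) takes while the dd is still running.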
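As a side note, a quick sanity check of the ~200 iops figure you quote
above, just to convince myself of the arithmetic:

```shell
# Back-of-the-envelope check: a single thread issuing synchronous IO
# against a drive with ~5ms average seek time can complete at most
# about 1/0.005 = 200 requests per second.
awk 'BEGIN { print 1 / 0.005 }'   # → 200
```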
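And on the ext3 fsync point: my (possibly wrong) understanding of what
fsync-tester measures is something like the sketch below. This is my own
paraphrase, not the actual program, and the probe path is just an example:

```shell
# Rough paraphrase of the idea behind fsync-tester: time a small
# synchronous write while the filesystem is otherwise under write load.
# On ext3 in ordered mode, the fsync issued by conv=fsync can end up
# flushing unrelated dirty data ("sync the world"), so this latency
# blows out badly when something else is writing heavily.
PROBE=$(mktemp)
time dd if=/dev/zero of="$PROBE" bs=4k count=1 conv=fsync 2>/dev/null
rm -f "$PROBE"
```

On an idle disk this should complete in milliseconds; run it next to a big
dd on ext3 and the difference should be obvious.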

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom policy in Canada: sign http://dissolvethecrtc.ca/