Re: Tiobench results LOWER with more threads

On Tue, Oct 15, 2002 at 10:20:55AM +0200, Vladimir Milovanovic wrote:
> OK, just joined the list and read the FAQ, and something caught my eye.
> Tiobench results are apparently supposed to INCREASE when there are more
> threads.

No, what gave you that idea?

It is so much easier for the kernel to handle one sequential stream of
I/O, instead of many streams.

If you have more than one stream, you need to seek. Seeking is bad. A
single sequential stream is almost always (with the notable exception of
RAID-1 reads) faster in total sustained throughput, and always (as in
really always) faster in per-thread sustained throughput.

> 
> What I have is Tiobench results decreasing with more threads. This is my 
> setup:

Good, tiobench works  :)

> 
> Celeron 633
> 196 MB PC 133
> Adaptec 29160 SCSI controller (PCI)
> 5 IBM Ultrastar 18XP (18gig, SCSI-3) disks hanging off the Adaptec 
> controller
> Red Hat 7.3 Linux (2.4.18-3)
> 
> Experimenting with different RAID configurations, I have found that I
> cannot get more than 32 MB/s from this array with 4 disks plus one spare.
> I have actually found out that the disks set the SCSI bus at 40 MB/s
> (since the disks are old) and that in RAID 0 it scales well: the speed
> doubles for two disks, the third disk brings in a little more
> performance, and then things top off at 32 MB/s with four disks. Adding
> the fifth disk gains no extra performance.
> 
> Apparently VIA chipsets have problems with PCI bursting, so that is why 
> I can't see the full 40 MB/s. That's fine.

There's some SCSI overhead as well, and probably a RAM bandwidth
limitation too - although that is probably not very important at the
speed you're seeing. But it all adds up.

> But my tests with tiobench also show that performance decreases as
> extra threads are added. I am testing with a file of 800 MB (approx. 4x
> the size of RAM, to get meaningful results), and the decrease with more
> threads, for READING only, is consistent across all RAID levels. The
> write performance will sometimes increase, sometimes stay the same.
> 
> WHY is this happening? Is it something I have not set up right, or
> what? I am not so much interested in getting more speed out of these old
> disks, they are gonna be replaced soon anyway, but I REALLY want to
> know WHY this is happening.

If you used SDRAM instead of disks with actual spindles, you would see
almost the same total sustained I/O going from one thread to a handful
or two.

But you use real disks. Those have heads that need to move when seeking.
The average seek time for your disks is probably around 7 ms. Let's say
you can do 10 MB/sec sequential reading from a disk (probably a low
number); then *one* seek (7 ms at 10 MB/sec) costs you the equivalent of
roughly 72 kB (a low number again) of transfer that you do not get while
the head is moving.
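
To make that arithmetic concrete, here is a minimal back-of-the-envelope
sketch in Python. The 7 ms seek time and 10 MB/sec streaming rate are
just the round assumed numbers from above, not measurements of your
disks:

    # Back-of-the-envelope model, not a benchmark.  Assumed numbers from
    # the text above: ~7 ms average seek, ~10 MB/sec sequential rate.
    SEEK_TIME_S = 0.007          # average seek time, seconds
    STREAM_RATE_KB = 10 * 1024   # sequential throughput, kB/sec

    def effective_throughput(seeks_per_sec):
        """kB/sec left over after paying for that many seeks per second."""
        time_spent_seeking = seeks_per_sec * SEEK_TIME_S
        return max(0.0, (1.0 - time_spent_seeking) * STREAM_RATE_KB)

    print("one seek costs ~%.0f kB" % (SEEK_TIME_S * STREAM_RATE_KB))
    for seeks in (0, 10, 50, 100, 140):
        print("%3d seeks/sec -> %6.0f kB/sec"
              % (seeks, effective_throughput(seeks)))

With those numbers, every seek burns roughly 72 kB of potential transfer,
and at around 140 seeks per second the disk spends nearly all of its time
moving heads instead of transferring data.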

The kernel will have to do a lot of seeks to satisfy your multiple
readers. That's many times 72 kB, and that's why you're losing
performance with more readers (or writers).

The more you add, the more you lose  :)

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:
