*terrible* direct-write performance with raid5

While debugging some other problem, I noticed that direct-I/O
(O_DIRECT) write speed on a software raid5 array is terribly slow.
Here's a small table just to show the idea (what matters is not the
numbers themselves, which vary from system to system, but how they
relate to each other).  I measured "plain" single-drive performance
(sdX below), the performance of a raid5 array composed of 5 sdX
drives, and an ext3 filesystem (the test file on the filesystem was
pre-created during the tests).  Speed measurements were performed
with an 8-Kbyte buffer, aka write(fd, buf, 8192*1024); units are
MB/sec.

             write   read
  sdX         44.9   45.5
  md           1.7*  31.3
  fs on md     0.7*  26.3
  fs on sdX   44.7   45.3

"Absolute winner" is a filesystem on top of a raid5 array:
700 kilobytes/sec, sorta like a 300-megabyte ide drive some
10 years ago...
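
For what it's worth, the test loop was basically along these lines
(a minimal sketch, not the exact program used; the device path,
buffer size and write count below are just placeholders):

/* Sequential O_DIRECT writes of a fixed-size buffer to a pre-created
 * file or a block device, timed to report MB/sec. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BUFSZ   (8 * 1024)      /* per-write buffer size (placeholder) */
#define NWRITES 4096            /* total data = BUFSZ * NWRITES        */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/md0";  /* placeholder */
    struct timeval t0, t1;
    double secs, mb;
    void *buf;
    int fd, i;

    /* O_DIRECT needs a suitably aligned buffer; 4096 is safe here */
    if (posix_memalign(&buf, 4096, BUFSZ)) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0, BUFSZ);

    fd = open(path, O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror(path);
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < NWRITES; i++) {
        if (write(fd, buf, BUFSZ) != BUFSZ) {
            perror("write");
            return 1;
        }
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    mb = (double)BUFSZ * NWRITES / (1024.0 * 1024.0);
    printf("%s: %.1f MB in %.2f sec = %.1f MB/sec\n", path, mb, secs, mb / secs);

    close(fd);
    free(buf);
    return 0;
}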

The raid5 array was built with mdadm using the default options, i.e.
Layout = left-symmetric, Chunk Size = 64K.  The same test with raid0
or raid1, for example, shows quite good performance (still not
perfect, but *much* better than raid5).

It's also quite interesting how different the I/O speed is for the
fs-on-md case vs the fs-on-sdX case: on plain sdX, the filesystem
code adds almost nothing over the raw partition speed, while it makes
a lot of difference when used on top of an md device.

Comments anyone? ;)

Thanks.

/mjt
