Tuning the I/O scheduler for md?

Based on various comments I googled, I selected 'deadline' as the
elevator for the disks comprising my md arrays, with no further tuning
yet. So far the results are not so stellar :(
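
For the record, here is roughly how I set it. sda/sdb are just example
member names for my setup, and the elevator has to be set on the
underlying member disks, not on the md device:

  # show available schedulers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler
  # switch both members to deadline
  echo deadline > /sys/block/sda/queue/scheduler
  echo deadline > /sys/block/sdb/queue/scheduler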

Basically concurrent reads don't work too well (even just 2, and it's
even worse with 1 read + 1 write).

Example:

RAID1: I bulk-move some 90GB of files onto the mirror, which takes a
while. Meanwhile I want to do an ls on another directory on the same
mirror. The ls output fits on 20 lines but takes in excess of 10
seconds.

RAID5: That's where the 90GB above is coming from. Concurrently, my
music player wants to load a song from the same array every 3-5
minutes. Each load takes 5-10 seconds.

During the transfer I also tried to open a Samba share on another
disk that was idle at the time; it still took about 3 seconds, since
Samba itself lives on the mirror.

The bulk transfer (one mv) seems to utterly starve everything else.
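
A rough way to watch this happening, assuming sysstat is installed
(the path below is just a placeholder for a directory on the mirror):

  # extended per-device stats every second; a huge await/avgqu-sz
  # on the member disks while ls hangs would confirm the requests
  # are queueing at the disks
  iostat -x 1
  # time the interactive victim while the mv runs
  time ls /mnt/mirror/some/dir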

CPU load is non-existent, all 2GB of RAM are used for cache, and swap
is untouched.
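
One thing I haven't tried yet is shrinking the writeback cache so the
mv can't dirty hundreds of MB and then flush them in one long burst. A
sketch of what I have in mind (the percentages are guesses, not
recommendations):

  # check the current thresholds (dirty_ratio is often 40, i.e.
  # ~800MB of dirty pages on 2GB before the writer is throttled)
  cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
  # start background writeback earlier and throttle the writer sooner
  echo 5 > /proc/sys/vm/dirty_background_ratio
  echo 10 > /proc/sys/vm/dirty_ratio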

I realize there's no way to tell the kernel which traffic is
interactive and which is bulk, but I also don't see why deadline
should starve so much.
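
For reference, these are the deadline knobs I can see under sysfs, in
case someone can suggest sane values. The defaults in the comments are
from my reading of Documentation/block/deadline-iosched.txt, so take
them with a grain of salt:

  # per-disk deadline tunables (shown for sda)
  cd /sys/block/sda/queue/iosched
  cat read_expire      # read deadline in ms, default 500
  cat write_expire     # write deadline in ms, default 5000
  cat writes_starved   # read batches before writes get served, default 2
  cat fifo_batch       # requests dispatched per batch, default 16
  # e.g. to favour reads harder during bulk writes:
  echo 100 > read_expire
  echo 4 > writes_starved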

Any pointers?

Thanks,

Christian