Issues with large chunk size (16Mb)


 



I've set up a server with a large amount of disk space (10x10TB HDDs).
This server delivers files over HTTP to many clients; the typical
file size is several MB.
I use RAID 6 and XFS.
I decided to make the chunk size as large as possible.

My reasoning is:
HDD performance is mostly limited by seeks.
With the default chunk size (512KB), reading a 4MB file touches 8 HDDs (8 seeks).
With a large chunk size, only one HDD is touched (1 seek).
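To make the arithmetic behind this explicit, here is a rough sketch. It assumes a simple striping model where a sequential read of a file touches min(data_disks, ceil(file_size / chunk_size)) disks, and that 10 disks in RAID 6 leave 8 data disks; the numbers are illustrative, not measurements.

```shell
file_kb=4096        # 4MB file, as in the example above
data_disks=8        # assumption: 10 disks in RAID 6 -> 8 data disks
for chunk_kb in 512 4096 16384; do
  # number of chunks the file spans (ceiling division)
  stripes=$(( (file_kb + chunk_kb - 1) / chunk_kb ))
  # a read cannot touch more disks than there are data disks
  touched=$(( stripes < data_disks ? stripes : data_disks ))
  echo "chunk=${chunk_kb}KB -> ~${touched} disk(s) touched"
done
```

For the 512KB chunk this gives 8 disks touched, matching the 8-seek figure above; for both 4MB and 16MB chunks it gives 1, which is why (by this model alone) 4MB should already capture the seek benefit.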

So I created the array with the maximum possible chunk size (16MB),
and I am having issues with this array:
https://bugzilla.kernel.org/show_bug.cgi?id=201331

I have another server with a similar setup and ran some tests on it.
As expected, a large chunk size gives significantly better
multithreaded large-block read performance.
But write performance drops with chunk sizes over 4MB.
So I set up the second server with a 4MB chunk size, and I see no such
deadlocks on that server.

Now I have tried to change the chunk size on the first server, but with no success:
# mdadm --grow /dev/md3 --chunk=4096  --backup-file=/home/md3-backup
chunk size for /dev/md3 set to 16777216

(and no changes in /proc/mdstat)
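For completeness, these are the checks I would use to confirm whether a reshape actually started (standard mdadm and procfs interfaces; the device name is the one from the command above):

```shell
# Show the current chunk size and any reshape state for the array
mdadm --detail /dev/md3 | grep -Ei 'chunk|reshape'

# While a reshape is running, mdstat shows a "reshape = x%" progress line
cat /proc/mdstat
```

In my case `mdadm --detail` still reports the 16MB chunk size and `/proc/mdstat` shows no reshape progress line.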

I have some questions:
1. Is a deadlock under load expected behavior with a 16MB chunk size,
or is it a bug that should be fixed?
2. Is it possible to reshape an existing RAID array to a smaller chunk size
(without data loss)?
3. Why do chunk sizes over 4MB cause bad write performance?




