Re: 2x6 or 3x4 raid10 arrays ?

Keld Jørn Simonsen <keld <at> dkuug.dk> writes:

> I believe that a full chunk is read for each read access.

I've read various definitions of "chunk". Is a 'chunk' a contiguous group of disk
sectors on a single physical drive (device, let's say "spindle"), and a 'stripe'
the set made of one chunk from each spindle at the same offset? From what I
understand this is the definition used in the 'md' world (see 'man mdadm'), so I
will use it hereafter.

Yes, AFAIK a full chunk is involved in each access.

> Most random database reads are much smaller than 256 kiB.
> So the probability that one random read can be done with just one 
> seek + read operation is very high, as far as I understand it.

Indeed. In fact I proposed choosing the chunk size according to the (known)
average size of the data blocks read and written. Most database servers can
report this (let your application run normally for hours or days, then collect
the figures); one can also use some instrumentation (blktrace...)
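For the record, a quick-and-dirty way to get a ballpark figure without blktrace
is to sample /proc/diskstats over an interval; here is a minimal Python sketch
(the device name and sampling interval are placeholders to adapt to your setup):

#!/usr/bin/env python3
# Rough sketch: estimate the average read/write request size of a block device
# by sampling /proc/diskstats twice. Field layout per Documentation/iostats.txt:
# 3 = reads completed, 5 = sectors read, 7 = writes completed, 9 = sectors written.
import time

DEVICE = "sda"      # assumption: the spindle you care about
INTERVAL = 60       # seconds to sample

def snapshot(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return (int(fields[3]), int(fields[5]),
                        int(fields[7]), int(fields[9]))
    raise SystemExit("device %s not found in /proc/diskstats" % dev)

r0, sr0, w0, sw0 = snapshot(DEVICE)
time.sleep(INTERVAL)
r1, sr1, w1, sw1 = snapshot(DEVICE)

reads, writes = r1 - r0, w1 - w0
if reads:
    print("avg read size : %.1f KiB" % ((sr1 - sr0) * 512.0 / reads / 1024))
if writes:
    print("avg write size: %.1f KiB" % ((sw1 - sw0) * 512.0 / writes / 1024))

(Sectors are 512 bytes; run it while the database is under its normal load.)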

> This would lead to that it is not important whether to use 
> two arrays of 6 disks each, or 3 arrays of 4 disks each. 
> Or for that sake 1 array of 12 disks.

I beg to disagree. Creating more than one array may be fine when you know your
load profile per table very precisely, but in most cases you don't, or the
profile will vary over time. Your best bet is therefore to keep, for each
request, as many disk heads available as possible: carpet-bomb the single array
with all requests and let the elevator(s) optimize. Seen the other way around,
you don't want any head sitting idle while there is a request to serve.
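To make the "carpet-bombing" argument a bit more concrete, here is a toy Python
simulation (entirely illustrative: the table count, the Zipf-like popularity and
the table-to-array mapping are assumptions, not measurements) comparing how
evenly random small reads land on the spindles in the two layouts:

#!/usr/bin/env python3
# Toy simulation: 12 spindles as one array vs. three arrays of 4, with tables
# striped over their whole array and skewed table popularity.
import random
from collections import Counter

random.seed(0)
N_REQUESTS = 100_000
N_TABLES = 30
# Zipf-like assumption: table i gets weight 1/(i+1), so a few tables are hot.
weights = [1.0 / (i + 1) for i in range(N_TABLES)]

def imbalance(per_disk_counts, n_disks):
    mean = sum(per_disk_counts.values()) / n_disks
    return max(per_disk_counts.values()) / mean

# Case A: one 12-disk array -- every table is striped over all 12 spindles,
# so each request hits a uniformly random spindle whatever the table mix is.
hits_a = Counter()
for _ in range(N_REQUESTS):
    hits_a[random.randrange(12)] += 1

# Case B: three 4-disk arrays -- each table lives on one array of 4 spindles.
table_to_array = {t: t % 3 for t in range(N_TABLES)}
hits_b = Counter()
for t in random.choices(range(N_TABLES), weights=weights, k=N_REQUESTS):
    hits_b[table_to_array[t] * 4 + random.randrange(4)] += 1

print("one 12-disk array  : busiest/mean = %.2f" % imbalance(hits_a, 12))
print("three 4-disk arrays: busiest/mean = %.2f" % imbalance(hits_b, 12))

The split layout ends up with some spindles much busier than others as soon as
the per-array load is uneven, which is exactly the "sleeping heads" situation.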

> Some other factors may be more important: such as the ability to survive
> disk crashes

That's very true; however, one must not neglect logistics. If I'm fairly sure I
can replace a spindle within 2 hours of a failure, I would rather put all disks
but one into a single array and keep the last one as a connected (but powered
off) spare. The alarm trips, an automatic or manual procedure powers up the
spare and adds it to the array, while the procedure for physically extracting
the failed device and replacing it (the replacement becomes the new spare) runs
in parallel. With more latency-prone logistics one may reserve more disks as
spares.
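For what it's worth, the "alarm trips, spare gets added" step can be scripted.
A minimal Python sketch (the array and device names are placeholders, and the
power-up of the spare is entirely site-specific, so it is only hinted at here):

#!/usr/bin/env python3
# Sketch of the cold-spare procedure: watch /proc/mdstat for a failed member,
# then add the (now powered-up) spare to the degraded array with mdadm.
import subprocess
import time

ARRAY = "/dev/md0"        # assumption: the RAID10 array to watch
COLD_SPARE = "/dev/sdl"   # assumption: the connected but idle spare

def array_degraded():
    """True if /proc/mdstat reports a faulty member, e.g. 'sdc1[2](F)'."""
    with open("/proc/mdstat") as f:
        return "(F)" in f.read()

while True:
    if array_degraded():
        # Powering up the spare depends on your enclosure/PSU control and is
        # not shown; only the md side of the procedure is issued here.
        subprocess.run(["mdadm", "--manage", ARRAY, "--add", COLD_SPARE],
                       check=True)
        print("added %s to %s; rebuild should start now" % (COLD_SPARE, ARRAY))
        break
    time.sleep(30)

In practice one would rather hook this into 'mdadm --monitor' alerts than poll,
but the idea is the same.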


