Re: What's the best hardware for PostgreSQL 8.1?

On Mon, 26 Dec 2005, Alex Turner wrote:


Yes, but those blocks in RAID 10 are largely irrelevant as they are
written to independent disks.  In RAID 5 you have to write parity to an
'active' drive that is part of the stripe.  (They are irrelevant unless
of course you are maxing out your SCSI bus - yet another reason why SATA
can be faster than SCSI, particularly in RAID 10, where every channel is
independent.)

I don't understand your 'active' vs 'inactive' drive argument; in raid 1 or 1+0 all drives are active.

with good components you need to worry about maxing out your PCI bus as much as any other one (this type of thing is where hardware raid has a definite advantage, since the card handles the extra I/O, not your system)
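as a rough back-of-the-envelope sketch of that bus limit (the per-drive and bus numbers below are my assumptions for illustration, not measurements from any system discussed here):

# rough sketch: how many drives it takes to saturate the host bus.
# assumed numbers (not measured): ~60 MB/s sustained per 2005-era
# SATA drive, ~133 MB/s for classic 32-bit/33 MHz PCI, ~533 MB/s
# for 64-bit/66 MHz PCI
DRIVE_MB_S = 60

for bus, bus_mb_s in [("PCI 32/33", 133), ("PCI 64/66", 533)]:
    drives_to_saturate = bus_mb_s / DRIVE_MB_S
    print(f"{bus}: ~{bus_mb_s} MB/s, saturated by ~{drives_to_saturate:.1f} drives")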

Sorry - my math for RAID 5 was a bit off - I don't know why I was
considering only a three-drive situation, which is the worst case.
You're right, it's n+1.  Still, for small arrays that's a big penalty,
and there is definitely a penalty, contrary to the assertion of the
original poster.

I agree totally that the read+parity-calc+write in the worst case is
totally bad, which is why I always recommend people should _never ever_
use RAID 5.  In this day and age of large-capacity chassis and
large-capacity SATA drives, RAID 5 is totally inappropriate IMHO for
_any_ application, least of all databases.

In reality I have yet to benchmark a system where RAID 5, with 8 drives
or fewer in a single array, beat a RAID 10 with the same number of
drives.  I would definitely be interested in a SCSI card that could
actually achieve the theoretical performance of RAID 5, especially
under Linux.

but it's not a 'same number of drives' comparison you should be making.

if you have an 8-drive RAID5 array you need to compare it with a 14-drive RAID1/10 array (an 8-drive RAID5 gives you seven drives' worth of usable capacity, and it takes 14 mirrored drives to match that).
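the arithmetic, as a quick sketch (my own illustration of the capacity math):

# usable capacity: an n-drive raid5 stores n-1 drives' worth of data;
# raid1/10 mirrors everything, so matching it takes twice the data drives
def raid5_data_drives(total):
    return total - 1              # one drive's worth of space goes to parity

def raid10_total_drives(data):
    return 2 * data               # every data drive needs a mirror partner

data = raid5_data_drives(8)       # 8-drive raid5 -> 7 data drives
print(raid10_total_drives(data))  # -> 14 drives for the same usable space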

With RAID 5 you get to watch your system crumble and fail when a drive
dies and the array goes into a degraded state.  It's just not worth it.

speed is worth money (and therefore number of drives) in some cases, but not in all cases. also, the speed penalty when a raid drive fails varies based on your controller

it's wrong to flatly rule out any RAID configuration; they all have their place, and the important thing is to understand the advantages and disadvantages of each so you know when to use which.

for example, I am looking at a situation where RAID0 looks appropriate for a database: a multi-TB array that gets completely reloaded every month or so as data expires and new data is loaded from the authoritative source, where adding another 16 drives just to get redundancy isn't reasonable

David Lang

Alex.


On 12/26/05, David Lang <dlang@xxxxxxxxxxxx> wrote:
On Mon, 26 Dec 2005, Alex Turner wrote:

It's irrelevant what controller you use; you still have to actually
write the parity blocks, which slows down your write speed because you
have to write n+n/2 blocks instead of just n blocks, making the system
write 50% more data.

RAID 5 must write 50% more data to disk, therefore it will always be slower.

raid5 writes n+1 blocks, not n+n/2 (unless n=2, i.e. a 3-disk raid). you
can have a 15+1 disk raid5 array, for example

however raid1 (and raid10) have to write 2*n blocks to disk. so if you
are talking about pure I/O needed, raid5 wins hands down. (the same 16
drives would be an 8+8 array)
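a toy sketch of that pure-I/O comparison (my own illustration; full-stripe writes assumed, no read-modify-write):

# blocks physically written for n logical data blocks per stripe
def raid5_blocks(n):
    return n + 1      # n data blocks plus one parity block

def raid10_blocks(n):
    return 2 * n      # every block is written twice (mirror copy)

for n in (2, 7, 15):
    print(f"n={n:2}: raid5 writes {raid5_blocks(n)}, raid10 writes {raid10_blocks(n)}")
# n=2 is the 3-disk raid5 case above, the only one where the
# parity overhead reaches 50%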

what slows down raid 5 is that to modify a block you have to read back
existing blocks (the old data and parity, or the rest of the stripe) to
re-calculate the parity. this interleaving of reads and writes when all
you are logically doing is writes can really hurt. (this is why I asked
the question that got us off on this tangent: when doing new writes to
an array you don't have to read the blocks, as they are blank - assuming
your caching is enough that you can accumulate blocksize*n before the
system starts actually writing the data)
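to make the parity update concrete, a toy sketch in pure python (just illustrating the XOR identity; real controllers and the md driver do this internally):

# parity is the XOR of the data blocks in a stripe. updating one
# block in place needs the old data and old parity read back first:
#   P' = P xor D_old xor D_new
# that's two reads plus two writes for a single logical write,
# which is the interleaving described above
def new_parity(old_parity, old_block, new_block):
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

stripe = [bytes([1, 2]), bytes([3, 4]), bytes([5, 6])]
parity = bytes(a ^ b ^ c for a, b, c in zip(*stripe))
updated = bytes([9, 9])
# the shortcut matches recomputing parity over the whole new stripe
assert new_parity(parity, stripe[0], updated) == bytes(
    a ^ b ^ c for a, b, c in zip(updated, stripe[1], stripe[2]))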

David Lang

Alex.

On 12/25/05, Michael Stone <mstone+postgres@xxxxxxxxx> wrote:
On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:
Caches help, and the bigger the cache the better, but once you are
doing enough writes fast enough (and that doesn't take much, even with
a few GBs of cache) the recalculate-parity-and-write-new-blocks
overhead will decrease the write speed of real data.  Bear in mind
that the HDs' _raw_ write speed hasn't been decreased.  Those HDs
are pounding away as fast as they can for you.  Your _effective_ or
_data level_ write speed is what decreases due to overhead.

You're overgeneralizing. Assuming a large cache and a sequential write,
there need be no penalty for raid 5. (For random writes you may
need to read unrelated blocks in order to calculate parity, but for
large sequential writes the blocks needed to compute parity should all
be in cache.) A modern cpu can calculate parity for raid 5 on the order
of gigabytes per second, and even crummy embedded processors can do
hundreds of megabytes per second. You may have run into some lousy
implementations, but you should be much more specific about what
hardware you're talking about instead of making sweeping
generalizations.
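for a ballpark of the XOR cost itself, a crude sketch (numpy standing in for the kernel's optimized parity routines, which are typically faster still; the buffer size is arbitrary):

import time
import numpy as np

# xor two 64 MB buffers and report throughput; raid5 parity is
# essentially this operation repeated across the stripe
a = np.random.randint(0, 256, 64 * 1024 * 1024, dtype=np.uint8)
b = np.random.randint(0, 256, a.size, dtype=np.uint8)
start = time.perf_counter()
parity = np.bitwise_xor(a, b)
print(f"{a.size / (time.perf_counter() - start) / 1e9:.2f} GB/s xor")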

Side note: people often forget the other big reason to use RAID 10
over RAID 5.  RAID 5 is always only 2 HD failures from data
loss.  RAID 10 can lose up to 1/2 the HDs in the array w/o data loss,
unless you get unlucky and lose both members of a RAID 1 set.

IOW, your RAID 10 is only 2 HD failures from data loss also. If that's
an issue you need to go with RAID 6 or add another disk to each mirror.
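to put a number on "unlucky", a quick sketch (a toy model I'm assuming, where the second failure hits a uniformly random surviving drive):

# given one dead drive, the chance the next failure loses data
def p_fatal_raid5(total):
    return 1.0                 # any second failure is fatal

def p_fatal_raid10(total):
    return 1 / (total - 1)     # fatal only if the dead drive's mirror dies

for total in (8, 14):
    print(f"{total} drives: raid5 {p_fatal_raid5(total):.0%}, "
          f"raid10 {p_fatal_raid10(total):.0%}")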

Mike Stone




