Re: more 10K disks or less 15K disks

On Thu, Apr 29, 2010 at 1:51 PM, Scott Whitney <scott@xxxxxxxxxxx> wrote:
> During the testing that I did when moving from pg7 to pg8 a few years back, I didn't notice any particular performance
> increase on a similarly-configured server.
>
> That is, we've got 14 disks (15k rpm) striped in a single RAID10 array. Moving the logs to an internal RAID
> versus leaving them on the "main" storage array didn't impact my performance noticeably either way.
>
> Now, please note:
> SUCH A STATEMENT DEPENDS ENTIRELY ON YOUR USE-CASE SCENARIO.
>
> That is, those tests only showed what happened for me; if you're using PG in a much different
> manner, you may see different results. It's also quite possible that our 50% expansion over the past 3
> years has had some effect, but I'm not in a position to retest that at this time.
>
> We specifically chose to put our logs on the fiber SAN in case the underlying machine went down.
> Disaster recovery for that box would therefore be:
> a) New machine with O/S and pg installed.
> b) Mount SAN
> c) Start PG. Everything (including logs) is available to you.
>
> It is, in essence, our "externally-stored PG data" in its entirety.
>
> On the 10k vs 15k rpm disks, there's a _lot_ to be said about that. I don't want to start a flame war here,
> but going from 10k to 15k rpm hard drives does NOT equate to a 50% improvement in read/write
> performance, to say the VERY least.
>
> "Average seek time" is the time it takes for the head to move from random place A to
> random place B on the drive. The rotational latency of a drive can be easily calculated.
> A 15k drive rotates roughly 250 times per second, or 4 msec per rotation, versus a 10k
> drive, which is about 167 rotations per second, or 6 msec per rotation.
>
> This would mean that the average rotational latency (half a rotation) of a 15k drive adds 2msec
> and a 10k drive adds 3msec.
>
> So, your true access time is the "average seek time" of the drive + the rotational latency listed above.
>
> So, if your average seek time is something REALLY good (say 4msec) for each of the drives, your 15k
> drive would have a 6msec real-world access time, giving IOPS of around 166, and your 10k drive would
> have 7msec, or around 143. In that particular case, at a very low level, you'd be getting about a 14%
> improvement.
>
> HOWEVER, we're not talking about a single drive here. We're talking about a RAID10 of 12
> drives (6 + 6 mirror, I assume) versus 24 drives (12 + 12 mirror, I assume). In that case,
> the max IOPS of your first RAID would be around 1000 (per-drive IOPS times the 6 mirror pairs),
> while the max IOPS of your second RAID with the "slower" drives would be around 1700.
>
>
> Hope this helps.
>
> I _really_ don't want to start a war with this. If you're confused how I got these
> numbers, please contact me directly.
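
For anyone who wants to check the arithmetic quoted above, here's a quick Python
sketch of the same back-of-envelope math. The numbers and assumptions are mine, not
from the quoted post: a 4 ms average seek for both drive types, and one usable
spindle per mirror pair for random I/O.

def rotational_latency_ms(rpm):
    # Average rotational latency is half of one full rotation.
    ms_per_rotation = 60000.0 / rpm       # 15000 rpm -> 4 ms, 10000 rpm -> 6 ms
    return ms_per_rotation / 2.0          # -> 2 ms and 3 ms on average

def drive_iops(rpm, avg_seek_ms=4.0):
    # "True" access time = average seek + average rotational latency.
    access_ms = avg_seek_ms + rotational_latency_ms(rpm)
    return 1000.0 / access_ms             # random ops per second per drive

def raid10_iops(rpm, total_drives, avg_seek_ms=4.0):
    # Count one spindle per mirror pair, as in the quoted estimate.
    pairs = total_drives // 2
    return pairs * drive_iops(rpm, avg_seek_ms)

print("15k drive:        %.0f IOPS" % drive_iops(15000))        # ~167
print("10k drive:        %.0f IOPS" % drive_iops(10000))        # ~143
print("12 x 15k RAID10:  %.0f IOPS" % raid10_iops(15000, 12))   # ~1000
print("24 x 10k RAID10:  %.0f IOPS" % raid10_iops(10000, 24))   # ~1714

Same ballpark as the figures quoted: the per-drive gap is roughly 15%, but doubling
the spindle count swamps it.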

No argument from me.  My point was that the RAID-10 is for random
access at x op/s while the WAL is for sequential IO at x mb/s.  If you
put the pg_xlog on the same drives as the data set, then you basically
halve throughput by writing everything randomly twice.  Better RAID
controllers with lots of cache can re-order all that so that you still
get good throughput.  If you've got a RAID controller with less
memory, then it can really help to have a separate pg_xlog.  My
machines are about 30% or so faster with a separate pg_xlog than with
it all on one set, on an Areca 1680.
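
To put a rough number on the "writing everything randomly twice" point, here's a toy
model in the same spirit. It is entirely my own simplification: one data-page write
plus one WAL write per transaction, no controller cache, and a shared array that
treats the WAL write as just another random op competing for the same budget.

def tx_per_sec(array_random_iops, wal_on_same_array):
    # Each transaction costs one random data-page write, plus one WAL write
    # that only competes for the array's budget if the WAL shares the array.
    writes_per_tx = 2 if wal_on_same_array else 1
    return array_random_iops / writes_per_tx

print("WAL on the data array: ~%.0f tx/s" % tx_per_sec(1700, True))    # ~850
print("WAL on its own disks:  ~%.0f tx/s" % tx_per_sec(1700, False))   # ~1700

A battery-backed controller cache blurs this a lot in practice, which is why the
difference shows up most on controllers with little memory.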

-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
