Re: Slow count(*) again...

On Mon, 11 Oct 2010, Samuel Gendler wrote:

On Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <scott@xxxxxxxxxxxxxxxxx> wrote:

I can't speak to documentation, but read-ahead is something that helps more as your
I/O subsystem gets more powerful. How much it helps depends on your hardware, which
may have adaptive read-ahead of its own, and on your filesystem, which may be more
or less efficient at sequential I/O.  For example, ext3 out of the box gets a much
bigger gain from tuning read-ahead than XFS does on a Dell PERC6 RAID card (but
still ends up slower).
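
For anyone who wants to experiment with this, the device-level read-ahead on Linux
can be read and changed through sysfs (or with blockdev --getra/--setra). Here is a
minimal sketch in Python, assuming the data volume is /dev/sda and that it runs as
root -- the device name and the 4 MiB value are placeholders, not recommendations:

    #!/usr/bin/env python
    # Minimal sketch: inspect and adjust Linux block-layer read-ahead via sysfs.
    # The device name is a placeholder -- point it at whatever device holds the
    # PostgreSQL data directory, and run as root to change the setting.
    # (Equivalent to blockdev --getra/--setra, except that sysfs is expressed
    # in KiB rather than 512-byte sectors.)

    DEVICE = "sda"  # placeholder; substitute your data volume
    RA_PATH = "/sys/block/%s/queue/read_ahead_kb" % DEVICE

    def get_readahead_kb():
        """Return the current read-ahead window in KiB."""
        with open(RA_PATH) as f:
            return int(f.read().strip())

    def set_readahead_kb(kb):
        """Set the read-ahead window; takes effect immediately."""
        with open(RA_PATH, "w") as f:
            f.write(str(kb))

    if __name__ == "__main__":
        print("current read_ahead_kb: %d" % get_readahead_kb())
        # Example only: a 4 MiB window for sequential-scan-heavy loads.
        # Benchmark before and after -- the right value is hardware-specific.
        # set_readahead_kb(4096)

Whatever value ends up helping, it does not persist across reboots, so people
usually put it in rc.local or a udev rule.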


Geez.  I wish someone had written something as bold as 'xfs is always faster than
ext3' in the standard tuning docs.  I couldn't find anything that made a strong
filesystem recommendation.  How does xfs compare to ext4?  I wound up on ext4 on a
Dell PERC6 RAID card when an unexpected hardware failure on a production system
forced my test system into production before I could do any serious testing of xfs.
If there is a strong consensus that xfs is simply better, I could afford the
downtime to switch.

Unfortunately, you are not going to get a clear opinion here.

ext3 has a long track record, and since it is the default, it gets a lot of testing. It does have known issues.

XFS had problems on Linux immediately after it was ported. It continues to be improved, and many people have been using it for years and trust it. XFS does have a weakness in creating/deleting large numbers of small files.

ext4 is the new kid on the block. It claims good things, but it's so new that many people don't trust it yet.

btrfs is the 'future of filesystems' that is supposed to be better than anything else, but it's definitely not stable yet, and time will tell whether it really lives up to its promises.

And this is just on Linux.

On BSD or Solaris (or, with out-of-kernel patches, on Linux) you also have ZFS, which some people swear by, and other people swear at.

David Lang


As it happens, this is a system where all of the heavy workload is in the form of
sequential-scan-type load. The OLTP workload is very minimal (tens of queries per
minute on a small number of small tables), but there are a lot of reporting queries
that wind up doing sequential scans of large partitions (millions to tens of
millions of rows).  We've sized the new hardware so that the most commonly used
partitions fit into memory, but if we could speed up the queries that touch less
frequently used partitions, that would be good.  I'm the closest thing our team has
to a DBA, which really only means that I'm the one person on the dev team or the
ops team to have read all of the postgres docs, the wiki, and the mailing lists.  I
claim no actual DBA experience or expertise and have limited cycles to devote to
tuning and testing, so if there is an established wisdom for filesystem choice and
read-ahead tuning, I'd be very interested in hearing it.
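
If it helps while evaluating that, here is a rough sketch of how one might list the
largest relations and compare them against available RAM, to see which partitions
actually fit in cache before and after any filesystem or read-ahead change. It
assumes psycopg2 and a placeholder connection string; nothing in it is specific to
any particular schema:

    #!/usr/bin/env python
    # Rough sketch: list the largest relations so they can be compared against
    # available RAM / shared_buffers.  Assumes psycopg2 is installed; the DSN
    # below is a placeholder for your own connection string.
    import psycopg2

    DSN = "dbname=reporting host=localhost"  # placeholder

    QUERY = """
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 20;
    """

    conn = psycopg2.connect(DSN)
    try:
        cur = conn.cursor()
        cur.execute(QUERY)
        for relname, total_size in cur.fetchall():
            print("%-40s %s" % (relname, total_size))
    finally:
        conn.close()

Running EXPLAIN ANALYZE on one of the reporting queries against a cold, rarely used
partition before and after a read-ahead change is probably the quickest way to see
whether the tuning is worth the trouble.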


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

