Re: Tablespaces and NFS

Thanks again, Peter, for expanding on these points.

Peter Koczan wrote:
Anyway...  One detail I don't understand --- why do you claim that
"You can't take advantage of the shared file system because you can't
share tablespaces among clusters or servers" ???

I say that because you can't set up two servers to point to the same
tablespace

My bad! I was only looking at it through the lens of my current problem at hand, so I misinterpreted what you said. Your statement is clear and unambiguous, and I agree there is little debate about it. Since I'm talking about *one* postgres server spreading its storage across several filesystems, I didn't understand why you seemed to be claiming that that could not be combined with tablespaces ...
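
For the record, the single-server arrangement I have in mind is just the standard tablespace mechanism. A minimal sketch (the mount points and names below are hypothetical; the directories must already exist and be owned by the postgres user):

```sql
-- Each tablespace lives on a different filesystem/mount point
-- (paths are made up for illustration).
CREATE TABLESPACE fast_disk LOCATION '/mnt/disk1/pgdata';
CREATE TABLESPACE bulk_disk LOCATION '/mnt/disk2/pgdata';

-- One server can then spread a hot table and its index
-- across the two filesystems.
CREATE TABLE orders (id serial PRIMARY KEY, payload text)
    TABLESPACE fast_disk;
CREATE INDEX orders_payload_idx ON orders (payload)
    TABLESPACE bulk_disk;
```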

I know this doesn't fully apply to you, but I thought I should explain
my points better since you asked so nicely :-)

:-)   It's appreciated!

If you get decently fast disks, or put some slower disks in RAID 10,
you'll easily get >100 MB/sec (and that's a conservative estimate).
Even with a Gbit network, you'll get, in theory, 128 MB/sec, and that's
assuming that the NFS'd disks aren't a bottleneck.

But still, with 128 MB/sec (modulo some possible NFS bottlenecks), I would
be a bit more optimistic, and would actually be tempted to retry your experiment with my setup. After all, with the setup that we have *today*, I don't think I get a sustained transfer rate above 80 or 90 MB/sec from the hard drives (as
far as I know, they're plain-vanilla enterprise-grade SATA2 drives, which
I believe don't get past about 90 MB/sec sustained transfer rate).
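
Just to put numbers on that back-of-the-envelope comparison (a quick sketch; the 90 MB/sec disk figure is my estimate from above, and whether a gigabit counts as 1000 or 1024 megabits only changes the ceiling slightly):

```python
# Theoretical link ceilings, ignoring protocol overhead.
GBIT_DECIMAL = 1000 / 8   # 1 Gbit/s in MB/s, decimal megabits: 125.0
GBIT_BINARY = 1024 / 8    # 1 Gbit/s in MB/s, binary megabits:  128.0

DISK_STR = 90             # assumed sustained transfer rate of one SATA2 drive, MB/s

# A single such drive cannot saturate the gigabit link...
print(DISK_STR < GBIT_DECIMAL)      # True

# ...but two drives streaming in parallel could exceed it,
# so the network, not the disks, becomes the bottleneck.
print(2 * DISK_STR > GBIT_BINARY)   # True
```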

I sadly don't know enough networking to tell you how to tell the client
software "no really, I'm over here." However, one of the things I'm
fond of is using a module to store connection strings, and dynamically
loading said module on the client side. For instance, with Perl I
use...

use DBI;
use DBD::Pg;
use My::DBs;    # site-local module holding the connection strings

# The DSN comes from the shared module, so clients never hard-code it.
my $dbh = DBI->connect($My::DBs::mydb);

Assuming that the module and its entries are kept up to date, it will
"just work." That way, there's only 1 module to change instead of n
client apps.

Oh no, but the problem we'd have would be at the level of the database design and access. For instance, some of the tables that I think are bottlenecking (the ones I would like to spread with tablespaces) are quite interconnected with each other: foreign keys come and go, and on the client applications many transaction blocks include several of those tables. If I were to spread those tables across
several backends, I'm not sure the changes would be easy :-(

I can have a new server with a new name up without
changing any client code.

But then, you're talking about replicating data so that multiple client apps can pick one of the several available "quasi-read-only" servers, I'm guessing?

Anyway, I'll keep working on alternative solutions --- I think
I have enough evidence to close this NFS door.

That's probably for the best.
Yep --- still closing that door!! The points I'm arguing in this message are just in the spirit of discussing and better understanding the issue. I'm still
convinced by your evidence.

Thanks,

Carlos
--




---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster
