Re: optimising filesystem for many small files

On Sun, 2009-10-18 at 21:59 +0530, Viji V Nair wrote:
> On Sun, Oct 18, 2009 at 8:37 PM, Jon Burgess <jburgess777@xxxxxxxxxxxxxx> wrote:
> > On Sun, 2009-10-18 at 18:44 +0530, Viji V Nair wrote:
> >> >
> >> >> can see lots of system resources are free, memory, processors etc
> >> >> (these are 4G, 2 x 5420 XEON)
> >
> > 4GB may be a little small. Have you checked whether the IO reading your
> > data sources is the bottleneck?
> 
> I will be upgrading the RAM, but I didn't see any swap usage while
> running these applications...
> the data source is on a different machine, postgres+postgis. I have
> checked the IO, looks fine. It is a 50G DB running on 16GB dual xeon
> box

Going into swap is not the issue. If you have extra RAM available then
the OS will use it as a disk cache, which means the DB can access
indexes etc. without needing to wait for the disk every time. 16GB of
RAM for a 50GB DB should be sufficient if the data is sensibly indexed.
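As a quick sanity check, you can see how much of that RAM the kernel is
already using as page cache (the commands below are generic Linux tools,
not specific to this setup):

```shell
# Show total/used/free memory; the "buffers" and "cached" figures are
# RAM the kernel is using as disk cache and will give back under pressure.
free -m

# The same figures straight from the kernel.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

If "Cached" stays small while the DB box is busy, the working set is not
fitting in RAM and the disks are doing the work.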

> I have to give mod_tile a try. Do you have any suggestions on using
> nginx/varnish as a cache layer?

There have been some tests using squid as a cache in front of mod_tile.
This worked reasonably well but did not give a big performance increase
because the server was already able to handle the load without an
additional cache. If you want to discuss this further then I'd suggest
continuing the conversation on the OSM or Mapnik lists.
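For reference, if you do want to experiment with nginx instead of squid,
a minimal caching front-end might look like the sketch below. The
backend address, cache path, sizes and TTLs are all placeholders, not
values from this thread — tune them for your tile server:

```nginx
# Hypothetical nginx cache layer in front of a mod_tile/Apache backend.
# All names, paths and timings here are illustrative.
proxy_cache_path /var/cache/nginx/tiles levels=1:2
                 keys_zone=tiles:64m max_size=10g inactive=7d;

server {
    listen 80;

    location / {
        proxy_pass        http://127.0.0.1:8080;  # assumed mod_tile backend
        proxy_cache       tiles;
        proxy_cache_valid 200 12h;                # keep good tiles for 12h
        add_header        X-Cache-Status $upstream_cache_status;
    }
}
```

The X-Cache-Status header makes it easy to watch the HIT/MISS ratio and
decide whether the extra layer is actually paying for itself.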

	Jon


_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
