Hi,

On Tue, 2009-01-13 at 18:44 -0200, Ramiro Blanco wrote:
> 2009/1/13 Steven Whitehouse <swhiteho@xxxxxxxxxx>
> <snip>
> > It looks like you won't really need to do a lot of tuning, it should
> > be ok on defaults. The only issue is how often the various processes
> > running on different nodes try to access the same data files.
> > Provided it's not too often, then everything should be fine,
>
> What kind of tuning would be required in the case of very frequent
> access to the same data?
> Cheers,

Ideally you want to arrange the application so that you are not pushing
the cache from node to node too often. So it depends on the application
rather than the filesystem.

The classic example is running a mail server with lots of small files in
the same directory; the solution is to use a number of separate
directories. The issue in that case is that creating and deleting files
requires exclusive access to the directory in which the files are being
created and deleted, so the application has to lay out its files such
that the nodes are not all trying to do that in a single directory at
once. It can make a huge difference to performance, and it's not
something which can really be fixed at the filesystem level,

Steve.

> --
> Ramiro Blanco
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
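
[Editor's note: the "separate directories" layout Steve describes can be
sketched as hash-based directory sharding. The function and shard count
below are illustrative assumptions, not anything from the thread or from
GFS2 itself.]

```python
import hashlib
import os


def shard_path(base_dir: str, filename: str, shards: int = 64) -> str:
    """Map a filename to one of `shards` subdirectories of base_dir.

    Hashing the name spreads file creates/deletes across many
    directories, so nodes in the cluster rarely contend for the
    exclusive lock on any single directory. Deterministic: the same
    filename always maps to the same subdirectory.
    """
    h = int(hashlib.sha256(filename.encode("utf-8")).hexdigest(), 16)
    subdir = os.path.join(base_dir, "%02x" % (h % shards))
    os.makedirs(subdir, exist_ok=True)  # safe if it already exists
    return os.path.join(subdir, filename)


# Hypothetical usage for a mail spool: each message lands in one of 64
# subdirectories instead of all messages sharing one directory.
path = shard_path("/var/spool/app", "msg-0001")
```

Any node can compute the same path independently, so no coordination is
needed beyond the filesystem's own locking.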