Axel Thimm wrote:

>> It's been a while since I used GFS, and it was GFS1 on RHAS3 or
>> maybe 4.  At that time, GFS performance was poor wrt ext3 even when
>> the storage was locally attached to a single server.  But what it
>> did was so useful for an HA cluster that you would excuse it for
>> not being also fast.
>
> ... aren't you doing it again? On one post you assume GFS being as
> fast as any local fs, only to admit that it isn't.

Yes, I look confused, but actually I'm not.  Or so I believe.

The slow performance I'm talking about is again something you'd
measure by running "find . >/dev/null", maybe twice.  Issuing
thousands of small queries makes most network filesystems, and the
old GFS1, crawl.  That's probably because these filesystems can't
cache metadata at the VFS layer and must go through the lower layers
to answer.

If you think this access pattern is uncommon, consider that git, svn,
cvs and even make are designed around the assumption that stat'ing
files is cheap.

When it comes to read() and write() in big chunks -- which is what
you do to access mlocate.db -- I'd expect any half-decent filesystem
to deliver almost the same raw performance as its underlying media.

> Anyway, seems at the end we do agree ;)

Yep :)

> Even so, what does the poor fellow with a laptop and NFS3 do? Which is
> a very common setup? A local cache would be needed in this case.
> In this case the caching is rather trivial, since it is just a copy
> operation and checking sizes & mtime. It can be made _perfect_ by
> adding a checksum at the beginning or end of the db.

Yes, I wasn't considering the whole picture: mlocate.db already *is*
a cache.  Caching a cache is trivial :-)

--
 // Bernardo Innocenti - Develer R&D dept.
\X/  http://www.develer.com/
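
P.S.: since it came up, here is a minimal, untested sketch of that
"trivial" caching step -- copy the db locally and refresh it only when
size or mtime differ, with a single stat() hitting the network fs.
The paths are made up for illustration (say the NFS-hosted db sits at
/net/share/mlocate.db and the warm copy at
/var/cache/mlocate/mlocate.db); a checksum in the db header would make
the check airtight, as you suggested.

/* Sketch only: refresh a local copy of an NFS-hosted mlocate.db
 * when size or mtime differ.  Paths are hypothetical. */
#include <stdio.h>
#include <sys/stat.h>
#include <utime.h>

#define REMOTE_DB "/net/share/mlocate.db"        /* hypothetical NFS path */
#define LOCAL_DB  "/var/cache/mlocate/mlocate.db"

static int copy_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out = in ? fopen(dst, "wb") : NULL;
    char buf[64 * 1024];                         /* big chunks, as discussed */
    size_t n;
    int err = (!in || !out) ? -1 : 0;

    if (!err)
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            if (fwrite(buf, 1, n, out) != n) {
                err = -1;
                break;
            }
    if (in)  fclose(in);
    if (out) fclose(out);
    return err;
}

int main(void)
{
    struct stat remote, local;
    struct utimbuf times;

    /* One stat() on the network fs: cheap even over NFS. */
    if (stat(REMOTE_DB, &remote) != 0) {
        perror(REMOTE_DB);
        return 1;
    }

    /* Local copy already matches size & mtime: nothing to do. */
    if (stat(LOCAL_DB, &local) == 0
        && local.st_size == remote.st_size
        && local.st_mtime == remote.st_mtime)
        return 0;

    if (copy_file(REMOTE_DB, LOCAL_DB) != 0)
        return 1;

    /* Stamp the copy with the remote mtime so the next run's
     * comparison still holds. */
    times.actime = remote.st_atime;
    times.modtime = remote.st_mtime;
    utime(LOCAL_DB, &times);
    return 0;
}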