Re: Is there a way to cache a file system listing?

Theodore Tso wrote:

On Wed, Jun 21, 2006 at 03:15:38PM -0400, Kevin Strong wrote:
Justin Piszcz wrote:
I have a filesystem with roughly 600,000 files on it, and every time a new client rsyncs this tree (after 10-20 minutes, once the cache has expired), it takes 5-10 minutes to re-traverse the tree during the new rsync. Is there a way, other than running find /path every minute or so, to keep the listing in memory so the rsyncs run much faster?
I second this. I am having the same exact issue right now. Any suggestions would be appreciated.

Are you sure the time is actually re-traversing the tree, and not
calculating the per-file checksums?  Running an rsync server does
present a big load on the server, but traditionally it's not been a
matter of whether or not the directories have been cached, but rather
the fact that rsync doesn't cache the per-file checksums, and has to
recalculate them all each time a new client connects.

						- Ted


For me, this is on the client side, not the server side. Sorry, I should have clarified. It's definitely an issue with the directory tree falling out of the FS cache. Are you aware of a solution to this?

Kevin Strong
Criminal Information Network, Inc.
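
For what it's worth, the periodic-walk workaround Justin mentions (running find over the tree so its metadata stays cached) is easy to script. Below is a minimal sketch in Python; the tree path and interval are placeholders rather than values from this thread, and it only stats entries to keep dentries and inodes warm, it does nothing rsync-specific.

#!/usr/bin/env python
# Minimal cache-warming sketch: walk the tree and lstat() every entry so
# the kernel keeps the directory metadata (dentries/inodes) in memory
# between rsync runs.  TREE and INTERVAL are illustrative placeholders.

import os
import time

TREE = "/path/to/rsync/tree"   # hypothetical path; substitute the real tree
INTERVAL = 300                 # seconds between walks (assumed value)

def warm(root):
    """stat every directory entry under root; return how many were touched."""
    touched = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                os.lstat(os.path.join(dirpath, name))
                touched += 1
            except OSError:
                pass   # entry disappeared mid-walk; ignore it
    return touched

if __name__ == "__main__":
    while True:
        print("warmed %d entries under %s" % (warm(TREE), TREE))
        time.sleep(INTERVAL)

Whether this actually helps depends on there being enough free memory for the kernel to keep the metadata cached; if something else is evicting it, the extra walks just add load.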

_______________________________________________

Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
