On Wed, Sep 19, 2012 at 2:05 PM, Tren Blackburn <tren@xxxxxxxxxxxxxxx> wrote:
> Greg: It's difficult to tell you that. I'm rsyncing 2 volumes from our
> filers. Each base directory on each filer mount has approximately 213
> directories, and each directory under that has anywhere from 3000 - 5000
> directories (a very loose approximation; roughly 850,000 directories per
> filer mount), and then each of those directories contains files.

Ah, directories are larger. Sage, do you think they're enough bigger to make
up that much extra memory usage?

> We have many, many files here. We're doing this to see how CephFS handles
> lots of files. We are coming from MooseFS, whose master and metalogger
> processes eat lots of RAM, so we're hoping that Ceph is a bit lighter on us.
>
> Sage: The memory the MDS is using is only a cache? There should be no
> problem restarting the MDS server while activity is going on? I should
> probably change the limit for the non-active MDS servers first, and
> then the active one and hope it fails over cleanly?

Yep, that should work fine, with the obvious caveat that your filesystem will
become inaccessible if the MDS is down long enough for clients to exceed
their timeouts (no metadata loss, though, as long as all clients remain
active until the MDS comes back up).

The cache needs to be large enough to hold any directory you happen to be
working in (for now; there is unstable code that fixes that issue), so it
should be pretty large, but all updates reach stable storage fairly quickly
(before anybody except the requesting client is allowed to see the change;
until that point the change is held in memory on both the requesting client
and the MDS).
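To put very rough numbers on it: 213 top-level directories times 3,000 - 5,000
subdirectories each is about 640,000 - 1,065,000 directories, so your estimate
of ~850,000 per filer mount (roughly 1.7 million across both mounts) looks
about right. If each cached dentry-plus-inode costs somewhere on the order of
a kilobyte or two of MDS memory (a ballpark guess on my part, not a measured
figure), then a cache limit of a few hundred thousand entries already works
out to several hundred MB of RAM before any extra per-directory overhead.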
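For concreteness, here is roughly how I would stage the change. This is only a
sketch: it assumes the inode-count "mds cache size" option and the sysvinit
"service ceph" wrapper, and the mds.a/b/c daemon IDs and the 300000 figure are
made up for the example, so substitute whatever matches your cluster:

    # In ceph.conf on each MDS host:
    [mds]
        mds cache size = 300000    # inodes to keep cached (default is 100000)

    # Restart the standby MDS daemons first...
    service ceph restart mds.b
    service ceph restart mds.c
    # ...then the active one, so a standby with the new limit takes over:
    service ceph restart mds.a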