On Wed, Sep 19, 2012 at 2:12 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Wed, Sep 19, 2012 at 2:05 PM, Tren Blackburn <tren@xxxxxxxxxxxxxxx> wrote:
>
>> Greg: It's difficult to tell you that. I'm rsyncing 2 volumes from our
>> filers. Each base directory on each filer mount has approximately 213
>> directories, each directory under that has anywhere from 3,000 to
>> 5,000 directories (a very loose approximation; roughly 850,000
>> directories per filer mount), and each of those directories contains
>> files.
>
> Ah, directories are larger -- Sage, do you think they're enough bigger
> to make up that much extra memory usage?
>
>> We have many, many files here. We're doing this to see how CephFS
>> handles lots of files. We are coming from MooseFS, whose master
>> metalogger process eats lots of RAM, so we're hoping that Ceph is a
>> bit lighter on us.
>>
>> Sage: The memory the MDS is using is only a cache? There should be no
>> problem restarting the MDS server while activity is going on? I should
>> probably change the limit on the non-active MDS servers first, and
>> then on the active one, and hope it fails over cleanly?
>
> Yep, that should work fine, with the obvious caveat that your
> filesystem will become inaccessible if the MDS is down long enough for
> clients to exceed their timeouts (no metadata loss though, if all
> clients remain active until the MDS comes back up).

I have 3 MDSes (active/standby setup). Shouldn't the MDS fail over to
another node when I restart the process? I'm not sure what the best
method for restarting just the MDS is, and can it be done without
forcing a failover?

t.
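
For reference, a minimal sketch of the cache-limit change under
discussion, assuming the limit in question is the MDS inode cache size
(mds cache size counts inodes, not bytes; the value shown is the
default of that era and purely illustrative):

    [mds]
        ; cap the number of inodes the MDS keeps cached; lowering
        ; this trades resident memory for more metadata round trips
        ; to the OSDs
        mds cache size = 100000

Rolling this out to the standbys first and the active MDS last, as
planned above, keeps a freshly configured standby ready to take over.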
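
And a sketch of restarting a single MDS daemon and watching the
standby take over, assuming a sysvinit-style deployment with daemons
named mds.a, mds.b, mds.c (the names are illustrative):

    # restart only the MDS daemon on this host; if it was the active
    # MDS, a standby should go active and this one should rejoin as a
    # standby, so restarting the active MDS does imply a failover
    sudo /etc/init.d/ceph restart mds.a

    # confirm which MDS is active and that the cluster is healthy
    ceph mds stat
    ceph health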