On Tue, Jan 31, 2017 at 6:29 PM, Jorge Garcia <jgarcia@xxxxxxxxxxxx> wrote:
> I'm running into a problem on a really large directory of over a million
> files (don't ask, my users are clueless). Anyway, I'm trying to use Ceph
> as backup storage for their filesystem. As I rsync the directory, it
> started giving me "No space left on device" for this directory, even
> though the Ceph filesystem is at 66% and no individual OSD is fuller
> than 82%. If I go to the directory and try to do a "touch foo", it gives
> me the same "No space left on device", but if I go to the parent
> directory and copy a file there, it is fine. So I must be running into
> some per-directory limit. Any ideas of what I can do to fix this
> problem? This is Ceph 10.2.5.
>
> Thanks!
>
> Jorge

Your choices are:

A) Lift the limit on individual dirfrags (mds_bal_fragment_size_max). This
may help if you only need a little more slack, but the limit is there for a
reason: if you set it way higher you risk hitting painful issues with
oversized writes and reads to the OSDs. It would be wise to do your own
experiments on a separate filesystem to see how far you can push it (a
rough sketch of the relevant commands is at the end of this message).

B) Enable directory fragmentation. Although we aren't switching it on by
default until Luminous, it has historically not been very buggy.

C) Put the monster directory into e.g. a local filesystem on an RBD volume
for now, and move it back into CephFS once directory fragmentation is
officially non-experimental.

John
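
In case it helps, here is roughly what option A looks like on a Jewel
cluster. The MDS name ("a") and the value (200000) below are only
placeholders for illustration; the default limit is 100000 entries per
fragment, and how far you can safely raise it depends on your OSDs, so
please test on a throwaway filesystem first.

    # Check the current limit via the admin socket on the MDS host
    ceph daemon mds.a config get mds_bal_fragment_size_max

    # Raise it at runtime (takes effect immediately, lost on MDS restart)
    ceph tell mds.a injectargs '--mds_bal_fragment_size_max 200000'

    # To persist it across restarts, add it to ceph.conf on the MDS host:
    [mds]
        mds bal fragment size max = 200000

For option B, I believe the toggle on Jewel is along the lines of
"ceph fs set <fs name> allow_dirfrags true" (possibly requiring an extra
confirmation flag, since the feature is still marked experimental), but
double-check against the 10.2.5 documentation before enabling it.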