How fragmented is that file system?

Sent from my iPad

> On Oct 14, 2013, at 5:44 PM, Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx> wrote:
>
> This appears to be more of an XFS issue than a Ceph issue, but I've
> run into a problem where some of my OSDs failed because the filesystem
> was reported as full even though there was 29% free:
>
> [root@den2ceph001 ceph-1]# touch blah
> touch: cannot touch `blah': No space left on device
> [root@den2ceph001 ceph-1]# df .
> Filesystem     1K-blocks      Used Available Use% Mounted on
> /dev/sdc1      486562672 342139340 144423332  71% /var/lib/ceph/osd/ceph-1
> [root@den2ceph001 ceph-1]# df -i .
> Filesystem   Inodes   IUsed    IFree IUse% Mounted on
> /dev/sdc1  60849984 4097408 56752576    7% /var/lib/ceph/osd/ceph-1
> [root@den2ceph001 ceph-1]#
>
> I've tried remounting the filesystem with the inode64 option, as a
> few people recommended, but that didn't help (probably because it
> doesn't appear to be running out of inodes).
>
> This happened while I was on vacation, and I'm pretty sure it was
> caused by another OSD failing on the same node. I've been able to
> recover from the situation by bringing the failed OSD back online, but
> it's only a matter of time until I run into this issue again, since
> my cluster is still being populated.
>
> Any ideas on things I can try the next time this happens?
>
> Thanks,
> Bryan
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
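
The fragmentation question above can be checked with the standard XFS tools. A minimal sketch, assuming the OSD sits on /dev/sdc1 mounted at /var/lib/ceph/osd/ceph-1 as in the df output (these commands need root, and the exact numbers will of course differ per system):

```shell
# Summarize free-space fragmentation: a histogram of free extent sizes.
# On XFS, badly fragmented free space can cause ENOSPC even when df
# reports plenty of free blocks, because no contiguous extent large
# enough for the allocation exists. -r opens the device read-only, so
# this is safe on a mounted filesystem (output may be slightly stale).
xfs_db -r -c "freesp -s" /dev/sdc1

# Report the filesystem-wide file fragmentation factor:
xfs_db -r -c frag /dev/sdc1

# If fragmentation turns out to be the problem, the online defragmenter
# can be run against the mounted filesystem:
xfs_fsr -v /var/lib/ceph/osd/ceph-1
```

If `freesp -s` shows most free space concentrated in the smallest extent-size buckets, that would explain ENOSPC at 71% usage; otherwise the cause likely lies elsewhere (e.g. allocation-group exhaustion without inode64 taking effect until after a remount of fresh allocations).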