Re: Full OSD with 29% free

The filesystem isn't as full now, but the fragmentation is pretty low:

[root@den2ceph001 ~]# df /dev/sdc1
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdc1            486562672 270845628 215717044  56% /var/lib/ceph/osd/ceph-1
[root@den2ceph001 ~]# xfs_db -c frag -r /dev/sdc1
actual 3481543, ideal 3447443, fragmentation factor 0.98%
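For what it's worth, when XFS returns ENOSPC while df still shows free blocks, the usual suspects are inode exhaustion or badly fragmented free space inside the allocation groups. A quick first-pass check that separates the two is below; the target path is just an example (point it at the OSD mount, e.g. /var/lib/ceph/osd/ceph-1), and it assumes GNU stat:

```shell
#!/bin/sh
# Sketch: distinguish "out of blocks" from "out of inodes" for a path.
# TARGET is an example placeholder; substitute the affected mount point.
TARGET=.

# GNU stat filesystem formats: %b total blocks, %f free blocks,
# %c total inodes, %d free inodes.
set -- $(stat -f -c '%b %f %c %d' "$TARGET")
echo "blocks: $2 free of $1"
echo "inodes: $4 free of $3"

# If both show plenty free but writes still fail with ENOSPC, the next
# thing to look at is the per-allocation-group free space histogram
# (read-only, needs root), e.g.:
#   xfs_db -r -c freesp /dev/sdc1
# Lots of free extents but all of them tiny suggests free-space
# fragmentation rather than genuine exhaustion.
```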

Bryan

On Mon, Oct 14, 2013 at 4:35 PM, Michael Lowe <j.michael.lowe@xxxxxxxxx> wrote:
>
> How fragmented is that file system?
>
> Sent from my iPad
>
> > On Oct 14, 2013, at 5:44 PM, Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx> wrote:
> >
> > This appears to be more of an XFS issue than a ceph issue, but I've
> > run into a problem where some of my OSDs failed because the filesystem
> > was reported as full even though there was 29% free:
> >
> > [root@den2ceph001 ceph-1]# touch blah
> > touch: cannot touch `blah': No space left on device
> > [root@den2ceph001 ceph-1]# df .
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/sdc1            486562672 342139340 144423332  71% /var/lib/ceph/osd/ceph-1
> > [root@den2ceph001 ceph-1]# df -i .
> > Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> > /dev/sdc1            60849984 4097408 56752576    7% /var/lib/ceph/osd/ceph-1
> > [root@den2ceph001 ceph-1]#
> >
> > I've tried remounting the filesystem with the inode64 option, as a
> > few people have recommended, but that didn't help (probably because
> > it doesn't appear to be running out of inodes).
> >
> > This happened while I was on vacation, and I'm pretty sure it was
> > caused by another OSD failing on the same node.  I've been able to
> > recover from the situation by bringing the failed OSD back online,
> > but it's only a matter of time until I run into this issue again,
> > since my cluster is still being populated.
> >
> > Any ideas on things I can try the next time this happens?
> >
> > Thanks,
> > Bryan
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
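One footnote on the inode64 attempt described above: on XFS kernels of that era, inode64 passed via `-o remount` was, if memory serves, silently ignored, so it may need a full unmount/mount cycle to take effect, and even then it only influences where newly allocated inodes are placed. A sketch, assuming the device and mount point from this thread:

```shell
# Assumption: device and mount point are taken from the thread above.
# Stop the OSD first, then unmount and mount fresh rather than remount:
umount /var/lib/ceph/osd/ceph-1
mount -o inode64 /dev/sdc1 /var/lib/ceph/osd/ceph-1

# To persist across reboots, the fstab entry would look roughly like:
#   /dev/sdc1  /var/lib/ceph/osd/ceph-1  xfs  inode64  0 0
```

Given that the `df -i` output shows only 7% inode use, inode64 was unlikely to help here anyway: it addresses inode allocation placement, not data block exhaustion.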