Re: XFS no space left on device

Hey,

http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html 
"Each AG can be up to one terabyte in size (512 bytes * 2^31), regardless of the underlying device's sector size."
"The only global information maintained by the first AG (primary) is free space across the filesystem and total inode counts"

Check out this link: http://osvault.blogspot.de/2011/03/fixing-1tbyte-inode-problem-in-xfs-file.html , maybe your first AG (0) is full.
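If you want to confirm that, xfs_db can show the free-space picture per AG in read-only mode (I'm assuming the device path from your output below; adjust it to your OSD):

```shell
# free-extent summary for the whole filesystem
xfs_db -r -c "freesp -s" /dev/mapper/disk23p1

# free-extent summary restricted to AG 0
xfs_db -r -c "freesp -s -a 0" /dev/mapper/disk23p1

# free block counter straight from the AGF header of AG 0
xfs_db -r -c "agf 0" -c "print freeblks" /dev/mapper/disk23p1
```

If AG 0 reports (almost) no free blocks while the other AGs still have plenty, that matches the problem described in the blog post.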

Regards,
Igor.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Василий Ангапов
> Sent: Tuesday, October 25, 2016 2:52 PM
> To: ceph-users
> Subject: Re:  XFS no space left on device
> 
> Here is a bit more information about that XFS:
> 
> root@ed-ds-c178:[~]:$ xfs_info /dev/mapper/disk23p1
> meta-data=/dev/mapper/disk23p1   isize=2048   agcount=6, agsize=268435455 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=1465130385, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> root@ed-ds-c178:[~]:$ xfs_db /dev/mapper/disk23p1
> xfs_db> frag
> actual 25205642, ideal 22794438, fragmentation factor 9.57%
> 
> 2016-10-25 14:59 GMT+03:00 Василий Ангапов <angapov@xxxxxxxxx>:
> > Actually, all OSDs are already mounted with the inode64 option. Otherwise I
> > could not write beyond 1TB.
> >
> > 2016-10-25 14:53 GMT+03:00 Ashley Merrick <ashley@xxxxxxxxxxxxxx>:
> >> Sounds like the 32-bit inode limit; if you mount with -o inode64 (not 100% sure how
> you would do that in ceph), it would allow data to continue to be written.
> >>
> >> ,Ashley
> >>
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> Of ??????? ???????
> >> Sent: 25 October 2016 12:38
> >> To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> >> Subject:  XFS no space left on device
> >>
> >> Hello,
> >>
> >> I have a Ceph 10.2.1 cluster with 10 nodes, each having 29 * 6TB OSDs.
> >> Yesterday I found that 3 OSDs were down and out at 89% space utilization.
> >> The logs show:
> >> 2016-10-24 22:36:37.599253 7f8309c5e800  0 ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269), process ceph-osd, pid 2602081
> >> 2016-10-24 22:36:37.600129 7f8309c5e800  0 pidfile_write: ignore empty --pid-file
> >> 2016-10-24 22:36:37.635769 7f8309c5e800  0 filestore(/var/lib/ceph/osd/ceph-123) backend xfs (magic 0x58465342)
> >> 2016-10-24 22:36:37.635805 7f8309c5e800 -1 genericfilestorebackend(/var/lib/ceph/osd/ceph-123) detect_features: unable to create /var/lib/ceph/osd/ceph-123/fiemap_test: (28) No space left on device
> >> 2016-10-24 22:36:37.635814 7f8309c5e800 -1 filestore(/var/lib/ceph/osd/ceph-123) _detect_fs: detect_features error: (28) No space left on device
> >> 2016-10-24 22:36:37.635818 7f8309c5e800 -1 filestore(/var/lib/ceph/osd/ceph-123) FileStore::mount: error in _detect_fs: (28) No space left on device
> >> 2016-10-24 22:36:37.635824 7f8309c5e800 -1 osd.123 0 OSD:init: unable to mount object store
> >> 2016-10-24 22:36:37.635827 7f8309c5e800 -1 ** ERROR: osd init failed: (28) No space left on device
> >>
> >> root@ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ df -h /var/lib/ceph/osd/ceph-123
> >> Filesystem            Size  Used Avail Use% Mounted on
> >> /dev/mapper/disk23p1  5.5T  4.9T  651G  89% /var/lib/ceph/osd/ceph-123
> >>
> >> root@ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ df -i /var/lib/ceph/osd/ceph-123
> >> Filesystem              Inodes    IUsed     IFree IUse% Mounted on
> >> /dev/mapper/disk23p1 146513024 22074752 124438272   16% /var/lib/ceph/osd/ceph-123
> >>
> >> root@ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ touch 123
> >> touch: cannot touch ‘123’: No space left on device
> >>
> >> root@ed-ds-c178:[/var/lib/ceph/osd/ceph-123]:$ grep ceph-123 /proc/mounts
> >> /dev/mapper/disk23p1 /var/lib/ceph/osd/ceph-123 xfs rw,noatime,attr2,inode64,noquota 0 0
> >>
> >> The same situation holds for all three down OSDs. The OSD can be unmounted
> >> and mounted again without problems:
> >> root@ed-ds-c178:[~]:$ umount /var/lib/ceph/osd/ceph-123
> >> root@ed-ds-c178:[~]:$ mount /var/lib/ceph/osd/ceph-123
> >> root@ed-ds-c178:[~]:$ touch /var/lib/ceph/osd/ceph-123/123
> >> touch: cannot touch ‘/var/lib/ceph/osd/ceph-123/123’: No space left on device
> >>
> >> xfs_repair reports no errors for the filesystem.
> >>
> >> The kernel is:
> >> root@ed-ds-c178:[~]:$ uname -r
> >> 4.7.0-1.el7.wg.x86_64
> >>
> >> What else can I do to rectify that situation?
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com