On Wed, Aug 31, 2016 at 2:30 PM, Goncalo Borges <goncalo.borges@xxxxxxxxxxxxx> wrote:
> Here it goes:
>
> # xfs_info /var/lib/ceph/osd/ceph-78
> meta-data=/dev/sdu1              isize=2048   agcount=4, agsize=183107519 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732430075, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357631, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> # xfs_info /var/lib/ceph/osd/ceph-49
> meta-data=/dev/sde1              isize=2048   agcount=4, agsize=183105343 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732421371, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357627, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> # xfs_info /var/lib/ceph/osd/ceph-59
> meta-data=/dev/sdg1              isize=2048   agcount=4, agsize=183105343 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732421371, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357627, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

OK, they all look pretty similar, so there goes that theory ;) My thinking was
that if one or more of the filesystems had a smaller isize, it would not be
able to store as many extended attributes inline, and those attributes would
spill over into omap storage on just those OSDs. It's not that simple, but it
might be something similar given the ERANGE errors.

I've assigned the tracker (thanks) to myself and will follow through on it.
Please give me a little time to look further into the ERANGE errors and the
logs you provided (thanks again), and I'll update both here and on the tracker
when I know more.

--
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
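
[Editor's note: for anyone following the thread, a quick, purely illustrative
way to compare isize across every OSD data filesystem on a host in one pass.
It assumes the usual /var/lib/ceph/osd/ceph-* FileStore mount points; adjust
the glob if your layout differs.]

    for osd in /var/lib/ceph/osd/ceph-*; do
        # resolve the backing device for this OSD mount point
        dev=$(findmnt -n -o SOURCE --target "$osd")
        # print the mount, the device, and just the isize field from xfs_info
        printf '%s %s ' "$osd" "$dev"
        xfs_info "$osd" | grep -o 'isize=[0-9]*'
    done

Per the theory above, any OSD reporting a noticeably smaller isize would be
the first place to look for xattrs spilling over into omap.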