Re: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest"

I have updated the tracker with some log extracts as I seem to be hitting this or a very similar issue.
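
In case it helps anyone else hitting this, the inconsistent PG and the object it complains about should be visible with something along these lines (the pg id and osd id below are only placeholders, not the real ones from my cluster):

# ceph health detail | grep inconsistent
# rados list-inconsistent-obj 5.3d --format=json-pretty
# grep omap_digest /var/log/ceph/ceph-osd.78.log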

I was unsure of the correct ceph-objectstore-tool syntax for extracting that information.
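
This is the general shape I was attempting (the pg id and object name are placeholders, and as far as I understand the OSD needs to be stopped before running it), so please correct me if the invocation is wrong:

# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-78 \
      --journal-path /var/lib/ceph/osd/ceph-78/journal \
      --pgid 5.3d --op list
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-78 \
      --journal-path /var/lib/ceph/osd/ceph-78/journal \
      '<object json from --op list>' list-omap
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-78 \
      --journal-path /var/lib/ceph/osd/ceph-78/journal \
      '<object json from --op list>' list-attrs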

On Wed, Aug 31, 2016 at 5:56 AM, Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:

On Wed, Aug 31, 2016 at 2:30 PM, Goncalo Borges <goncalo.borges@xxxxxxxxxxxxx> wrote:
> Here it goes:
>
> # xfs_info /var/lib/ceph/osd/ceph-78
> meta-data=""             isize=2048   agcount=4, agsize=183107519 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732430075, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357631, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> # xfs_info /var/lib/ceph/osd/ceph-49
> meta-data=""             isize=2048   agcount=4, agsize=183105343 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732421371, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357627, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> # xfs_info /var/lib/ceph/osd/ceph-59
> meta-data=""             isize=2048   agcount=4, agsize=183105343 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732421371, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357627, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

OK, they all look pretty similar, so there goes that theory ;)

I thought that if one or more of the filesystems had a smaller isize, they would not
be able to store as many extended attributes, and those attributes would spill over
into omap storage only on those OSDs. It's not that simple, but it might be something
similar, given the ERANGE errors.
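
If it would help narrow it down, one quick check might be to compare the xattrs on
the on-disk copy of that object on each of the three OSDs, something like the
following (paths and the object name are illustrative; the on-disk file name usually
has a hash suffix appended to the object name):

# find /var/lib/ceph/osd/ceph-78/current -name '<object name>*'
# getfattr -d -e hex '<path found above>'

If one replica is missing attributes the others have, or lists far fewer of them,
that would point towards the spill-over/ERANGE idea.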

I've assigned the tracker issue to myself (thanks) and will follow through on it.
Please give me a little time to look further into the ERANGE errors and the logs
you provided (thanks again), and I'll update here and on the tracker when I know
more.

--
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
