Re: ceph PGs issues

You have incomplete PGs, which means you have inactive data: the data simply isn't there.

This typically only happens after multiple concurrent disk failures or something similar, so I think there is some missing information here.
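As a first step (a sketch using generic commands, nothing specific to your cluster), querying one of the incomplete PGs usually shows what it is blocked on:

$ ceph pg dump_stuck inactive     # list the stuck/inactive PGs
$ ceph pg <pgid> query            # substitute one of the incomplete PG ids

In the query output, the recovery_state section (fields like "blocked" or "down_osds_we_would_probe") tells you which OSDs the PG is still waiting for.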

>            1 osds exist in the crush map but not in the osdmap

Having an OSD in the crush map but not in the osdmap seems like a red flag.
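To pin down which entry that is (a sketch; the stray item should be visible in the CRUSH view but missing from the osdmap):

$ ceph osd tree     # CRUSH view; a stray OSD typically shows up with status DNE
$ ceph osd ls       # OSD ids known to the osdmap, for comparison

If it turns out to be a leftover from a removed OSD, ceph osd crush remove osd.<id> will clean it up, but only do that once you are certain the OSD is really gone.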

>            mons xyz01,xyz02 are low on available space

Your mons are probably filling up while the cluster sits in the warn state (the mon store is not trimmed while PGs are unhealthy).
This can be problematic for recovery.
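To keep an eye on this (a sketch, assuming the default mon data path):

$ du -sh /var/lib/ceph/mon/*/store.db     # size of each local mon store
$ ceph tell mon.<id> compact              # compact a mon store to reclaim some space

Compaction only buys time; the store will keep growing until the PGs are healthy again.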

I think you will be more likely to receive useful suggestions if you provide things like which version of ceph you are using ($ ceph -v), any major events that led to this, pool ($ ceph osd pool ls detail) and osd ($ ceph osd tree) topology, as well as maybe the detailed health output ($ ceph health detail).

Given how large some of this output may be (the osd tree in particular), you may want to paste it to a pastebin and link it here; something like the sketch below would gather it all in one go.
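(A sketch; all of these are read-only and safe to run on a degraded cluster.)

$ ceph -v                      > ceph-version.txt
$ ceph health detail           > health-detail.txt
$ ceph osd pool ls detail      > pools.txt
$ ceph osd tree                > osd-tree.txt
$ ceph pg dump_stuck inactive  > stuck-pgs.txt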

Reed

> On Jun 15, 2021, at 2:48 AM, Aly, Adel <adel.aly@xxxxxxxx> wrote:
> 
> Dears,
> 
> We have a ceph cluster with 4096 PGs, out of which +100 PGs are not active+clean.
> 
> On top of the ceph cluster, we have a ceph FS, with 3 active MDS servers.
> 
> It seems that we can’t get all the files out of it because of the affected PGs.
> 
> The object store has more than 400 million objects.
> 
> When we do “rados -p cephfs-data ls”, the listing stops (hangs) after listing +11 million objects.
> 
> When we try to access an object which we can’t copy, the rados command hangs forever:
> 
> ls -i <filename>
> 2199140525188
> 
> printf "%x\n" 2199140525188
> 20006fd6484
> 
> rados -p cephfs-data stat 20006fd6484.00000000
> (hangs here)
> 
> This is the current status of the ceph cluster:
>    health: HEALTH_WARN
>            1 MDSs report slow metadata IOs
>            1 MDSs report slow requests
>            1 MDSs behind on trimming
>            1 osds exist in the crush map but not in the osdmap
>            *Reduced data availability: 22 pgs inactive, 22 pgs incomplete*
>            240324 slow ops, oldest one blocked for 391503 sec, daemons [osd.144,osd.159,osd.180,osd.184,osd.242,osd.271,osd.275,osd.278,osd.280,osd.332]... have slow ops.
>            mons xyz01,xyz02 are low on available space
> 
>  services:
>    mon: 4 daemons, quorum abc001,abc002,xyz02,xyz01
>    mgr: abc002(active), standbys: xyz01, xyz02, abc001
>    mds: cephfs-3/3/3 up  {0=xyz02=up:active,1=abc001=up:active,2=abc002=up:active}, 1 up:standby
>    osd: 421 osds: 421 up, 421 in; 7 remapped pgs
> 
>  data:
>    pools:   2 pools, 4096 pgs
>    objects: 403.4 M objects, 846 TiB
>    usage:   1.2 PiB used, 1.4 PiB / 2.6 PiB avail
>    pgs:     0.537% pgs not active
>             3968 active+clean
>             96   active+clean+scrubbing+deep+repair
>             15   incomplete
>             10   active+clean+scrubbing
>             7    remapped+incomplete
> 
>  io:
>    client:   89 KiB/s rd, 13 KiB/s wr, 34 op/s rd, 1 op/s wr
> 
> The 100+ PGs have been in this state for a long time already.
> 
> Sometimes when we try to copy some files, the rsync process hangs and we can't kill it; from the process stack, it seems to be hanging on a ceph I/O operation.
> 
> # cat /proc/51795/stack
> [<ffffffffc184206d>] ceph_mdsc_do_request+0xfd/0x280 [ceph]
> [<ffffffffc181e92e>] __ceph_do_getattr+0x9e/0x200 [ceph]
> [<ffffffffc181eb08>] ceph_getattr+0x28/0x100 [ceph]
> [<ffffffffab853689>] vfs_getattr+0x49/0x80
> [<ffffffffab8537b5>] vfs_fstatat+0x75/0xc0
> [<ffffffffab853bc1>] SYSC_newlstat+0x31/0x60
> [<ffffffffab85402e>] SyS_newlstat+0xe/0x10
> [<ffffffffabd93f92>] system_call_fastpath+0x25/0x2a
> [<ffffffffffffffff>] 0xffffffffffffffff
> 
> # cat /proc/51795/mem
> cat: /proc/51795/mem: Input/output error
> 
> Any idea on how to move forward with debugging and fixing this issue so we can get the data out of the ceph FS?
> 
> Thank you in advance.
> 
> Kind regards,
> adel
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



