Fwd: pg inactive+remapped

Hi,

# ceph pg dump_stuck
PG_STAT  STATE                                    UP       UP_PRIMARY  ACTING  ACTING_PRIMARY
4.a3     activating+undersized+degraded+remapped  [7,0,8]  7           [7,8]   7
ok

# ceph pg map 4.a3
osdmap e640 pg 4.a3 (4.a3) -> up [7,0,8] acting [7,8]
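
To dig into why the PG is stuck in activating, querying it directly should
help; the "recovery_state" section of the output names the peering step the
PG is waiting on (exact fields vary by release):

# ceph pg 4.a3 query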

# ceph tell osd.7 config set debug_osd 20

But the log file for osd.7 is empty
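
If this cluster was deployed with cephadm (the daemon names in the quoted
status below suggest so), the OSDs log to journald by default and the files
under /var/log/ceph stay empty unless file logging is enabled. A sketch,
assuming a cephadm deployment:

# journalctl -u ceph-0a77af8a-414c-11ec-908a-005056b4f234@osd.7 -f

# or switch the daemons to writing log files:
# ceph config set global log_to_file true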

On Tue, Nov 16, 2021 at 12:28, YiteGu <ess_gyt@xxxxxx> wrote:

> 1. Run "ceph pg dump | grep activating" to look up the pg id.
> 2. Run "ceph pg map <pgid>" to find the primary osd of the pg.
> 3. Set debug_osd to 20 on the primary osd, then send me the primary osd's
> log file.
>
> ------------------------------
> YiteGu
> ess_gyt@xxxxxx
>
> ------------------ Original ------------------
> *From:* "Joffrey" <joff.au@xxxxxxxxx>;
> *Date:* Tue, Nov 16, 2021 07:16 PM
> *To:* "ceph-users"<ceph-users@xxxxxxx>;
> *Subject:*  pg inactive+remapped
>
> Hi,
>
> I don't understand why my Global Recovery Event never finishes...
> I have 3 hosts, and all OSDs and hosts are up. My pools are replica 3.
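>
> With only two OSDs in the acting set of a size-3 pool across 3 hosts,
> CRUSH may be failing to pick a third host for this PG. A quick sanity
> check of the rule and weights (the pool name is a placeholder):
>
> # ceph osd pool get <pool> crush_rule
> # ceph osd crush rule dump
> # ceph osd tree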
>
> # ceph status
>   cluster:
>     id:     0a77af8a-414c-11ec-908a-005056b4f234
>     health: HEALTH_WARN
>             Reduced data availability: 1 pg inactive
>             Degraded data redundancy: 1/1512 objects degraded (0.066%), 1 pg degraded, 1 pg undersized
>
>   services:
>     mon: 3 daemons, quorum preprod-ceph1-mon1,preprod-ceph1-mon3,preprod-ceph1-mon2 (age 2d)
>     mgr: preprod-ceph1-mon3.ssvflc(active, since 3d), standbys: preprod-ceph1-mon1.giaate, preprod-ceph1-mon2.yducxr
>     osd: 12 osds: 12 up (since 28m), 12 in (since 29m); 1 remapped pgs
>
>   data:
>     pools:   2 pools, 742 pgs
>     objects: 504 objects, 1.8 GiB
>     usage:   4.4 TiB used, 87 TiB / 92 TiB avail
>     pgs:     0.135% pgs not active
>              1/1512 objects degraded (0.066%)
>              741 active+clean
>              1   activating+undersized+degraded+remapped
>
>   io:
>     client:   0 B/s rd, 51 KiB/s wr, 2 op/s rd, 2 op/s wr
>
>   progress:
>     Global Recovery Event (39m)
>       [===========================.] (remaining: 3s)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



