Re: cephfs degraded on ceph luminous 12.2.2

Hi,
It took quite some time to recover the PGs, and indeed the problem with the
mds instances was due to the activating PGs. Once they cleared, the
fs went back to its original state.
I had to restart some OSDs a few times, though, in order to get all the
PGs activated. I didn't hit the limit on the max PGs per OSD, but I'm close
to it, so I have set it to 300 just to be safe (AFAIK that was the limit
in prior releases of Ceph; I'm not sure why it was lowered to 200).
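For the record, something along these lines should do it (a sketch only,
assuming a plain ceph.conf based Luminous setup without a central config
store, and systemd-managed daemons; the exact set of daemons that honour
the option may differ, so adjust to your deployment):

    # ceph.conf, [global] section, on every node running a mon, mgr or osd
    [global]
    mon_max_pg_per_osd = 300

    # push the new value to running mons/osds; restarting the daemons is
    # the safer fallback if injectargs does not take effect everywhere
    # (the mgr reads ceph.conf at startup, so restart it to pick it up)
    ceph tell mon.* injectargs '--mon_max_pg_per_osd 300'
    ceph tell osd.* injectargs '--mon_max_pg_per_osd 300'

    # how many PGs each OSD actually carries (PGS column)
    ceph osd df

    # PGs that are still not active (e.g. stuck in 'activating')
    ceph pg dump_stuck inactive

    # kick a specific OSD whose PGs stay in 'activating'
    systemctl restart ceph-osd@<ID>
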
Thanks,

	Alessandro

On Tue, 2018-01-09 at 09:01 +0100, Burkhard Linke wrote:
> Hi,
> 
> 
> On 01/08/2018 05:40 PM, Alessandro De Salvo wrote:
> > Thanks Lincoln,
> >
> > indeed, as I said, the cluster is recovering, so there are pending ops:
> >
> >
> >     pgs:     21.034% pgs not active
> >              1692310/24980804 objects degraded (6.774%)
> >              5612149/24980804 objects misplaced (22.466%)
> >              458 active+clean
> >              329 active+remapped+backfill_wait
> >              159 activating+remapped
> >              100 active+undersized+degraded+remapped+backfill_wait
> >              58  activating+undersized+degraded+remapped
> >              27  activating
> >              22  active+undersized+degraded+remapped+backfilling
> >              6   active+remapped+backfilling
> >              1   active+recovery_wait+degraded
> >
> >
> > If it's just a matter of waiting for the system to complete the recovery 
> > that's fine, I'll deal with it, but I was wondering if there is a 
> > more subtle problem here.
> >
> > OK, I'll wait for the recovery to complete and see what happens, thanks.
> 
> The blocked MDS might be caused by the 'activating' PGs. Do you have a 
> warning about too many PGs per OSD? If that is the case, 
> activating/creating/peering/whatever on the affected OSDs is blocked, 
> which leads to blocked requests etc.
> 
> You can resolve this by increasing the number of allowed PGs per OSD 
> ('mon_max_pg_per_osd'). AFAIK it needs to be set for mon, mgr and osd 
> instances. There has also been some discussion about this setting on the 
> mailing list in the last few weeks.
> 
> Regards,
> Burkhard


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


