Re: cannot see recovery statistics + pgs stuck unclean

[Hrm, this email was in my spam folder.]

At a quick glance, you're probably running into some issues because
you've got two racks of very different weights. Things will probably
get better if you enable the optimal "crush tunables"; check out the
docs on that and see if you can switch to them.
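For example, something along these lines should do it (just a sketch; switching
profiles will remap some data and needs reasonably recent clients/kernels, so
read the tunables docs first):

    # see which tunables are currently in effect
    ceph osd getcrushmap -o /tmp/cm
    crushtool -d /tmp/cm -o /tmp/cm.txt
    head /tmp/cm.txt        # any non-default tunables appear at the top

    # switch to the optimal tunables profile
    # (if your release lacks this subcommand, the docs describe the
    # crushtool-based method)
    ceph osd crush tunables optimal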
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Fri, Dec 27, 2013 at 3:58 AM, Sreejith Vijayendran
<sreejith.vijayendran@xxxxxxxxxx> wrote:
> Hello,
>
> We have a 3-node cluster with OSDs created on all 3 nodes, and the
> replication size was set to 2.
> 1>> We are in a testing phase: we brought down all the OSDs on one
> particular node to test the migration of PGs to the other OSDs.
> However, the PGs were not being replicated to the other OSDs, and the
> status of the replication was not at all clear to us.
> Below is the ceph status at that point:
>
> ===============
> sreejith@sb1001:/var/run/ceph$ sudo ceph status
>     cluster 9b48b60c-bebe-4714-8a61-91ca5b388a17
>      health HEALTH_WARN 885 pgs degraded; 885 pgs stuck unclean; recovery
> 59/232 objects degraded (25.431%); 22/60 in osds are down
>      monmap e2: 3 mons at
> {sb1001=10.2.4.90:6789/0,sb1002=10.2.4.202:6789/0,sb1004=10.2.4.203:6789/0},
> election epoch 22, quorum 0,1,2 sb1001,sb1002,sb1004
>      osdmap e378: 68 osds: 38 up, 60 in
>       pgmap v4490: 1564 pgs, 25 pools, 1320 MB data, 116 objects
>             8415 MB used, 109 TB / 109 TB avail
>             59/232 objects degraded (25.431%)
>                  679 active+clean
>                  862 active+degraded
>                   23 active+degraded+remapped
> ===============
>
> We waited for around 4-5 hours, and the degraded percentage improved only
> marginally, from about 27% at the start to 25.431%.
>
> 2>> We then tweaked some OSD settings to speed up the recovery, namely
> osd_recovery_threads, osd_recovery_max_active, osd_recovery_max_chunk,
> osd_max_backfills, osd_backfill_retry_interval, etc., since for now we were
> only concerned with getting the data rebalanced. But this did not improve
> things at all overnight.
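> For reference, we injected these at runtime roughly as follows (the values
> shown here are illustrative, not necessarily the exact ones we used):
>
>     sudo ceph tell osd.* injectargs '--osd-max-backfills 10'
>     sudo ceph tell osd.* injectargs '--osd-recovery-max-active 15'
>     sudo ceph tell osd.* injectargs '--osd-recovery-threads 2'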
>
> 3>> We then manually started all the OSDs on that node and the status came
> back up, but we could see that 23 PGs were still stuck unclean, in the
> 'active+remapped' state (status output below).
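> The OSDs were started with the stock init scripts, along these lines (the
> exact form depends on the init system, so treat this as illustrative):
>
>     sudo service ceph start osd.<id>      # sysvinit-style, per OSD
>     sudo start ceph-osd id=<id>           # upstart equivalent on Ubuntu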
>
> =================
> sreejith@sb1001:~$ sudo ceph status
> [sudo] password for sreejith:
>     cluster 9b48b60c-bebe-4714-8a61-91ca5b388a17
>      health HEALTH_WARN 23 pgs stuck unclean
>      monmap e2: 3 mons at
> {sb1001=10.2.4.90:6789/0,sb1002=10.2.4.202:6789/0,sb1004=10.2.4.203:6789/0},
> election epoch 22, quorum 0,1,2 sb1001,sb1002,sb1004
>      osdmap e382: 68 osds: 61 up, 61 in
>       pgmap v4931: 1564 pgs, 25 pools, 1320 MB data, 116 objects
>             7931 MB used, 110 TB / 110 TB avail
>                 1541 active+clean
>                   23 active+remapped
> =================
>
> 'ceph pg dump_stuck unclean' showed that all of the stuck PGs were mapped
> onto the same 4 OSDs, and that no other PGs were on those OSDs.
> So:
> 4>> We took those OSDs out of the cluster using 'ceph osd out {id}'. The
> number of unclean PGs then increased to 52, and even after marking the OSDs
> back in, the situation did not improve.
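> Roughly, the commands for this step were (OSD ids elided):
>
>     sudo ceph pg dump_stuck unclean
>     sudo ceph osd out <id>      # repeated for each of the 4 OSDs
>     sudo ceph osd in <id>       # later, to bring them back in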
>
> =============
> root@sb1001:/home/sreejith# ceph health detail
> HEALTH_WARN 52 pgs stuck unclean
> pg 9.63 is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 11.61 is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 10.62 is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 13.5f is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 15.5d is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 14.5e is stuck unclean since forever, current state active+remapped, last
> acting [47,7]
> pg 9.47 is stuck unclean for 530.594604, current state active+remapped, last
> acting [66,43]
> pg 7.49 is stuck unclean for 530.594593, current state active+remapped, last
> acting [66,43]
> pg 5.4b is stuck unclean for 530.594481, current state active+remapped, last
> acting [66,43]
> pg 3.4d is stuck unclean for 530.594449, current state active+remapped, last
> acting [66,43]
> pg 11.45 is stuck unclean for 530.594635, current state active+remapped,
> last acting [66,43]
> pg 13.43 is stuck unclean for 530.594654, current state active+remapped,
> last acting [66,43]
> pg 15.41 is stuck unclean for 530.594695, current state active+remapped,
> last acting [66,43]
> pg 6.4a is stuck unclean for 530.594366, current state active+remapped, last
> acting [66,43]
> pg 10.46 is stuck unclean for 530.594387, current state active+remapped,
> last acting [66,43]
> pg 14.42 is stuck unclean for 530.594422, current state active+remapped,
> last acting [66,43]
> pg 4.4c is stuck unclean for 530.594341, current state active+remapped, last
> acting [66,43]
> pg 12.44 is stuck unclean for 530.594361, current state active+remapped,
> last acting [66,43]
> pg 8.48 is stuck unclean for 530.594294, current state active+remapped, last
> acting [66,43]
> pg 8.30 is stuck unclean for 175428.682512, current state active+remapped,
> last acting [51,45]
> pg 0.38 is stuck unclean for 175428.682498, current state active+remapped,
> last acting [51,45]
> pg 7.31 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 5.33 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 3.35 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 1.37 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 6.32 is stuck unclean for 154614.490724, current state active+remapped,
> last acting [51,45]
> pg 2.36 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 4.34 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 9.24 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 8.25 is stuck unclean for 482.144359, current state active+remapped, last
> acting [48,24]
> pg 5.28 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 4.29 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 12.21 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 0.2d is stuck unclean for 482.144315, current state active+remapped, last
> acting [48,24]
> pg 1.2c is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 13.20 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 3.2a is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 11.22 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 10.23 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 2.2b is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 9.2f is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 11.2d is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 13.2b is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 15.29 is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 10.2e is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 14.2a is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 7.26 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 6.27 is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 12.2c is stuck unclean since forever, current state active+remapped, last
> acting [51,45]
> pg 14.1f is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 15.1e is stuck unclean since forever, current state active+remapped, last
> acting [48,24]
> pg 12.60 is stuck unclean for 175428.710580, current state active+remapped,
> last acting [47,7]
> root@sb1001:/home/sreejith#
> =============
>
> At this point we could see that some other OSDs had also appeared in the
> acting sets of the unclean PGs, and the total number of unclean PGs had
> increased to 52.
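> For any one of these PGs, the current up vs. acting mapping can be checked
> with, for example:
>
>     sudo ceph pg map 9.63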
>
>
> So we have two questions:
> 1> Why aren't the unclean PGs getting cleared, and how can we debug this
> further?
> 2> How can we check the recovery/backfill status of the OSDs? ('ceph
> status', 'ceph -w', and the other commands we run, listed below, all show
> the same data: how many OSDs are up and in, and how many PGs are clean or
> remapped. There is no indication of recovery progress, if any.)
> I looked at this bug:
> http://tracker.ceph.com/issues/6736
> where the user could see status updates (peering, recovery, etc.), but for
> us these are not shown.
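> For reference, the monitoring commands we have been running are roughly:
>
>     sudo ceph status
>     sudo ceph -w
>     sudo ceph health detail
>     sudo ceph pg dump_stuck unclean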
>
>
> More details:
> All of our OSDs are 1.7 TB in size.
> We have 3 nodes in the cluster (each running one mon instance plus OSDs).
> Replication is set to the default of 2.
> ceph version 0.72.2
>
> Attaching the crush rule set, osd dump, and pg dump (captured as sketched
> below).
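> These were captured roughly as follows (output file names are illustrative):
>
>     sudo ceph osd getcrushmap -o crushmap.bin
>     crushtool -d crushmap.bin -o crushmap.txt
>     sudo ceph osd dump > osd-dump.txt
>     sudo ceph pg dump > pg-dump.txt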
>
>
> --
>
> Regards,
> Sreejith
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



