Re: Sudden increase in "objects misplaced"

Hi Greg,

Firstly, many thanks for your advice.

I'm perplexed as to why the CRUSH map is upset; the host names look the
same, and each node has a fixed IP on a single bond0 interface.
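
In case it helps anyone comparing notes, a quick way to sanity-check this
is to compare the host buckets in the crush tree against each node's short
hostname, e.g.:

    ceph osd tree | grep host
    ssh ceph1 hostname -s

(ceph1 being one of our nodes; repeat for the rest.)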

Perhaps the problems were an artefact of having "nodown" set?

As you suggested, I've unset "osd nodown" and am letting the cluster
rebalance. It looks like it's moving in the right direction, so
hopefully the problem will resolve...

    osd: 454 osds: 453 up, 453 in; 287 remapped pgs

  data:
    pools:   3 pools, 8224 pgs
    objects: 485M objects, 1402 TB
    usage:   1481 TB used, 1788 TB / 3270 TB avail
    pgs:     3145/5089441557 objects degraded (0.000%)
             19379209/5089441557 objects misplaced (0.381%)
             7870 active+clean
             238  active+remapped+backfilling
             66   active+recovery_wait+degraded
             49   active+remapped+backfill_wait
             1    active+clean+snaptrim

  io:
    client:   101 MB/s wr, 0 op/s rd, 28 op/s wr
    recovery: 2806 MB/s, 975 objects/s
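
(For reference, the commands involved here are simply:

    ceph osd unset nodown
    ceph -s

the latter being where the status snippet above comes from.)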

again, many thanks,

Jake

On 31/05/18 21:52, Gregory Farnum wrote:
> On Thu, May 31, 2018 at 5:07 AM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
> 
>     Dear All,
> 
>     I recently upgraded our Ceph cluster from 12.2.4 to 12.2.5
>     & simultaneously upgraded the OS from Scientific Linux 7.4 to 7.5
> 
>     After reboot, 0.7% objects were misplaced and many pgs degraded.
> 
>     The cluster had no client connections, so I speeded up recovery with:
> 
>     ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
> 
>     The cluster then rebalanced at >6000 MB/s, but the number of misplaced
>     objects started shooting up...
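> 
>     (For completeness: osd_max_backfills defaults to 1 on Luminous, so
>     the change can be checked and later reverted with something like:
> 
>     ceph daemon osd.0 config get osd_max_backfills
>     ceph tell 'osd.*' injectargs '--osd-max-backfills 1'
> 
>     the "osd.0" above being an arbitrary example daemon, queried via its
>     local admin socket.)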
> 
> 
> Clearly something happened here. I'd probably try to understand that first.
> (Perhaps your host names changed and it swapped the CRUSH mappings?)
>  
> 
>      
> 
> 
>     In case something very nasty was going on, I set osd nodown, and
>     rebooted the cluster.
> 
> 
> This is probably not great. If you set nodown you're limiting the
> ability of the cluster to heal itself. Without understanding *why* it's
> trying to heal to begin with, you are in bad shape. Plus you may have
> OSD daemons dead and missing PGs that you just don't know about, because
> there's nobody around to report that they're dead. (Though you *may* be
> okay since the manager should notice if PG states aren't being reported
> and mark them stale.)
>  
> 
> 
>     21st May, post-reboot health status:
> 
>        pgs:     10003755/5184299696 objects degraded (0.193%)
>                 282514666/5184299696 objects misplaced (5.449%)
>      recovery: 1901 MB/s, 657 objects/s
> 
>     The cluster continued to mend, slowly this time (default
>     osd-max-backfills)
> 
>     28th May
>     nodown flag(s) set;
>     24820486/5352446983 objects misplaced (0.464%)
>     Degraded data redundancy: 816609/5352446983 objects degraded (0.015%),
>     179 pgs degraded, 6 pgs undersized
> 
>     30th May
>     nodown flag(s) set;
>     3571105/5667392354 objects misplaced (0.063%);
>     Degraded data redundancy: 40/5667392354 objects degraded (0.000%)
>     1 pg degraded
> 
>     All good, so I thought, but this morning (31st May):
> 
>     nodown flag(s) set;
>     41264874/5190843723 objects misplaced (0.795%)
>     Degraded data redundancy: 11795/5190843723 objects degraded (0.000%),
>     226 pgs degraded
> 
>     Of course I'm perplexed as to what might have caused this...
> 
>     Looking at /var/log/ceph.log-20180531.gz
> 
>     there is a sudden jump in objects misplaced at 22:55:28
> 
> 
>     2018-05-30 22:55:18.154529 mon.ceph2 mon.0 10.1.0.80:6789/0 71418 :
>     cluster [WRN] Health check update: 2666818/5085379079 objects
>     misplaced (0.052%) (OBJECT_MISPLACED)
>     2018-05-30 22:55:20.096386 mon.ceph2 mon.0 10.1.0.80:6789/0 72319 :
>     cluster [WRN] Health check failed: Reduced data availability: 34 pgs
>     peering (PG_AVAILABILITY)
>     2018-05-30 22:55:22.197206 mon.ceph2 mon.0 10.1.0.80:6789/0 72333 :
>     cluster [WRN] Health check failed: Degraded data redundancy:
>     1123/5079163159 objects degraded (0.000%), 21 pgs degraded
>     (PG_DEGRADED)
>     2018-05-30 22:55:23.155873 mon.ceph2 mon.0 10.1.0.80:6789/0 72335 :
>     cluster [WRN] Health check update: 2666363/5079163159 objects
>     misplaced (0.052%) (OBJECT_MISPLACED)
>     2018-05-30 22:55:25.450185 mon.ceph2 mon.0 10.1.0.80:6789/0 72336 :
>     cluster [WRN] Health check update: Reduced data availability: 2 pgs
>     inactive, 38 pgs peering (PG_AVAILABILITY)
>     2018-05-30 22:55:27.521142 mon.ceph2 mon.0 10.1.0.80:6789/0 72337 :
>     cluster [WRN] Health check update: Degraded data redundancy:
>     13808/5085377819 objects degraded (0.000%), 270 pgs degraded
>     (PG_DEGRADED)
>     2018-05-30 22:55:27.521181 mon.ceph2 mon.0 10.1.0.80:6789/0 72338 :
>     cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data
>     availability: 2 pgs inactive, 38 pgs peering)
>     2018-05-30 22:55:28.157397 mon.ceph2 mon.0 10.1.0.80:6789/0 72339 :
>     cluster [WRN] Health check update: 54749389/5085377819 objects
>     misplaced (1.077%) (OBJECT_MISPLACED)
>     2018-05-30 22:55:33.158644 mon.ceph2 mon.0 10.1.0.80:6789/0 72340 :
>     cluster [WRN] Health check update: 54748082/5085377079 objects
>     misplaced (1.077%) (OBJECT_MISPLACED)
>     2018-05-30 22:55:33.158698 mon.ceph2 mon.0 10.1.0.80:6789/0 72341 :
>     cluster [WRN] Health check update: Degraded data redundancy:
>     13600/5085377079 objects degraded (0.000%), 265 pgs degraded
>     (PG_DEGRADED)
> 
>     ceph-mgr.ceph1.log-20180531.gz has 455 identical entries logged in two
>     seconds (not seen elsewhere in the mgr log)
> 
>     2018-05-30 22:55:18.839956 7f9cd8590700  1 mgr[restful] Unknown
>     request ''
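> 
>     (A count like that is easy to reproduce with e.g.
> 
>     zgrep -c "Unknown request" ceph-mgr.ceph1.log-20180531.gz
> 
>     if anyone wants to check their own mgr logs.)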
> 
>     ceph-mon.ceph1.log-20180531.gz entries at this time are as follows:
> 
>     2018-05-30 22:55:17.487698 7f3fef608700  4 rocksdb: EVENT_LOG_v1
>     {"time_micros": 1527717317487691, "job": 6580, "event":
>     "table_file_deletion", "file_number": 801008}
> 
>     (snip - 12 similar lines)
> 
>     2018-05-30 22:55:17.637385 7f3fef608700  4 rocksdb: EVENT_LOG_v1
>     {"time_micros": 1527717317637379, "job": 6580, "event":
>     "table_file_deletion", "file_number": 800994}
> 
>     2018-05-30 22:55:18.817637 7f3ff5614700  1 mon.ceph1@1(peon).osd e161984
>     e161984: 454 total, 453 up, 453 in
>     2018-05-30 22:55:19.121033 7f3ff7618700  1 'Monitor::cpu_tp thread
>     0x7f3ff7618700' had timed out after 0
>     2018-05-30 22:55:19.158228 7f3ff5e15700  1 heartbeat_map reset_timeout
>     'Monitor::cpu_tp thread 0x7f3ff5e15700' had timed out after 0
>     2018-05-30 22:55:20.168560 7f3ff5614700  1 mon.ceph1@1(peon).osd e161985
>     e161985: 454 total, 453 up, 453 in
> 
>     Other info?
>     # ceph balancer status
>     {
>         "active": true,
>         "plans": [],
>         "mode": "crush-compat"
>     }
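> 
>     (If the active balancer turns out to be relevant, my understanding is
>     that it can be paused while investigating with:
> 
>     ceph balancer off
> 
>     and re-enabled afterwards with "ceph balancer on" - mentioning this
>     only as a possibility, not a diagnosis.)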
> 
>     One dead osd, down and empty, waiting to be replaced - presumably this
>     is not complicating matters?
>     # ceph osd df | grep 423
>     423   hdd 7.27730        0     0      0     0     0    0   0
> 
> 
> 
>     Many thanks for any advice,
> 
>     Jake
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users@xxxxxxxxxxxxxx
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



