Re: pages stuck unclean (but remapped)

Hi,

I would probably start by figuring out exactly which PGs are stuck unclean.

You can do 'ceph pg dump | grep unclean' to get that info - then, if your theory holds, you should be able to identify the disk(s) in question.
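
For example, something along these lines should list the stuck PGs and show which OSDs each one maps to (the PG id 4.3d7 below is just a placeholder for whatever shows up in your dump):

    ceph pg dump_stuck unclean        # or: ceph pg dump | grep unclean
    ceph pg map 4.3d7                 # show the up/acting OSD set for one stuck PG
    ceph osd tree                     # look up the weights of those OSDs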

I cannot see any backfill_too_full messages in your status output, so I am curious what the cause could be.

You can also always adjust the weights manually if needed with the reweight command ( http://ceph.com/docs/master/rados/operations/control/#osd-subsystem ).
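
For instance (the osd id 12 and the 0.8 value below are just placeholders), something like this would lower the override weight of a single over-full OSD and let you check the result:

    ceph osd reweight 12 0.8          # temporary override weight, range 0.0-1.0
    ceph osd tree | grep osd.12       # confirm the new value in the REWEIGHT column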

Cheers,
Martin


On Mon, Feb 24, 2014 at 2:09 AM, Gautam Saxena <gsaxena@xxxxxxxxxxx> wrote:
I have 19 PGs that are stuck unclean (see the result of ceph -s below). This occurred after I executed "ceph osd reweight-by-utilization 108" to resolve problems with "backfill_too_full" messages, which I believe appeared because my OSDs vary significantly in size (from a low of 600 GB to a high of 3 TB). How can I get Ceph to move these PGs out of stuck-unclean? (And why is this occurring in the first place?) My best guess at a fix (though I don't know why it would work) is that I need to run:

ceph osd crush tunables optimal

However, my kernel version (on a fully up-to-date CentOS 6.5) is 2.6.32, which is well below the minimum required version of 3.6 stated in the documentation (http://ceph.com/docs/master/rados/operations/crush-map/). So if I must run "ceph osd crush tunables optimal" to fix this problem, I presume I must upgrade my kernel first, right? Any thoughts, or am I chasing the wrong solution? I want to avoid a kernel upgrade unless it's needed.
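
(For what it's worth, one way to see which tunables the cluster is currently using, assuming crushtool is installed on the node, is to decompile the CRUSH map; any non-default tunables show up as "tunable ..." lines near the top of the text output:)

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    head /tmp/crushmap.txt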

=====================

[root@ia2 ceph4]# ceph -s
    cluster 14f78538-6085-43f9-ac80-e886ca4de119
     health HEALTH_WARN 19 pgs backfilling; 19 pgs stuck unclean; recovery 42959/5511127 objects degraded (0.779%)
     monmap e9: 3 mons at {ia1=192.168.1.11:6789/0,ia2=192.168.1.12:6789/0,ia3=192.168.1.13:6789/0}, election epoch 496, quorum 0,1,2 ia1,ia2,ia3
     osdmap e7931: 23 osds: 23 up, 23 in
      pgmap v1904820: 1500 pgs, 1 pools, 10531 GB data, 2670 kobjects
            18708 GB used, 26758 GB / 45467 GB avail
            42959/5511127 objects degraded (0.779%)
                1481 active+clean
                  19 active+remapped+backfilling
  client io 1457 B/s wr, 0 op/s

[root@ia2 ceph4]# ceph -v
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)

[root@ia2 ceph4]# uname -r
2.6.32-431.3.1.el6.x86_64

====

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


