Re: how to fix active+remapped pg

Ugis,

Can you provide the results for:

ceph osd tree
ceph osd crush dump
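If it is easier, you can redirect each one to a file and paste or attach
the results (run from a node with an admin key; the file names below are
only examples):

# ceph osd tree > osd-tree.txt
# ceph osd crush dump > crush-dump.json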

On Thu, Nov 21, 2013 at 7:59 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Thu, Nov 21, 2013 at 7:52 AM, Ugis <ugis22@xxxxxxxxx> wrote:
>> Thanks, I reread that section in the docs and found the tunables
>> profile - nice to have, I hadn't noticed it before (the ceph docs
>> develop so fast that you need RSS to follow all the changes :) )
>>
>> Still, the problem persists in a different way.
>> I set the profile to "optimal" and rebalancing started, but I had an
>> "rbd delete" running in the background, and in the end the cluster
>> ended up with a negative degradation %.
>> I think I have hit bug http://tracker.ceph.com/issues/3720 which is
>> still open.
>> I restarted the osds one by one and the negative degradation disappeared.
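>> For reference, the profile was switched with the standard documented
>> command, i.e.:
>> # ceph osd crush tunables optimal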
>>
>> Afterwards I added an extra ~900GB of data; degradation grew in the
>> process to 0.071%.
>> This looks more like http://tracker.ceph.com/issues/3747 which is
>> closed, but seems to still happen.
>> I did "ceph osd out X; sleep 40; ceph osd in X" for all osds, and the
>> degradation % went away.
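>> That was effectively a loop over every osd id, something like:
>> # for i in $(ceph osd ls); do ceph osd out $i; sleep 40; ceph osd in $i; done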
>>
>> In the end I still have "55 active+remapped" pgs and no degradation %.
>> "pgmap v1853405: 2662 pgs: 2607 active+clean, 55 active+remapped; 5361
>> GB data, 10743 GB used, 10852 GB / 21595 GB avail; 25230KB/s rd,
>> 203op/s"
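>> The remapped ones can be listed with something like
>> "ceph pg dump | grep remapped" to get their pg ids.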
>>
>> I queried some of the remapped pgs, but do not see why they do not
>> rebalance (the tunables are optimal now, I checked).
>>
>> Where should I look for the reason they are not rebalancing? Is there
>> something to look for in the osd logs if the debug level is increased?
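>> I can raise the debug level on a single osd at runtime with something
>> like "ceph tell osd.9 injectargs '--debug-osd 10'" if that would help.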
>>
>> one of those:
>> # ceph pg 4.5e query
>> { "state": "active+remapped",
>>   "epoch": 9165,
>>   "up": [
>>         9],
>>   "acting": [
>>         9,
>>         5],
>
> For some reason CRUSH is still failing to map all the PGs to two hosts
> (notice how the "up" set is only one OSD, so it's adding another one
> in "acting") — what's your CRUSH map look like?
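> One easy way to share it is to dump and decompile it, e.g.:
> # ceph osd getcrushmap -o crushmap.bin
> # crushtool -d crushmap.bin -o crushmap.txt
> (the file names are arbitrary) and then paste crushmap.txt.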
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com