Re: log messages about inconsistent data

A follow-up question to this: After repairing those PGs, my cluster
seems to have come to rest in this state.

2011-01-25 00:26:58.447007    pg v130979: 270 pgs: 8 active, 262
active+clean; 822 GB data, 1762 GB used, 1265 GB / 3036 GB avail;
25/556114 degraded (0.004%)

I don't know whether it's in a functional state, since I'm having MDS
issues [1] and can't actually mount it and poke around. Still, should I
be worried that those 8 PGs aren't being marked 'clean'?

[1] http://tracker.newdream.net/issues/733
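
(A rough way to list just those PGs, in case it's useful to anyone
hitting the same thing; untested sketch, and the column layout of
'ceph pg dump' may vary between versions, so the greps may need
adjusting:)

  # PG entries that report 'active' but not 'clean'
  $ ceph pg dump | grep active | grep -v clean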

On Mon, Jan 24, 2011 at 7:45 PM, Ravi Pinjala <pstatic@xxxxxxxxx> wrote:
> That seems to be working. Thanks!
>
> --Ravi
>
> On Mon, Jan 24, 2011 at 10:40 AM, Samuel Just <samuelj@xxxxxxxxxxxxxxx> wrote:
>> ceph pg repair <pgid> should cause the osd to repair the
>> inconsistency in most cases. You can get the pgid by grepping
>> ceph pg dump for the inconsistent pg.
>> -Sam
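
(For the archives: a rough one-liner version of the above, untested,
and assuming the pgid is the first column of 'ceph pg dump' output:)

  # repair every PG whose state is reported as inconsistent
  $ ceph pg dump | grep inconsistent | awk '{print $1}' | \
        while read pgid; do ceph pg repair "$pgid"; done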
>>
>> On 01/23/2011 11:18 PM, Ravi Pinjala wrote:
>>>
>>> Do I need to be worried about this?
>>>
>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.316993 osd1
>>> 192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing
>>> 10000017737.00000000/head
>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.317429 osd1
>>> 192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136
>>> objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
>>> 2011-01-23 23:12:08.230768    pg v129643: 270 pgs: 262 active+clean, 8
>>> active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036
>>> GB avail
>>>
>>> I would expect ceph to fix the inconsistent PGs at this point, but it
>>> just continues background scrubbing. Does inconsistent data get
>>> cleaned up automatically from other replicas, or is there something
>>> that I need to fix manually here?
>>>
>>> --Ravi
>>
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html

