Just one MON.
On Wed, Jun 28, 2017 at 8:05 PM, Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:
On Wed, Jun 28, 2017 at 10:18 PM, Mazzystr <mazzystr@xxxxxxxxx> wrote:
> The corruption is back in mons logs...
>
> 2017-06-28 08:16:53.078495 7f1a0b9da700 1 leveldb: Compaction error:
> Corruption: bad entry in block
> 2017-06-28 08:16:53.078499 7f1a0b9da700 1 leveldb: Waiting after background
> compaction error: Corruption: bad entry in block
Is this just one MON, or is it in the logs of all of your MONs?
>
>
> On Tue, Jun 27, 2017 at 10:42 PM, Mazzystr <mazzystr@xxxxxxxxx> wrote:
>>
>> 22:16 ccallegar: good grief... talk about a handful of sand in your eye!
>> I've been chasing down a "leveldb: Compaction error: Corruption: bad entry
>> in block" in the mon logs...
>> 22:17 ccallegar: I ran a python leveldb.repair() and restarted the OSDs and
>> mons, and my cluster crashed and burned
>> 22:18 ccallegar: a couple of files ended up in the leveldb lost dirs. The
>> path is different depending on whether it's a mon or an OSD
>> 22:19 ccallegar: for the mons, the logs showed a MANIFEST file missing. I
>> moved the file that landed in lost back to its normal position, chown'd it
>> ceph:ceph, restarted the mons, and they came back online!
>> 22:21 ccallegar: the OSD logs showed an .sst file missing. It looks like
>> leveldb.repair() does the needful but names the new file .ldb. I renamed
>> the file, chown'd it ceph:ceph, restarted the OSDs, and they came back online!
>>
>> The leveldb corruption log entries have gone away and my cluster is
>> recovering its way back to happiness.
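>>
>> Roughly, in Python 3, the rename/chown part of that looks something like
>> the sketch below (a sketch only: the store path and MANIFEST name are just
>> examples, so adjust for your own cluster):
>>
>> import os, shutil
>>
>> # mon store; for a filestore OSD the leveldb lives under
>> # /var/lib/ceph/osd/ceph-<id>/current/omap
>> store = '/var/lib/ceph/mon/ceph-mon01/store.db'
>>
>> # 1) If the repair parked a MANIFEST under lost/, move it back up a level.
>> #    (The filename here is made up; use whatever landed in lost/.)
>> # os.rename(os.path.join(store, 'lost', 'MANIFEST-000123'),
>> #           os.path.join(store, 'MANIFEST-000123'))
>>
>> # 2) Rename any .ldb tables the repair produced back to .sst so leveldb
>> #    picks them up, and chown everything to ceph:ceph before restarting.
>> for name in os.listdir(store):
>>     path = os.path.join(store, name)
>>     if name.endswith('.ldb'):
>>         new_path = path[:-len('.ldb')] + '.sst'
>>         os.rename(path, new_path)
>>         path = new_path
>>     shutil.chown(path, user='ceph', group='ceph')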
>>
>> Hopefully this helps someone else out
>>
>> Thanks,
>> /Chris
>>
>>
>> On Tue, Jun 27, 2017 at 6:39 PM, Mazzystr <mazzystr@xxxxxxxxx> wrote:
>>>
>>> Hi Ceph Users,
>>> I've been chasing down some leveldb corruption messages in my mon logs.
>>> I ran a python leveldb repair on the mon and OSD leveldbs. The job caused
>>> some files to disappear and a log file to appear in the lost directory.
>>> The mons and OSDs now refuse to boot.
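>>>
>>> For reference, the repair call was something along these lines, run per
>>> store with the daemons stopped (this sketch assumes the py-leveldb
>>> bindings and the stock store paths; plyvel's repair_db() would be the
>>> equivalent):
>>>
>>> import leveldb
>>> # mon store
>>> leveldb.RepairDB('/var/lib/ceph/mon/ceph-mon01/store.db')
>>> # OSD omap dir (filestore)
>>> leveldb.RepairDB('/var/lib/ceph/osd/ceph-0/current/omap')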
>>>
>>> Ceph version is Kraken (11.2).
>>>
>>> There's not a whole lot of info on the internet regarding this. Anyone
>>> have any ideas on how to recover from this mess?
>>>
>>> Thanks,
>>> /Chris C
>>
>>
>
>
--
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com