Re: erasure coded PG always "peering"

Glad to hear it's working for you now ;-)

There are important bug fixes daily: it is worth getting the latest Firefly from https://github.com/ceph/ceph/tree/firefly
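For example, a fresh checkout of that branch is just (illustrative; adjust the target directory to taste):

  git clone --branch firefly https://github.com/ceph/ceph.git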

If you run into a problem again, it would be great if you could preserve the environment in which it happens and post a bug report (even a terse one) at http://tracker.ceph.com/projects/ceph/issues/new
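If you are not sure what state to capture, a snapshot along these lines is usually a good start (the commands are only examples):

  ceph -s
  ceph health detail
  ceph pg dump_stuck inactive
  ceph osd crush dump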

Cheers

On 12/03/2014 21:41, Lluís Pàmies i Juárez wrote:
> Thanks Loic,
> 
> Noted the indep change.
> 
> The thing is that I deleted the pool and created it again, and now it
> works well (both for indep and firstn...). The cluster hasn't changed
> since I had the problem, nor have I restarted any of the
> daemons. I wonder if maybe the OSDs needed a "few hours" for
> peering...
> 
> I'm using a version I cloned from github a few days ago:
> ceph version 0.77-623-gd67a9ad (d67a9adae31da3d07a697b73878c368d2127f1da)
> 
> Best,
> Lluis
> 
> 
> On Wed, Mar 12, 2014 at 1:05 PM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
>> Hi
>>
>> On 12/03/2014 18:40, Lluís Pàmies i Juárez wrote:
>>> Hi ceph-devel,
>>>
>>> I have been playing with the new erasure code functionality and I have
>>> noticed that the erasure coded PG remains in "peering" state forever. Is
>>> that normal?
>>>
>>> I have a scenario with four servers, each with two OSDs (eight OSDs in
>>> total). I then define an extra crush rule to pick four different OSDs
>>> across hosts:
>>>
>>> rule reedsol_ruleset {
>>>     ruleset 1
>>>     type erasure
>>>     min_size 4
>>>     max_size 4
>>>     step take default
>>>     step chooseleaf firstn 0 type host
>>
>> You want to s/firstn/indep/ here. But I don't think it's the source of your problem.
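>>
>> With that change the body of the rule would read, for example:
>>
>>     step take default
>>     step chooseleaf indep 0 type host
>>     step emit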
>>
>> Which version are you running (hash of the last commit)?
>>
>> Cheers
>>
>>>     step emit
>>> }
>>>
>>> And create a simple pool by:
>>> sudo ceph osd pool create cauchy 1 1 erasure erasure-code-plugin=jerasure
>>> erasure-code-k=2 erasure-code-m=2 erasure-code-technique=cauchy_good
>>> crush_ruleset=reedsol_ruleset
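>>>
>>> As a sanity check (the pool name is just the one above), the ruleset the
>>> pool actually ended up with can be read from the osd dump:
>>>
>>> sudo ceph osd dump | grep cauchy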
>>>
>>> After that, the PG status jumps from "creating" to "peering" and stays
>>> there forever. With "sudo ceph pg dump_stuck" I can see:
>>>
>>> pg_stat objects mip degr unf bytes log disklog state state_stamp v reported
>>> up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub
>>> deep_scrub_stamp
>>> 5.0 0 0 0 0 0 0 0 peering 2014-03-12 10:30:36.717544 0'0 58:3 [0,6,2,5] 0
>>> [0,6,2,5] 0 0'0 2014-03-12 10:30:36.715589 0'0 2014-03-12 10:30:36.715589
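>>>
>>> For more detail on why it is stuck, that PG can also be queried
>>> directly, e.g.:
>>>
>>> sudo ceph pg 5.0 query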
>>>
>>> I guess that this is not normal and I'm probably doing something wrong. Any
>>> ideas?
>>>
>>> Thanks,
>>> Lluís
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
>>

-- 
Loïc Dachary, Artisan Logiciel Libre
