Erasure-coded PG always "peering"

Hi ceph-devel,

I have been playing with the new erasure code functionality and I have
noticed that an erasure-coded PG remains in the "peering" state forever.
Is that normal?

I have a setup with four servers, each with two OSDs (eight OSDs in
total). I then define an extra CRUSH rule to pick four different OSDs
across hosts:

rule reedsol_ruleset {
    ruleset 1
    type erasure
    min_size 4
    max_size 4
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
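
For completeness, I added the rule by decompiling, editing, and
re-injecting the CRUSH map, along these lines (a sketch of the usual
procedure; the file names are just placeholders):

sudo ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# append the reedsol_ruleset rule above to crushmap.txt, then:
crushtool -c crushmap.txt -o crushmap.new
sudo ceph osd setcrushmap -i crushmap.new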

And I create a simple pool with:
sudo ceph osd pool create cauchy 1 1 erasure \
    erasure-code-plugin=jerasure \
    erasure-code-k=2 \
    erasure-code-m=2 \
    erasure-code-technique=cauchy_good \
    crush_ruleset=reedsol_ruleset
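
To confirm the pool actually picked up the ruleset, the pool entry can
be checked with something like this (the grep is just illustrative, and
the exact output format may differ between versions):
sudo ceph osd dump | grep cauchy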

After that, the PG status jumps from "creating" to "peering" and stays
there forever. With "sudo ceph pg dump_stuck" I can see:

pg_stat:          5.0
objects:          0
mip:              0
degr:             0
unf:              0
bytes:            0
log:              0
disklog:          0
state:            peering
state_stamp:      2014-03-12 10:30:36.717544
v:                0'0
reported:         58:3
up:               [0,6,2,5]
up_primary:       0
acting:           [0,6,2,5]
acting_primary:   0
last_scrub:       0'0
scrub_stamp:      2014-03-12 10:30:36.715589
last_deep_scrub:  0'0
deep_scrub_stamp: 2014-03-12 10:30:36.715589
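
For more detail I also intend to query the PG directly (assuming "ceph
pg query" behaves the same for erasure-coded pools):
sudo ceph pg 5.0 query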

I guess that this is not normal and I'm probably doing something wrong. Any
ideas?

Thanks,
Lluís