Re: fast_read in EC pools

On Mon, Feb 26, 2018 at 2:59 PM Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:

> >> Does this match expectations?
> >
> > Can you get the output of eg "ceph pg 2.7cd query"? Want to make sure the backfilling versus acting sets and things are correct.
>
> You'll find attached:
> query_allwell)  Output of "ceph pg 2.7cd query" when all OSDs are up and everything is healthy.
> query_one_host_out) Output of "ceph pg 2.7cd query" when OSDs 164-195 (one host) are down and out.

Yep, that's what we want to see. So when everything's well, we have OSDs 91, 63, 33, 163, 192, 103. That corresponds to chassis 3, 2, 1, 5, 6, 4.

When marking out a host, we have OSDs 91, 63, 33, 163, 123, UNMAPPED. That corresponds to chassis 3, 2, 1, 5, 4, UNMAPPED.
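
For anyone following along without the attachments: those numbers are just the "up" and "acting" arrays near the top of the "ceph pg 2.7cd query" JSON. Roughly this shape (illustrative excerpt, not a paste from Oliver's files):

    "up": [ 91, 63, 33, 163, 192, 103 ],
    "acting": [ 91, 63, 33, 163, 192, 103 ],

In the one-host-out query the UNMAPPED slot shows up as 2147483647, which is CRUSH's "NONE" placeholder. To map an OSD id back to its chassis, "ceph osd find <id>" or plain "ceph osd tree" will show where it sits in the CRUSH hierarchy.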

So what's happened is that with the new map, when choosing the home for shard 4, CRUSH selected chassis 4 instead of chassis 6 (which is gone). And now shard 5 can't map properly, since there's no unused chassis left to put it on. But of course we still have shard 5's data on chassis 4 (on OSD 103), so chassis 4 is going to end up properly owning shard 4, but also just carrying that shard 5 around as a remapped location.
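
The reason the sixth shard has nowhere to go is the EC rule's failure domain: it does an independent chooseleaf over chassis, so every shard needs its own chassis and only five are left. I don't have Oliver's actual rule in this thread, but the auto-generated rule for a 6-shard profile with a chassis failure domain looks roughly like this (name and id made up):

    rule cephfs_data_ec {
            id 1
            type erasure
            min_size 3
            max_size 6
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step chooseleaf indep 0 type chassis
            step emit
    }

With "indep", CRUSH remaps a failed position on its own and leaves a slot as NONE if it can't find another distinct chassis, rather than reshuffling all the shards; that's why shard 4's retry grabbed chassis 4 and shard 5 ended up UNMAPPED.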

So this is as we expect. Whew.
-Greg
