Re: List pg with heavily degraded objects

On 10/09/2021 15:37, Janne Johansson wrote:
> On Fri, 10 Sep 2021 at 14:27, George Shuklin <george.shuklin@xxxxxxxxx> wrote:
>> On 10/09/2021 15:19, Janne Johansson wrote:
>>>> Is there a way? The pg list is not very informative, as it does not
>>>> show how badly 'unreplicated' the data are.
>>> ceph pg dump should list all PGs and how many active OSDs they have,
>>> in a list like this:
>>> [12,34,78,56], [12,34,2134872348723,56]
>>
>> It's not about being undersized.
>> Imagine a small cluster with three OSDs. Two OSDs have died, and then
>> two more empty ones were added to the cluster.
>> Normally you'll see that each PG has found a peer and there are no
>> undersized PGs. But the data actually haven't been replicated yet; the
>> replication is still in progress.
> My view is that they actually would be "undersized" until backfill is
> done to the PGs on the new empty disks you just added.
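
(For anyone following along: the "[12,34,78,56]"-style list Janne refers to is the per-PG acting set. A minimal sketch of pulling it out of "ceph pg dump --format json", assuming the usual pg_stats layout (field names can differ between Ceph releases), would be something like:)

#!/usr/bin/env python3
# Sketch: print every PG with its acting set (the list of OSDs serving it).
# Assumes the pg_stats layout of "ceph pg dump --format json"; field names
# may differ slightly between Ceph releases.
import json
import subprocess

raw = subprocess.check_output(["ceph", "pg", "dump", "--format", "json"])
data = json.loads(raw)
# Newer releases nest the stats under "pg_map"; older ones keep them top-level.
for pg in data.get("pg_map", data).get("pg_stats", []):
    print(pg["pgid"], pg["acting"])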

I've just created a counter-example to Janne's claim.

Each server has 2 OSDs, using the default replicated_rule.

There are 4 servers, and the pool size is 3.

* Shut down srv1 and wait for recovery; shut down srv2 and wait for recovery.

* Write a large amount of data (enough to see replication traffic); all of it ends up on srv3+srv4 in a degraded state.

* Shut down srv3 and start srv1 and srv2. srv4 is now the only server with all the data available.

I can see no 'undersized' PGs, but the data ARE in a single copy: https://gist.github.com/amarao/fbc8ef3538f66a9f2c264f8555f5c29a
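
So looking only at the acting sets, or at the 'undersized' flag, is not enough here; the per-PG degraded object counters are what actually expose it. Building on the same JSON dump as above, here is a hedged sketch that lists PGs still carrying degraded or misplaced objects (again assuming the usual stat_sum field names, which may vary between releases):

#!/usr/bin/env python3
# Sketch: list PGs that still carry degraded or misplaced objects, even when
# they are not flagged 'undersized'. Assumes the pg_stats/stat_sum layout of
# "ceph pg dump --format json"; field names may vary between Ceph releases.
import json
import subprocess

def pg_stats():
    raw = subprocess.check_output(["ceph", "pg", "dump", "--format", "json"])
    data = json.loads(raw)
    # Newer releases nest the stats under "pg_map"; older ones keep them
    # at the top level.
    return data.get("pg_map", data).get("pg_stats", [])

def main():
    # Sort so the most heavily degraded PGs come first.
    pgs = sorted(pg_stats(),
                 key=lambda p: p["stat_sum"].get("num_objects_degraded", 0),
                 reverse=True)
    for pg in pgs:
        s = pg["stat_sum"]
        degraded = s.get("num_objects_degraded", 0)
        misplaced = s.get("num_objects_misplaced", 0)
        if degraded or misplaced:
            print("%s state=%s acting=%s objects=%d degraded=%d misplaced=%d" % (
                pg["pgid"], pg["state"], pg["acting"],
                s.get("num_objects", 0), degraded, misplaced))

if __name__ == "__main__":
    main()

On recent releases "ceph pg ls degraded" gives a similar view straight from the CLI; the JSON route just makes it easier to sort by how heavily each PG is actually degraded.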


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



