Re: PG states question and improving peering times

On Mon, 22 Nov 2021 at 16:36, Stephen Smith6 <esmith@xxxxxxx> wrote:
>
> I believe this is a fairly straightforward question, but is it true that any PG not in "active+..." (peering, down, etc.) blocks writes to the entire pool?

I'm not sure that is strictly true, but take the example of a VM using a
40G rbd image as its hard drive: RBD splits the image into roughly
10000 4M objects, which are spread across your PGs, and if the rbd pool
has fewer than 10000 PGs, those objects will land on essentially every
PG. So if one or more PGs are inactive in some way, it is only a matter
of time before a read or write from this VM hits one of the inactive
PGs and stops there.
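
To make that concrete, here is a rough back-of-the-envelope sketch in
Python. The 4M object size is the RBD default, but the pg_num of 128 is
just an assumed example value, not anything from a real cluster:

    # Chance that a 40G RBD image has NO data on one particular PG,
    # assuming 4 MiB objects and a roughly uniform hash distribution.
    image_size  = 40 * 2**30          # 40 GiB
    object_size = 4 * 2**20           # RBD default object size: 4 MiB
    pg_count    = 128                 # assumed pg_num for the rbd pool

    num_objects = image_size // object_size   # 10240 objects
    p_miss = ((pg_count - 1) / pg_count) ** num_objects
    print(num_objects)                # 10240
    print(f"{p_miss:.1e}")            # ~1e-35

With 10240 objects and only 128 PGs, the chance of "missing" any given
PG is effectively zero, so an inactive PG is practically guaranteed to
hold part of the image and to stall the VM's I/O sooner or later.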

Since data is spread out this way regardless of whether you use RGW,
RBD, CephFS and so on, the effect can easily feel like "one bad PG
stops the pool": the way the cluster distributes objects means your
clients will get stuck on the inactive PGs sooner or later.

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



