Re: unknown PGs after adding hosts in different subtree

Hi again,

I'm still wondering if I'm misunderstanding some of the Ceph concepts. Let's assume the choose_tries value is too low and Ceph can't find enough OSDs for the remapping. I would expect some PG chunks to be stuck in a remapped or unknown state, but why would that affect the otherwise healthy cluster in such a way? Even if Ceph doesn't know where to put some of the chunks, I wouldn't expect inactive PGs and a service interruption.
What am I missing here?
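
For what it's worth, here is a minimal sketch of how one could check whether CRUSH is actually failing to map those PGs (the rule id 1 and --num-rep 10 below are just placeholders for an EC 8+2 rule; adjust to the actual cluster):

    # Show which PGs are inactive/unknown right now
    ceph health detail
    ceph pg dump_stuck inactive

    # Export the CRUSH map and test the rule offline; any
    # "bad mapping" lines mean CRUSH gave up before finding
    # enough OSDs, i.e. choose_tries is likely too low
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    crushtool -i crushmap.bin --test --rule 1 --num-rep 10 --show-bad-mappings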

Thanks,
Eugen

Quoting Eugen Block <eblock@xxxxxx>:

Thanks, Konstantin.
It's been a while since I was last bitten by choose_tries being too low... Unfortunately, I won't be able to verify that... but I'll definitely keep it in mind, or at least I'll try to. :-D

Thanks!

Quoting Konstantin Shalygin <k0ste@xxxxxxxx>:

Hi Eugen

On 21 May 2024, at 15:26, Eugen Block <eblock@xxxxxx> wrote:

step set_choose_tries 100

I think you should try increasing set_choose_tries to 200.
Last year we had a Pacific EC 8+2 deployment across 10 racks, and even with 50 hosts the value of 100 did not work for us.
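
One way to apply that change (just a sketch; the file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in the EC rule, change:
    #   step set_choose_tries 100
    # to:
    #   step set_choose_tries 200
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin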


k


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


