Re: radosgw stopped working

Hi Alwin,

Not to move too far away from the topic, but I'm wondering whether there are any other recommendations regarding HDDs and backfills.
We are currently doing in-place node replacements and are quite aggressive with our cluster when it comes to backfilling HDDs. So far we have seen target disks handle more than 20 parallel backfills with no issues.
I'm just wondering if that's pushing the limits too far.
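
For context, this is roughly how we keep an eye on it on our side (the grep pattern is just an illustration; backfilling PGs show up in the state column):

   ceph config get osd osd_max_backfills              # current per-OSD limit
   ceph -s                                            # overall recovery/backfill progress
   ceph pg dump pgs 2>/dev/null | grep -c backfilling # count of PGs currently backfilling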

Rok, please let us know how things progress once the backfilling calms down. Very interested to hear whether you eventually have any luck with the plankton.


Best,
Laimis J.


> On 22 Dec 2024, at 21:46, Alwin Antreich <alwin.antreich@xxxxxxxx> wrote:
> 
> Hi Rok,
> 
> On Sun, 22 Dec 2024 at 20:19, Rok Jaklič <rjaklic@xxxxxxxxx> wrote:
> 
>> First I tried osd reweight, waited a few hours, then osd crush reweight,
>> and then pg-upmap from Laimis. The crush reweight seemed to be the most
>> effective, but not for "all" OSDs I tried.
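>>
>> For the record, the two reweights take different value ranges; roughly,
>> with osd.12 as a placeholder:
>>
>>    ceph osd reweight 12 0.85           # override weight, 0.0 to 1.0
>>    ceph osd crush reweight osd.12 3.6  # CRUSH weight, typically the disk size in TiB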
>> 
>> Uh, I've probably set ceph config set osd osd_max_backfills to a high
>> number in the past. Is it better to reduce it to 1 in steps, since so much
>> backfilling is already going on?
>> 
> Every time a backfill finishes, a new one will be placed in the queue, so
> the number of parallel backfills won't decrease as long as you don't lower
> the setting. You can adjust it and see whether it improves the backfill
> process (wait an hour or two).
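>
> A minimal sketch of stepping it down (the values are only examples; note
> that on recent releases with the mClock scheduler you may also need to set
> osd_mclock_override_recovery_settings to true before this takes effect):
>
>    ceph config set osd osd_max_backfills 8   # halve it, watch for an hour or two
>    ceph config set osd osd_max_backfills 4   # step down again if recovery keeps up
>    ceph config set osd osd_max_backfills 1   # final target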
> 
> 
>> 
>> Output of commands in attachment.
>> 
> The number of PGs for the rgw data pool seems low compared to the number
> of OSDs, though whether this is really an issue depends on the EC profile
> and the size of a shard (`ceph pg <id> query`). In general the number of
> PGs matters, because too few of them will make each one grow larger. A
> larger PG takes longer to backfill and more easily skews OSD utilization,
> since the algorithm places PGs pseudo-randomly without taking their size
> into account.
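>
> In case it helps, the relevant numbers can be pulled like this (pool name,
> profile name and PG id are placeholders):
>
>    ceph osd pool ls detail                       # pg_num and EC profile per pool
>    ceph osd erasure-code-profile get <profile>   # k and m of the profile
>    ceph pg <id> query                            # detailed stats for one PG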
> 
> Should you need to adjust the number of PGs, I'd wait until the
> backfilling to the HDDs has finished, as the change will create additional
> data movement.
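>
> When you do get to it, the change itself is one command, e.g. with a
> placeholder pool name (since Nautilus, pgp_num follows along automatically):
>
>    ceph osd pool set <pool> pg_num 256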
> 
> Cheers,
> Alwin
> croit GmbH, https://croit.io/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



