Re: Increase number of objects in flight during recovery

On Thu, 3 Dec 2020 at 10:11, Frank Schilder <frans@xxxxxx> wrote:

> I have the opposite problem from the one discussed in "slow down keys/s
> in recovery": I need to increase the number of objects in flight during
> rebalance. All remapped PGs are already in state backfilling, but it
> looks like no more than 8 objects/sec are transferred per PG at a time.
> The pool sits on high-performance SSDs and could easily handle 100 or
> more object transfers per second simultaneously. Is there any way to
> increase the number of transfers/sec or simultaneous transfers?
> Increasing the options osd_max_backfills and osd_recovery_max_active
> has no effect.
> Background: The pool in question (con-fs2-meta2) is the default data pool
> of a ceph fs, which stores exclusively the kind of meta data that goes into
> this pool. Storage consumption is reported as 0, but the number of objects
> is huge:
>

I don't run cephfs, so this might not map 100%, but I think that pools in
which Ceph stores file/object metadata (radosgw pools in my case) show
completely "false" numbers while recovering. I believe this is because
there are tons of metadata entries stored on 0-sized objects. This means
recovery will look like it is doing one object per second or so, while in
fact it is moving hundreds of metadata entries for that one object; the
recovery stats just don't show this. It also made old ceph df and rados df
say "this pool is almost empty", yet dumping or moving the pool takes far
longer than it should for an almost-empty pool, and the pool dump gets
huge.
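You can spot this pattern in `rados df --format json` output: a pool whose
object count is huge but whose byte total is near zero is almost certainly
metadata-heavy. A minimal sketch below; the JSON is made-up sample data,
and the field names (`pools`, `name`, `size_bytes`, `num_objects`) should
be checked against the JSON your Ceph release actually emits:

```python
import json

# Made-up sample resembling `rados df --format json` output; verify the
# field names against your Ceph version before relying on them.
sample = json.loads("""
{
  "pools": [
    {"name": "con-fs2-data",  "size_bytes": 120000000000, "num_objects": 30000},
    {"name": "con-fs2-meta2", "size_bytes": 0,            "num_objects": 4200000}
  ]
}
""")

def metadata_heavy(pools, min_objects=1_000_000, max_avg_bytes=1024):
    """Flag pools with many objects but almost no byte payload --
    a sign the real data lives in per-object metadata, not in the
    objects themselves."""
    flagged = []
    for p in pools:
        avg = p["size_bytes"] / max(p["num_objects"], 1)
        if p["num_objects"] >= min_objects and avg <= max_avg_bytes:
            flagged.append(p["name"])
    return flagged

print(metadata_heavy(sample["pools"]))  # -> ['con-fs2-meta2']
```

A pool flagged this way will recover far more slowly, per reported object,
than the raw object count suggests.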

I would take a look at iostat output for those OSD drives and see whether
they are actually doing only 8 IOPS or a lot more.
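Concretely, that means summing r/s and w/s from an `iostat -x` device line
for the OSD drive. The sketch below parses a made-up sample line; the
column order assumed here (Device, r/s, w/s, ...) matches sysstat ~12.x,
and older iostat versions order the columns differently, so adjust the
indices for your system:

```python
# Made-up sample `iostat -x` device line (sysstat ~12.x column order:
# Device r/s w/s rkB/s wkB/s ...). Real output comes from `iostat -x 1`.
line = "sdb 812.0 640.0 10496.0 8320.0"

fields = line.split()
reads_per_s = float(fields[1])
writes_per_s = float(fields[2])
total_iops = reads_per_s + writes_per_s

# If this is in the hundreds or thousands while recovery reports only
# ~8 objects/sec, the drive is doing far more work per "object" than
# the recovery counters show.
print(f"{fields[0]}: {total_iops:.0f} IOPS")  # -> sdb: 1452 IOPS
```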

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
