Re: RBD with PWL cache shows poor performance compared to cache device

On Wed, 5 Jul 2023 at 15:18, Mark Nelson <mark.nelson@xxxxxxxxx> wrote:
> I'm sort of amazed that it gave you symbols without the debuginfo
> packages installed.  I'll need to figure out a way to prevent that.
> Having said that, your new traces look more accurate to me.  The thing
> that sticks out to me is the (slight?) amount of contention on the PWL
> m_lock in dispatch_deferred_writes, update_root_scheduled_ops,
> append_ops, append_sync_point, etc.
>
> I don't know if the contention around the m_lock is enough to cause an
> increase in 99th-percentile tail latency from 1.4ms to 5.2ms, but it's
> the first thing that jumps out at me.  A large number of threads (each
> tp_pwl thread, the io_context_pool threads, the qemu thread, and the
> bstore_aio thread) all appear to have the potential to contend on that
> lock.  You could try dropping the number of tp_pwl threads from 4 to 1
> and see if that changes anything.

Will do. Any idea how to do that? I don't see an obvious rbd config option.
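From a quick skim of the librbd source, the tp_pwl thread count looks
hardcoded where the pool is constructed in
src/librbd/cache/pwl/AbstractWriteLog.cc. Paraphrasing from a skim, so
the exact arguments may differ between releases:

    // Constructor initializer in AbstractWriteLog (paraphrased):
    m_thread_pool(image_ctx.cct,
                  "librbd::cache::pwl::AbstractWriteLog::thread_pool",
                  "tp_pwl", 4, ""),  // 4 threads; "" = no config option
                                     // wired up to override the count

If I'm reading that right, the empty last argument means there's no
runtime config option for the pool size, so I'd have to change the 4 to
1 and rebuild to test this. Does that match your reading?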

Thanks for looking into this,
Matt
-- 
Matthew Booth
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


