Re: RBD client wallclock profile during 4k random writes

On 05/10/2017 05:31 PM, Jason Dillaman wrote:
> On Wed, May 10, 2017 at 6:10 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
>> 1) 7 - tp_librbd, line 82
>>
>> Lots of stuff going on here, but the big thing is all the time spent in
>> librbd::ImageCtx::write_to_cache. 70.2% of the total time in this thread
>> is spent in ObjectCacher::writex with lots of nested stuff, but if you
>> look all the way down on line 1293, another 11.8% of the time is spent
>> in Locker() and 1.5% of the time is spent in ~Locker().
>
> Yes -- the ObjectCacher is long overdue for a rewrite since it's
> single-threaded. It also looks like you were essentially performing
> writethrough. I'd imagine you would be better off disabling the rbd
> cache for high-performance random-write workloads, since you are going
> to get zero benefit from the cache with that workload -- at least
> that's what I usually recommend.


Often I do turn rbd cache off for bluestore testing. This was an older conf file where I inadvertently hadn't disabled it. Still, it's an unfortunate choice that has to be made, potentially by someone other than the user running the workload. :/
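For anyone following along: disabling the client-side cache as suggested above is a ceph.conf change on the client. A minimal sketch (the `[client]` section name is standard; whether you set this globally or per-client depends on your deployment):

```ini
# ceph.conf on the RBD client
[client]
# Disable the librbd ObjectCacher for random-write workloads
# where the cache adds lock contention without any hit-rate benefit.
rbd cache = false
```

The setting only takes effect for images opened after the change, so long-running clients (e.g. QEMU guests) need to be restarted or re-attached to pick it up.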

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html