Re: rbd performance drop a lot with objectmap


 



On Fri, Feb 10, 2017 at 9:50 AM, LIU, Fei <james.liu@xxxxxxxxxxxxxxx> wrote:
> With a single FIO job at queue depth 1 (with vs. without object map), IOPS
> drops 3x and latency increases 3x. With more jobs, the IOPS drop grows
> and latency climbs higher and higher.

Assuming a random write workload at QD=1, it is entirely expected that the
first write to each object incurs a performance penalty, since it requires
an additional round-trip operation to the backing OSDs. Because you only pay
this penalty on the very first write to an object, its cost is amortized
over all future writes. This is similar to cloned images, where the cost of
copying up the parent's backing object to the clone on the first write is
likewise amortized.
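To make the amortization argument concrete, here is a minimal back-of-the-envelope model (not librbd code; the latency numbers are hypothetical placeholders) showing how the one-time object-map update penalty fades as more writes land on the same object:

```python
# Illustrative model (not librbd code): estimate the amortized per-write
# latency when only the *first* write to an object pays the extra
# object-map round trip to the OSDs.
def amortized_latency_us(base_us, first_write_penalty_us, writes_per_object):
    """Average per-write latency once the one-time penalty is spread
    over all writes that land on the same object."""
    total = base_us * writes_per_object + first_write_penalty_us
    return total / writes_per_object

# Hypothetical numbers: 1 ms base write, 1 ms extra round trip.
print(amortized_latency_us(1000, 1000, 1))    # 2000.0 -> 2x on the first write
print(amortized_latency_us(1000, 1000, 100))  # 1010.0 -> ~1% overhead amortized
```

With a pure QD=1 random-write benchmark that touches each object only once, you never reach the amortized regime, which is consistent with the numbers reported above.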

> The objectmap_locker taken in the pre and post steps, and the object map
> update per IO, really hurt performance. Would a lockless queue or a new
> way of caching the object map help? Any thoughts?

By definition, at QD=1 there is zero contention on that lock. The lock is
only held for a minuscule amount of time and is dropped while the OSD
operation is in progress. Do you actually have any performance metrics to
back up this claim? Note that the post state is only hit when you issue a
remove / trim / discard operation.
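The pattern described above can be sketched as follows. This is an illustrative toy, not librbd's actual implementation: the lock guards only the in-memory state flip, and is released before the slow OSD round trip, so a single outstanding IO never contends on it:

```python
import threading

# Illustrative sketch (not librbd code) of why QD=1 sees no lock contention:
# the object-map lock protects only the in-memory state update, not the I/O.
object_map_lock = threading.Lock()
object_map = {}  # object index -> "exists" flag

def write_object(obj_index, do_osd_write):
    # Pre step: mark the object as existing while briefly holding the lock.
    with object_map_lock:
        first_write = obj_index not in object_map
        object_map[obj_index] = True
    # The lock is dropped here; the (comparatively slow) OSD operation
    # proceeds without it, so other in-flight IOs are never blocked on it.
    do_osd_write(obj_index)
    return first_write

# Only the first write to an object triggers the object-map update path.
print(write_object(0, lambda i: None))  # True  -> first write pays the update
print(write_object(0, lambda i: None))  # False -> later writes skip it
```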

-- 
Jason


