Re: RBD - Deletion / Discard - IO Impact

On Thu, 7 Jul 2016 12:53:33 +0100 Nick Fisk wrote:

> Hi All,
> 
>  
> 
> Does anybody else see a massive (i.e. 10x) performance impact when either
> deleting an RBD or running something like mkfs.xfs against an existing
> RBD, which would zero/discard all blocks?
> 
>  
> 
> In the case of deleting a 4TB RBD, I'm seeing latency in some cases rise
> up to 10s.
> 
>  
> 
> It looks like it is the XFS deletions on the OSDs which are potentially
> responsible for the massive drop in performance, as I see random OSDs in
> turn peak at 100% utilisation.
> 
>  
> 
> I'm not aware of any throttling that can be done to reduce this impact,
> but would be interested to hear from anyone else that may experience
> this.
> 
I haven't tested this since Firefly, but at the time I found RBD deletions,
discards and snapshots all to be very expensive operations.
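
One thing that might act as a client-side throttle (I haven't verified this
myself, so treat it as an assumption) is rbd_concurrent_management_ops, which
caps how many object deletions librbd issues in parallel during an "rbd rm".
Something like this on the client doing the removal:

  # ceph.conf on the client, untested sketch; the default is 10 if I
  # remember right, so a lower value should spread the deletes out
  [client]
  rbd concurrent management ops = 5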

See also:
http://ceph.com/planet/use-discard-with-krbd-client-since-kernel-3-18/
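
For anyone going down the discard route, the basic usage (just a sketch,
image and mount point names below are placeholders) looks like:

  # map the image via kRBD and mount with online discard enabled
  rbd map rbd/myimage
  mount -o discard /dev/rbd0 /mnt/myimage
  # or skip the mount option and trim during a quiet window instead
  fstrim /mnt/myimage

And if the mkfs.xfs-triggered discards are what hurts, mkfs.xfs has a -K
option to skip the discard pass entirely (check the man page on your
version before relying on it).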

I would think that the unified queue in Jewel would help with this.

But how much of this is also XFS amplification (and thus not helped by
proper queuing above it) I can't tell, as all my production OSDs are Ext4.
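
If someone on Jewel wants to experiment with the queuing side, the knobs I
would look at first are the op queue settings, roughly like this (purely a
sketch, I haven't run Jewel with these settings myself):

  # ceph.conf on the OSD nodes, assumptions on my part
  [osd]
  osd op queue = wpq
  osd op queue cut off = high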

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com