Re: RBD - Deletion / Discard - IO Impact


 



These are known problems.

Are you doing mkfs.xfs on an SSD? If so, please check the SSD's data sheet to see whether UNMAP is supported. To avoid issuing discards during mkfs, use mkfs.xfs -K
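As a rough sketch of the check above: on Linux you can read a device's discard limits from sysfs before deciding whether to pass -K. The device name (sdx) and the helper function are illustrative, not from this thread:

```shell
# Hypothetical helper: returns success if the block device advertises
# discard (TRIM/UNMAP) support, i.e. discard_max_bytes > 0 in sysfs.
# Second argument lets you point at an alternate sysfs root for testing.
supports_discard() {
    local dev="$1" sysroot="${2:-/sys/block}"
    local max_bytes
    max_bytes=$(cat "$sysroot/$dev/queue/discard_max_bytes" 2>/dev/null) || return 1
    [ "${max_bytes:-0}" -gt 0 ]
}

# Example usage (device path is a placeholder):
# if supports_discard sdx; then
#     mkfs.xfs /dev/sdx1       # device handles discard; let mkfs issue it
# else
#     mkfs.xfs -K /dev/sdx1    # -K skips the discard pass entirely
# fi
```

Note that -K only avoids the discard storm at mkfs time; it does not help with the deletion case discussed below.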

Regards,
Anand

On Thu, Jul 7, 2016 at 5:23 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:

Hi All,

 

Does anybody else see a massive (i.e. 10x) performance impact when either deleting an RBD or running something like mkfs.xfs against an existing RBD, which would zero/discard all blocks?

 

In the case of deleting a 4TB RBD, I’m seeing latency rise to 10s in some cases.

 

It looks like the XFS deletions on the OSDs are likely responsible for the massive drop in performance, as I see random OSDs in turn spike to 100% utilisation.

 

I’m not aware of any throttling that can be done to reduce this impact, but I would be interested to hear from anyone else who may have experienced this.

 

Nick



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
----------------------------------------------------------------------------
Never say never.
