> -----Original Message-----
> From: Anand Bhat [mailto:anand.bhat@xxxxxxxxx]
> Sent: 07 July 2016 13:46
> To: nick@xxxxxxxxxx
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: RBD - Deletion / Discard - IO Impact
>
> These are known problems.
>
> Are you doing mkfs.xfs on an SSD? If so, please check the SSD data sheet to see whether UNMAP is supported. To avoid unmap during mkfs, use
> mkfs.xfs -K

Thanks for your reply. The RBDs are on normal spinners (+ SSD journals).

> Regards,
> Anand
>
> On Thu, Jul 7, 2016 at 5:23 PM, Nick Fisk <mailto:nick@xxxxxxxxxx> wrote:
> Hi All,
>
> Does anybody else see a massive (i.e. 10x) performance impact when either deleting an RBD or running something like mkfs.xfs against
> an existing RBD, which would zero/discard all blocks?
>
> In the case of deleting a 4TB RBD, I'm seeing latency in some cases rise up to 10s.
>
> It looks like the XFS deletions on the OSDs are potentially responsible for the massive drop in performance, as I see random
> OSDs in turn peak to 100% utilisation.
>
> I'm not aware of any throttling that can be done to reduce this impact, but I would be interested to hear from anyone else that may
> experience this.
>
> Nick
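
For reference, a minimal sketch of the -K workaround Anand describes. The device path /dev/rbd0 is only illustrative and assumes the image is already mapped with the kernel RBD client:

    # Map the image (names are placeholders for whatever pool/image is in use)
    rbd map mypool/myimage

    # -K skips the discard pass mkfs.xfs would normally issue across the
    # whole device, so the OSDs are not hit with a storm of deletes/zeroes
    # at format time. Data blocks are simply left untrimmed.
    mkfs.xfs -K /dev/rbd0

This only avoids the discard at mkfs time; it does not change how object deletions behave when the image itself is removed.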