Re: Is TRIM/DISCARD going to be a performance problem?

On Mon, May 11, 2009 at 10:12:16AM +0200, Jens Axboe wrote:
> 
> I largely agree with this. I think that trims should be queued and
> postponed until the drive is largely idle. I don't want to put this IO
> tracking in the block layer though, it's going to slow down our iops
> rates for writes. Providing the functionality in the block layer does
> make sense though, since it sits between that and the fs anyway. So just
> not part of the generic IO path, but a set of helpers on the side.

Yes, I agree.  However, in that case, we need two things from the
block I/O path.  (A) The discard management layer needs a way of
knowing that the block device has become idle, and (B) ideally there
should be a more efficient method for sending trim requests down the
I/O submission path.  If we batch the discard requests, then when we
*do* send them, we may be sending several hundred at once, and it
would be useful if we could pass the I/O submission path a linked
list of regions, so the queue can be drained *once*, and then a whole
series of discards can be sent to the device in a single pass.

Does that make sense to you?

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html