On Sun, 2009-08-16 at 08:34 -0700, Arjan van de Ven wrote:
> On Sun, 16 Aug 2009 15:05:30 +0100
> Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> wrote:
>
> > On Sat, 15 Aug 2009 08:55:17 -0500
> > James Bottomley <James.Bottomley@xxxxxxx> wrote:
> >
> > > On Sat, 2009-08-15 at 09:22 -0400, Mark Lord wrote:
> > > > James Bottomley wrote:
> > > > >
> > > > > This means you have to drain the outstanding NCQ commands
> > > > > (stalling the device) before you can send a TRIM.  If we do
> > > > > this for every discard, the performance impact will be pretty
> > > > > devastating, hence the need to coalesce.  It's nothing really
> > > > > to do with device characteristics, it's an ATA protocol problem.
> > > > ..
> > > >
> > > > I don't think that's really much of an issue -- we already have
> > > > to do that for cache-flushes whenever barriers are enabled.  Yes
> > > > it costs, but not too much.
> > >
> > > That's not really what the enterprise is saying about flush
> > > barriers.  True, not all the performance problems are NCQ queue
> > > drain, but for a steady workload they are significant.
> >
> > Flush barriers are a nightmare for more than the enterprise.  Your
> > drive basically goes for a hike for a bit, which trashes
> > interactivity as well.  If the device can't do trim and the like
> > without a drain I don't see much point doing it at all, except maybe
> > to wait for idle devices and run a filesystem-managed background
> > 'strimmer' thread to just weed out now-idle blocks that have stayed
> > idle -- eg by adding an inode of all the deleted untrimmed blocks
> > and giving it an irregular empty?
>
> trim is mostly for ssd's though, and those tend to not have the "goes
> for a hike" behavior as much......

Well, yes and no ... a lot of SSDs don't actually implement NCQ, so the
impact to them will be less ... although I think enterprise class SSDs
do implement NCQ.

> I wonder if it's worse to batch stuff up, because then the trim itself
> gets bigger and might take longer.....
So this is where we're getting into the realms of speculation.  There
really are only a couple of people out there with trim-implementing
SSDs, so that's not really enough to make any judgement.

However, the enterprise has been doing UNMAP for a while, so we can
draw inferences from them, since the SSD FTL will operate similarly.
For them, UNMAP takes the same time regardless of the number of
extents.  The reason is that it's moving the blocks from the global
in-use list to the global free list.  Part of the problem is that this
involves locking and quiescing, so UNMAP ends up being quite expensive
to the array but constant in cost (hence they want as few unmaps
covering as many sectors as possible).

For SSDs, the FTL has to perform a separate operation: erase.  Now, one
could see a correct implementation simply moving the sectors from the
in-use list to the to-be-cleaned list and still doing the cleaning in
the background: that would be constant cost (but, again, likely
expensive).  Of course, if SSD vendors decided to erase on the spot
when seeing TRIM, this wouldn't be true ...

James

--
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
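[Editor's sketch] The "move blocks between lists, erase later" FTL
behaviour described above can be modelled in a few lines of C.  All of
the names here (ftl_trim, to_clean, etc.) are invented for illustration
and don't correspond to any vendor's FTL; the lookup is a linear list
walk for brevity, where a real FTL would consult its mapping table.
The point it demonstrates is that the TRIM path itself does only list
bookkeeping -- the expensive erase is deferred to the background.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of FTL bookkeeping: TRIM moves a block from the in-use
 * list to the to-be-cleaned list.  No erase happens here; a background
 * cleaner would consume the to_clean list later.  That deferral is why
 * the TRIM cost can stay roughly constant. */

struct ftl_block {
    unsigned lba;
    struct ftl_block *next;
};

struct ftl {
    struct ftl_block *in_use;
    struct ftl_block *to_clean;  /* queued for background erase */
};

/* Unlink the block for 'lba' from in_use and push it onto to_clean.
 * The splice itself is O(1); only the toy lookup is linear. */
static void ftl_trim(struct ftl *f, unsigned lba)
{
    struct ftl_block **p = &f->in_use;

    while (*p) {
        if ((*p)->lba == lba) {
            struct ftl_block *b = *p;
            *p = b->next;            /* unlink from in-use */
            b->next = f->to_clean;   /* push onto to-be-cleaned */
            f->to_clean = b;
            return;
        }
        p = &(*p)->next;
    }
}
```

A vendor that instead erased synchronously inside ftl_trim() would turn
this constant-cost operation into one proportional to the erase work --
exactly the "erase on the spot" case James notes would break the
constant-cost assumption.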