On Fri, Nov 19, 2010 at 08:53:25AM -0500, Mark Lord wrote:
> There is a very good reason why faster implementations may be *difficult*
> (if not impossible) in many cases: DETERMINISTIC trim.  This requires
> that the drive guarantee the block ranges will return a constant known
> value after TRIM.  Which means they MUST write to flash during the trim.
> And any WRITE to flash means a potential ERASE operation may be needed.

Deterministic TRIM is an option. It doesn't have to be implemented. And as
you even pointed out, there are ways of doing this intelligently. Whether
"intelligently" and "drive firmware authors" are two phrases that should
be used in the same sentence is a concern that I will grant, but that's
why mount -o discard is not the default.

> Non-deterministic TRIM should also try to ensure that the original data
> is no longer there (for security reasons), so it may have the same issues.

Says who? We've deleted files on hard drives for a long time without
scrubbing data blocks. Why should a non-deterministic TRIM be any
different? If the goal is a better-performing SSD, and not security, then
non-deterministic TRIM should definitely _not_ ensure that the original
data is no longer accessible.

					- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html