>>>>> "Mark" == Mark Lord <kernel@xxxxxxxxxxxx> writes: Mark> Surely if a userspace tool and shell-script can accomplish this, Mark> totally lacking real filesystem knowledge, then we should be able Mark> to approximate it in kernel space? It's the splitting and merging on stacked devices that's the hard part. Something wiper.sh does not have to deal with. And thanks to differences in the protocols the SCSI-ATA translation isn't a perfect fit. Every time TRIM comes up the discussion turns into how much we suck at it because we don't support coalescing of discontiguous ranges. However, we *do* support discarding contiguous ranges of up to about 2GB per command on ATA. It's not like we're issuing a TRIM command for every sector. For offline/weekly reclaim/FITRIM we have the full picture when the discard is issued. And thus we have the luxury of being able to send out relatively big contiguous discards unless the filesystem is insanely fragmented. For runtime discard usage we'll inevitably be issuing lots of itty-bitty 512 or 4KB single-command discards. That's going to suck for performance on your average ATA SSD. Doctor, it hurts when I do this... So assuming we walk the filesystem to reclaim space on ATA SSDs on a weekly basis (since that's the only sane approach): What is the performance impact of not coalescing discontiguous block ranges when cron scrubs your /home at 4am Sunday morning? That, to me, is the important question. That obviously depends on the SSD, filesystem, fragmentation and so on. Is the win really big enough to justify a bunch of highly intrusive changes to our I/O stack? Thanks to PCIe SSDs and other upcoming I/O technologies we're working hard to bring request latency down by simplifying things. Adding complexity seems like a bad idea at this time. And that was the rationale behind the consensus at the filesystem workshop. -- Martin K. Petersen Oracle Linux Engineering -- To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html