--- On Mon, 4/16/12, Sven Eschenberg <sven@xxxxxxxxxxxxxxxxxxxxx> wrote:

> An erase block is a contiguous block of sectors. As long as not all FS
> blocks covering this erase block have been trimmed, the advantage of
> trimming is annihilated (for obvious reasons). As an example, assume a
> 64k erase page size and 4k FS blocks. Now erase a big file (e.g. 1GB),
> while only trimming 1% of the covered space randomly. The probability
> that you would TRIM multiple sets of 16 contiguous FS blocks (if we
> assume proper alignment) when only trimming 1% of the 1GB file is next
> to zero. If the FS block size is smaller and the erase page size is
> bigger, it is even worse.

To me, because of the "virtual" LBA mapping handled internally by the SSD
controller, and provided that TRIM is allowed and fully used across the
entire device, the 1GB file is already erase-block fragmented. That is the
purpose of the garbage collector: aggregating valid pages so that the
discarded pages end up isolated inside future "erasable" blocks.

In my case, the probability may indeed be next to zero (I am not sure
about that) right after sending the TRIM commands for 1% of the big file.
But after a certain amount of time, the garbage collector will untangle
the mess and help the controller erase the trimmed blocks (that 1%,
aggregated with other erasable pages).

> As Arno already said, all you can do is weigh up leakage versus
> performance.

Weighing them up really is the most difficult part when you have no
tangible data: how much easier is it to break a TRIMmed and encrypted
device?
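
To put rough numbers on the quoted 1% argument, here is a minimal
back-of-the-envelope sketch in Python. It assumes the example figures from
the quoted mail (4k FS blocks, 64k erase blocks, a 1GB file, 1% trimmed)
and the simplification that each FS block is trimmed independently with
probability 0.01; it is not a model of any real controller.

#!/usr/bin/env python3
# Back-of-the-envelope check: if only 1% of the FS blocks of a 1 GiB file
# are trimmed at random, how many 64 KiB erase blocks end up fully trimmed?
# All sizes and ratios are the example values from the quoted mail.

FS_BLOCK = 4 * 1024          # 4 KiB filesystem block
ERASE_BLOCK = 64 * 1024      # 64 KiB erase block (16 FS blocks)
FILE_SIZE = 1024 ** 3        # 1 GiB file
TRIM_FRACTION = 0.01         # 1% of FS blocks trimmed at random

fs_blocks_per_erase_block = ERASE_BLOCK // FS_BLOCK   # 16
erase_blocks = FILE_SIZE // ERASE_BLOCK               # 16384

# An erase block only becomes trivially reclaimable if all 16 of its FS
# blocks are trimmed; with independent 1% trims that is 0.01 ** 16.
p_fully_trimmed = TRIM_FRACTION ** fs_blocks_per_erase_block
expected_fully_trimmed = erase_blocks * p_fully_trimmed

print(f"P(one erase block fully trimmed) = {p_fully_trimmed:.1e}")
print(f"Expected fully trimmed erase blocks = {expected_fully_trimmed:.1e}")
# -> about 1e-32 per erase block and ~1.6e-28 for the whole file,
#    which matches the "next to zero" claim in the quoted text.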
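
And a toy illustration of the garbage-collection point: trimmed pages that
are scattered across erase blocks can be "un-puzzled" by the controller,
which copies the remaining valid pages together and can then erase whole
blocks. This is purely illustrative, under the assumption that GC may
freely repack valid pages; real FTLs also deal with wear levelling,
over-provisioning and so on.

#!/usr/bin/env python3
import random

PAGES_PER_BLOCK = 16   # e.g. 64 KiB erase block / 4 KiB pages
NUM_BLOCKS = 8

random.seed(0)

# Each block is a list of pages; True = valid data, False = trimmed/stale.
# Start with roughly 25% of the pages trimmed, scattered at random.
blocks = [[random.random() > 0.25 for _ in range(PAGES_PER_BLOCK)]
          for _ in range(NUM_BLOCKS)]

def erasable(blks):
    """Count blocks with no valid pages, i.e. erasable as a whole unit."""
    return sum(1 for b in blks if not any(b))

print("before GC: erasable blocks =", erasable(blocks))   # typically 0

# Naive GC pass: gather every valid page and repack densely into as few
# blocks as possible; the remaining blocks hold no valid data at all.
valid_pages = [p for b in blocks for p in b if p]
repacked = []
for i in range(0, len(valid_pages), PAGES_PER_BLOCK):
    chunk = valid_pages[i:i + PAGES_PER_BLOCK]
    repacked.append(chunk + [False] * (PAGES_PER_BLOCK - len(chunk)))
repacked += [[False] * PAGES_PER_BLOCK
             for _ in range(NUM_BLOCKS - len(repacked))]

print("after GC:  erasable blocks =", erasable(repacked))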