On Tue, May 24, 2011 at 4:32 PM, OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx> wrote:
> Kyungmin Park <kmpark@xxxxxxxxxxxxx> writes:
>
>>>> It's handled in the trim implementation. It only trims the blocks FAT
>>>> is aware of, not the blocks FAT doesn't know about.
>>>> As FAT doesn't use blocks 0 and 1, it adjusts the start block in the
>>>> kernel:
>>>>
>>>> +       if (start < FAT_START_ENT)
>>>> +               start = FAT_START_ENT;
>>>>
>>>> and doesn't exceed the max cluster count:
>>>>
>>>> +       len = (len > sbi->max_cluster) ? sbi->max_cluster : len;
>>>>
>>>> +       for (count = start; count <= len; count++) {
>>>
>>> Yes. We _adjust_ from 0 to 2 here, so the end of the range also has to
>>> be _adjusted_.
>>>
>>> From another point of view, if userland specifies 0 - max-length
>>> (i.e. number of blocks), what happens? It would trim blocks 2 -
>>> (max-length - 2), right?
>>
>> No, the length is not changed, so max-length is used.
>
> No, no. Userland will know max-length from statvfs, right? So, let's
> assume it is 100 (->f_blocks) * 1024 (->f_bsize).
>
> Now userland knows the max length, 102400, ok? Let's start to trim.
>
> Assume userland wants to trim the whole thing, so it will specify
>
>         trim(0, 102400)
>
> What actually happens in the kernel?
>
> The current implementation doesn't map blocks. So, in the case of FAT,
> it adjusts from 0 to 2 * 1024.
>
> So it trims between 2048 and 102400. The problem is here: the FS layout
> is actually 2048 to (102400 + 2048). I.e. userland actually has to do
>
>         trim(2048, 102400 + 2048)

Umm, maybe the first implementation did it like this, but Lukas mentioned
it was wrong, so I modified it to follow the batched discard concept.
You want the loop to be like this (a fuller sketch follows at the end of
this mail):

        for (count = start; count <= (start + len); count++)

> to specify the whole range. How does userland know 2048?
>
> See what I'm saying?
>
> FAT has a linear block space, so the problem is small compared to
> filesystems that do mapping. But other FSes have a bigger problem.
>
> Thanks.
> --
> OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx>
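
To make the arithmetic concrete, here is a small userspace illustration of
the example above (plain C, nothing FAT-specific; the 2048 and 102400 are
the numbers from Hirofumi's mail, everything else is made up for the demo):

        #include <stdio.h>

        int main(void)
        {
                unsigned long long fs_start = 2048;   /* first data byte on disk */
                unsigned long long req_start = 0;     /* what userland passes */
                unsigned long long req_len = 102400;  /* f_blocks * f_bsize */

                /* Current behaviour: only the start is clamped upward;
                 * the end of the range is left where userland put it. */
                unsigned long long start = req_start < fs_start ? fs_start : req_start;
                unsigned long long end_wrong = req_start + req_len;  /* 102400 */
                unsigned long long end_right = start + req_len;      /* 104448 */

                printf("trimmed (wrong): %llu..%llu, misses the last %llu bytes\n",
                       start, end_wrong, end_right - end_wrong);
                printf("trimmed (right): %llu..%llu\n", start, end_right);
                return 0;
        }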
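
And, for completeness, a rough sketch of the whole adjustment as I
understand it. This is untested, the function name fat_trim_range is made
up for illustration, and the free-cluster walk is elided; it treats len as
a count of clusters, so the loop end is exclusive -- adjust if the FITRIM
semantics turn out to differ:

        /* Would live in fs/fat/fatent.c next to the code quoted above. */
        static int fat_trim_range(struct super_block *sb, u64 start, u64 len)
        {
                struct msdos_sb_info *sbi = MSDOS_SB(sb);
                u64 count;

                /* Clusters 0 and 1 don't exist; shift the start up... */
                if (start < FAT_START_ENT)
                        start = FAT_START_ENT;

                /* ...and clamp the end of the range, not the raw length,
                 * against max_cluster. */
                if (start >= sbi->max_cluster)
                        return 0;       /* nothing left to trim */
                if (start + len > sbi->max_cluster)
                        len = sbi->max_cluster - start;

                for (count = start; count < start + len; count++) {
                        /* Look up cluster 'count' in the FAT; if it is
                         * free, hand it to sb_issue_discard() (elided). */
                }
                return 0;
        }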