Re: [static superblock discussion] Does nilfs2 do any in-place writes?

Hi Clemens,

On Thu, 2014-01-30 at 14:09 +0100, Clemens Eisserer wrote:
> Hi Vyacheslav,
> 
> 
> > I suppose the current implementation is not bad, and it is possible
> > to achieve what you want simply by managing the superblock's update
> > timeout. Even now, the superblock is updated on mount/umount and at
> > a frequency defined by a timeout.
> 
> What would happen in the case of an unclean shutdown and a very large
> superblock update interval (several hours)?
> As far as I understand, this is where Andreas' patch would come into play?
> 

The result of an unclean shutdown combined with a large update timeout
would be a long mount. Such an issue was reported earlier, and it was
fixed. I don't think Andreas's patch can fundamentally resolve the
long-mount problem; at best, this approach can slightly reduce mount
time in such a situation.
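
For context, the timeout-driven policy amounts to something like the
standalone sketch below. This is only an illustration of the idea; the
real check lives in fs/nilfs2, and the names last_write and update_freq
here are mine, not the kernel's:

    #include <stdbool.h>
    #include <time.h>

    struct sb_state {
            time_t last_write;  /* time of the last superblock write */
            time_t update_freq; /* configured update interval (seconds) */
    };

    /* Decide whether the superblock should be rewritten now. */
    static bool sb_need_update(const struct sb_state *sb)
    {
            time_t now = time(NULL);

            /* Rewrite if the clock jumped backwards or if the update
             * interval has elapsed since the last write. */
            return now < sb->last_write ||
                   now > sb->last_write + sb->update_freq;
    }

    int main(void)
    {
            /* Last write two hours ago, one-hour interval: update due. */
            struct sb_state sb = { .last_write  = time(NULL) - 7200,
                                   .update_freq = 3600 };
            return sb_need_update(&sb) ? 0 : 1;
    }

So with a timeout of several hours, an unclean shutdown can leave the
on-disk superblock pointing far behind the latest valid log, and mount
has to scan forward from there.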

> > Ok. But I can't see anything bad about my approach, because the primary
> > reserved area will be 8MB. So, if the super root (and all other info) is
> > 4KB, for example, then we can do 2048 write operations without any erase
> > operations.
> 
> The problem with this approach is that there is a minimal write unit
> whose size depends on the FTL, as explained in the Linaro wiki:
> 
> > The smallest write unit is significantly larger than a page.
> > Reading or writing less than one of these units causes a full unit to be accessed.
> > Trying to do streaming write in smaller units causes the medium to do multiple
> > read-modify-write cycles on the same write unit, which in turn causes multiple
> > garbage collection cycles for writing a single allocation group from start to end.
> 
> So updating 4KB pages in a linear fashion would cause
> read-modify-write cycles on most devices, with blocks as large as the
> mapping unit (for SD cards this often means a full erase block of
> several MBs).
> The chapter "FAT optimization" lists several of those caveats; I found
> it a very interesting and worthwhile read.
> 

In that case, NILFS2 as a whole is in trouble, because partial segments
can have different sizes, and these sizes do not correlate with the
sizes of physical erase blocks or physical write units; the whole COW
approach would then be useless. Maybe some NAND chips have write units
larger than the page size, but in that case we are playing on the FTL's
side anyway. Otherwise, one would need to operate on raw NAND for the
best efficiency, and that is out of NILFS2's scope.
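
To put numbers on the 8MB example above (the 16KB FTL write unit below
is an assumption for illustration, not a measured figure):

    #include <stdio.h>

    int main(void)
    {
            long reserved   = 8L * 1024 * 1024; /* reserved area: 8MB     */
            long record     = 4L * 1024;        /* super root record: 4KB */
            long write_unit = 16L * 1024;       /* assumed FTL write unit */

            /* Naive count of log-style updates before the area must be
             * erased, ignoring the FTL. */
            printf("slots before erase: %ld\n", reserved / record);    /* 2048 */

            /* If the FTL write unit is larger than the record, each 4KB
             * update forces a read-modify-write of a full unit. */
            printf("RMW amplification:  %ldx\n", write_unit / record); /* 4x   */
            return 0;
    }

So the 2048-writes figure holds only if the FTL can commit 4KB units
natively; with a larger write unit, each update costs a full
read-modify-write cycle, which is Clemens' point.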

With the best regards,
Vyacheslav Dubeyko.

> 
> Regards, Clemens

