Re: cleaner optimization and online defragmentation: status update


 



Hi,

> First of all, I think now, and will continue to think, that defragmentation
> should be a part of GC activity. Usually, from my point of view, users choose
> NILFS2 because they use flash storage (SSDs and so on). So, GC is a consequence
> of the log-structured nature of NILFS2. But we need to think about flash aging
> anyway, because the activity of GC and other auxiliary subsystems should take
> NAND flash wear-leveling into account. If the activity of auxiliary subsystems
> is significant, the NAND flash will fail early without any clear reason from
> the user's viewpoint.

I have no idea about the actual implementation or the code, so my comment
just represents the POV of a filesystem user.
I've tried NILFS2 and Btrfs (both copy-on-write) on traditional mechanical
hard disks to get cheap, efficient snapshotting.
However, due to CoW, files with high random-write activity (like
Firefox's internal SQLite database) fragmented so heavily that those
filesystems were practically unusable on HDDs.

Btrfs actually offers both manual defragmentation and an autodefrag
mount option - which is useful even for SSDs when the average
continuous segment size becomes as low as 4 KB.
While autodefrag would be great for nilfs2, a manual tool at least
wouldn't hurt ;)
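For reference, the Btrfs knobs mentioned above look roughly like this on the
command line (the device name and file paths are hypothetical examples, and
filefrag comes from e2fsprogs, not Btrfs itself):

```shell
# Inspect fragmentation first: filefrag reports a file's extent count
# via the FIEMAP ioctl, so it also works on non-ext filesystems.
filefrag -v ~/.mozilla/firefox/profile/places.sqlite

# Manual, one-shot defragmentation of a directory tree on Btrfs.
btrfs filesystem defragment -r -v /home

# Automatic background defragmentation, enabled at mount time.
mount -o autodefrag /dev/sdb1 /mnt/data
```

Note that on Btrfs, defragmenting can break the sharing of extents with
snapshots, so defragmented data may take up extra space afterwards.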

Regards
--
To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



