On Thu, Jun 08, 2017 at 03:20:57AM +0100, Al Viro wrote:
> On Wed, Jun 07, 2017 at 05:35:31PM -0700, Richard Narron wrote:
> 
> > I am willing to test. I just turned on UFS_FS_WRITE for the very first time
> > running 4.12-rc4 and was able to copy a file of more than 2GB from one r/o
> > FreeBSD subpartition to another r/w FreeBSD subpartition.
> > 
> > So it is already looking pretty good.
> 
> The nasty cases are around short files, especially short files with holes.
> Linear writes as done by cp(1) will do nothing worse than bogus i_blocks
> (and possibly mangled counters in cylinder groups).  Random write access
> to short files, OTOH, steps into a lot more codepaths...
> 
> As for ->i_blocks, it triggers this:
> 
> root@kvm1:/mnt# df .; mkdir a; rmdir a; df .
> Filesystem     1K-blocks  Used  Available  Use%  Mounted on
> /dev/loop0        507420  4504     462340    1%  /mnt
> Filesystem     1K-blocks  Used  Available  Use%  Mounted on
> /dev/loop0        507420  4536     462308    1%  /mnt
> 
> Note the 32Kb (== one block on that ufs2) leaked here.
> Every iteration will leak another one.  Similar for long
> symlinks...

Spot the bogosity:

static inline int _ubh_isblockset_(struct ufs_sb_private_info * uspi,
	struct ufs_buffer_head * ubh, unsigned begin, unsigned block)
{
	switch (uspi->s_fpb) {
	case 8:
		return (*ubh_get_addr (ubh, begin + block) == 0xff);
	case 4:
		return (*ubh_get_addr (ubh, begin + (block >> 1)) == (0x0f << ((block & 0x01) << 2)));
	case 2:
		return (*ubh_get_addr (ubh, begin + (block >> 2)) == (0x03 << ((block & 0x03) << 1)));
	case 1:
		return (*ubh_get_addr (ubh, begin + (block >> 3)) == (0x01 << (block & 0x07)));
	}
	return 0;
}

with

static inline void _ubh_setblock_(struct ufs_sb_private_info * uspi,
	struct ufs_buffer_head * ubh, unsigned begin, unsigned block)
{
	switch (uspi->s_fpb) {
	case 8:
		*ubh_get_addr(ubh, begin + block) = 0xff;
		return;
	case 4:
		*ubh_get_addr(ubh, begin + (block >> 1)) |= (0x0f << ((block & 0x01) << 2));
		return;
	case 2:
		*ubh_get_addr(ubh, begin + (block >> 2)) |= (0x03 << ((block & 0x03) << 1));
		return;
	case 1:
		*ubh_get_addr(ubh, begin + (block >> 3)) |= (0x01 << ((block & 0x07)));
		return;
	}
}

The only saving grace is that UFS defaults to 8 fragments per block...
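
For the record, the bogosity is in the == comparisons: with s_fpb of 4, 2 or 1
several blocks share a single bitmap byte, and since _ubh_setblock_() only ORs
its mask in, _ubh_isblockset_() will report a set block as clear whenever a
neighbouring block in the same byte is also set.  Only the 8-fragments-per-block
case, where a block owns the whole byte, gets away with comparing against 0xff.
A minimal sketch of a masked test along those lines (illustrative only, not
necessarily the exact patch that went into the tree):

static inline int _ubh_isblockset_(struct ufs_sb_private_info * uspi,
	struct ufs_buffer_head * ubh, unsigned begin, unsigned block)
{
	u8 mask;

	switch (uspi->s_fpb) {
	case 8:
		/* one whole byte per block: equality against 0xff is fine */
		return (*ubh_get_addr (ubh, begin + block) == 0xff);
	case 4:
		/* two blocks per byte: test only this block's nibble */
		mask = 0x0f << ((block & 0x01) << 2);
		return (*ubh_get_addr (ubh, begin + (block >> 1)) & mask) == mask;
	case 2:
		/* four blocks per byte: test only this block's bit pair */
		mask = 0x03 << ((block & 0x03) << 1);
		return (*ubh_get_addr (ubh, begin + (block >> 2)) & mask) == mask;
	case 1:
		/* eight blocks per byte: test only this block's bit */
		mask = 0x01 << (block & 0x07);
		return (*ubh_get_addr (ubh, begin + (block >> 3)) & mask) == mask;
	}
	return 0;
}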