Re: [PATCH] hfsplus: fix "unused node is not erased" error

On Wed, 2014-05-21 at 20:15 +0200, Sergei Antonov wrote:
> On 21 May 2014 18:40, Vyacheslav Dubeyko <slava@xxxxxxxxxxx> wrote:
> > On Tue, 2014-05-20 at 19:44 +0200, Sergei Antonov wrote:
> >
> > [snip]
> >>
> >> -int hfsplus_file_extend(struct inode *inode)
> >> +int hfsplus_file_extend(struct inode *inode, bool zeroout)
> >>  {
> >>       struct super_block *sb = inode->i_sb;
> >>       struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> >> @@ -463,6 +463,12 @@ int hfsplus_file_extend(struct inode *inode)
> >>               }
> >>       }
> >>
> >> +     if (zeroout) {
> >> +             res = sb_issue_zeroout(sb, start, len, GFP_NOFS);
> >
> > As I can see, sb_issue_zeroout() initiates a write request. But
> > previously hfsplus_file_extend() operated on the page cache only while
> > extending the file. From one point of view, the extend operation can
> > fail but, anyway, we will zero out the blocks by means of writing.
> 
> Which is not bad. Those blocks are free space.
> 

For me personally, the proper place for the sb_issue_zeroout() call would
be the hfs_bmap_alloc() method
(http://lxr.free-electrons.com/source/fs/hfsplus/btree.c#L364):


        while (!tree->free_nodes) {
                struct inode *inode = tree->inode;
                struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
                u32 count;
                int res;

                res = hfsplus_file_extend(inode);
                if (res)
                        return ERR_PTR(res);

                /* an sb_issue_zeroout() call could be added here */

                hip->phys_size = inode->i_size =
                        (loff_t)hip->alloc_blocks <<
                                HFSPLUS_SB(tree->sb)->alloc_blksz_shift;
                hip->fs_blocks =
                        hip->alloc_blocks << HFSPLUS_SB(tree->sb)->fs_shift;
                inode_set_bytes(inode, inode->i_size);
                count = inode->i_size >> tree->node_size_shift;
                tree->free_nodes = count - tree->node_count;
                tree->node_count = count;
        }

First of all, here we know that the attempt to extend the file was
successful. And, secondly, the hfs_bmap_alloc() method is dedicated to the
b-tree case only. I think that modifying hfsplus_file_extend() is not a
very good idea. The hfs_bmap_alloc() method is a cleaner solution, from my
viewpoint.
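
To illustrate, here is a rough sketch (not a tested patch) of what I mean.
It assumes hfsplus_file_extend() is taught to report the newly allocated
range through hypothetical out-parameters (start block and length, in the
units sb_issue_zeroout() expects), since hfs_bmap_alloc() cannot easily
recover that range from the extent records by itself:

        while (!tree->free_nodes) {
                struct inode *inode = tree->inode;
                struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
                sector_t start; /* hypothetical: first block of new extent */
                sector_t len;   /* hypothetical: its length in blocks */
                u32 count;
                int res;

                /* hypothetical signature reporting the allocated range */
                res = hfsplus_file_extend(inode, &start, &len);
                if (res)
                        return ERR_PTR(res);

                /* the extend succeeded, so the zeroing cannot be wasted */
                res = sb_issue_zeroout(tree->sb, start, len, GFP_NOFS);
                if (res)
                        return ERR_PTR(res);

                /* ... the rest of the loop stays as quoted above ... */
        }

This way the zeroing policy stays in the b-tree code, and ordinary file
extension does not issue any extra writes.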

> > From another point of view, the prepared pages are returned as tree
> > nodes to be filled with data and, finally, written to the volume as a
> > result of node creation.
> 
> A result of node creation is only 1 node, but the catalog file is expanded
> in clumps. Normally a clump is at least several megabytes. So the task
> is to zero these megabytes on disk before (or immediately after) the
> new extent is added to the catalog.
> 
> > So, I think that it makes sense to zero out the prepared pages
> > themselves rather than to initiate a write request via sb_issue_zeroout().
> 
> You mean mapping the pages, doing memset(,0,) on them and flushing them?
> Slower, memory-consuming, complicated.
> 

I was worried here about consistency between the on-disk block state and
the memory page state during a new node allocation. But as I can see,
__hfs_bnode_create() zeroes out the memory pages during node creation
(http://lxr.free-electrons.com/source/fs/hfsplus/bnode.c#L421). So all
should be OK.
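
For reference, the zeroing there is roughly the usual kmap()/memset()
pattern (paraphrased from fs/hfsplus/bnode.c, not the exact code):

        /* paraphrased: the first page of a freshly created node is
         * zeroed in the page cache and marked dirty, so the in-memory
         * state matches the zeroed-on-disk state */
        memset(kmap(*pagep) + node->page_offset, 0,
               min_t(int, PAGE_CACHE_SIZE, tree->node_size));
        set_page_dirty(*pagep);
        kunmap(*pagep);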

Thanks,
Vyacheslav Dubeyko.





