Re: [PATCH 1/6] hfsplus: prevent btree data loss on root split

On Fri, Oct 26, 2018 at 09:58:49AM -0700, Viacheslav Dubeyko wrote:
> On Thu, 2018-10-25 at 16:42 -0300, Ernesto A. Fernández wrote:
> > On Thu, Oct 25, 2018 at 09:51:57AM -0700, Viacheslav Dubeyko wrote:
> > > 
> > > On Tue, 2018-10-23 at 22:32 -0300, Ernesto A. Fernández wrote:
> > > > 
> > > > On Tue, Oct 23, 2018 at 06:23:53PM -0700, Viacheslav Dubeyko
> > > > wrote:
> > > > > 
> > > > > 
> > > > > On Fri, 2018-08-31 at 00:58 -0300, Ernesto A. Fernández wrote:
> > > > > > 
> > > > > > 
> > > > > > Creating, renaming or deleting a file may cause catalog
> > > > > > corruption and data loss.  This bug is randomly triggered by
> > > > > > xfstests generic/027, but here is a faster reproducer:
> > > > > > 
> > > > > >   truncate -s 50M fs.iso
> > > > > >   mkfs.hfsplus fs.iso
> > > > > >   mount fs.iso /mnt
> > > > > >   i=100
> > > > > >   while [ $i -le 150 ]; do
> > > > > >     touch /mnt/$i &>/dev/null
> > > > > >     ((++i))
> > > > > >   done
> > > > > >   i=100
> > > > > >   while [ $i -le 150 ]; do
> > > > > >     mv /mnt/$i /mnt/$(perl -e "print $i x82") &>/dev/null
> > > > > >     ((++i))
> > > > > >   done
> > > > > >   umount /mnt
> > > > > >   fsck.hfsplus -n fs.iso
> > > > > > 
> > > > > > The bug is triggered whenever hfs_brec_update_parent() needs to
> > > > > > split the root node.  The height of the btree is not increased,
> > > > > > which leaves the new node orphaned and its records lost.
> > > > > > 
> > > > > > Signed-off-by: Ernesto A. Fernández <ernesto.mnd.fernandez@gmail.com>
> > > > > > ---
> > > > > >  fs/hfsplus/brec.c | 4 ++++
> > > > > >  1 file changed, 4 insertions(+)
> > > > > > 
> > > > > > diff --git a/fs/hfsplus/brec.c b/fs/hfsplus/brec.c
> > > > > > index ed8eacb34452..aa17a392b414 100644
> > > > > > --- a/fs/hfsplus/brec.c
> > > > > > +++ b/fs/hfsplus/brec.c
> > > > > > @@ -429,6 +429,10 @@ static int hfs_brec_update_parent(struct hfs_find_data *fd)
> > > > > >  	if (new_node) {
> > > > > >  		__be32 cnid;
> > > > > >  
> > > > > > +		if (!new_node->parent) {
> > > > > > +			hfs_btree_inc_height(tree);
> > > > > > +			new_node->parent = tree->root;
> > > > > I worry about the case when we are adding the node on an
> > > > > intermediate (not root) level. As far as I can see, we will be in
> > > > > trouble here because I don't see any processing of the two
> > > > > possible cases: (1) root node; (2) node of an intermediate level.
> > > > > Do I miss something here?
> > > > If 'new_node' had been the result of splitting a node other than
> > > > root, then it would have a parent.
> > > > 
> > > Could you please share the callstack and/or a more detailed
> > > explanation that would show the correctness of the fix? Currently,
> > > there are not enough details in the comment to easily understand
> > > the issue's environment and the correctness of the fix.
> > This patch is two months old now.  Why did you wait until the merge
> > window to give me your traditional "the patch is confusing" reply?
> > 
> > > 
> > > I simply mean here that both the implementation and the concept of
> > > the HFS+ b-trees are non-trivial. But everybody should be able to
> > > easily understand the patch.
> > Those two sentences are not compatible.  Reviewers need to have some
> > understanding of the code they review.  No royal road...
> > 
> 
> If you've found an issue or a bug then you must have seen some error
> message and/or callstack. It's not a big deal to share it. Moreover,
> it's important to include such an error message, callstack and the
> issue's details in the comment section of the patch. Let's imagine
> that somebody runs the same test-case and finds another issue. How
> will it then be possible to distinguish your issue from the other one
> for the same test-case? Does your fix resolve the other issue or not?
> It will be impossible to tell without an error message or something
> similar. I didn't ask you to share some "secret" knowledge. But
> sharing more details about the issue is really important. Could you
> please share more details (error message, callstack and so on) about
> the found issue? Even if we are talking about silent corruption, the
> fsck tool complains about something. And, finally, you've found some
> path in the code (callstack) that is the reason for such corruption.
> Again, it's not a big deal to share these details.

I didn't remember the fsck error, so I rebuilt without the patch and ran
the reproducer again.  No idea why you couldn't do that yourself.

The error is "Invalid sibling link (4, 3)".  Have fun with that...
 
> Thanks,
> Vyacheslav Dubeyko. 
> 


