Re: [PATCH] hfsplus: fix segfault when deleting all attrs of a file

I really appreciate your feedback, but I would rather you spent more
time trying to understand my work instead of redoing it from scratch.
You are making many of the same mistakes I made when I was first tracking
down this bug:

On Tue, Oct 10, 2017 at 09:39:08PM +0000, Slava Dubeyko wrote:
> hfs_brec_find() found the pointer (#1) in index node #3 by means of the hfs_find_1st_rec_by_cnid strategy.
> Then hfs_brec_find() found the single record in leaf node #1 using the same strategy.
> The most interesting part starts in hfs_brec_remove():
> 
>     if (!--node->num_recs) {
> 	hfs_bnode_unlink(node);
> 	if (!node->parent)
> 	    return 0;
> 	parent = hfs_bnode_find(tree, node->parent);
> 	if (IS_ERR(parent))
> 	    return PTR_ERR(parent);
> 	hfs_bnode_put(node);
> 	node = fd->bnode = parent;
> 
> 	__hfs_brec_find(node, fd, hfs_find_rec_by_key);
> 	goto again;
>     }
> 
> hfs_brec_remove() is trying to delete the single and last record from leaf node #1.
> So node->num_recs will be equal to zero after the decrement operation, and
> the condition directs the code into this execution path. First of all, the logic
> unlinks leaf node #1 from leaf node #2. Then the logic gets the parent node and puts
> leaf node #1. And, finally, we try to find a record in the parent, index
> node #3, on the basis of the hfs_find_rec_by_key strategy. We cannot find anything,
> because fd->search_key is not prepared for such a search. It means that fd->record
> is equal to -1. The real issue takes place here:
> 
> hfs_bnode_write_u16(node, rec_off + 2, data_off - size);
> 
> because data_off == 14 and size == 30. We have an overflow here, and it is the
> reason for the crash.

No. The reason for the crash is that rec_off + 2 is an offset outside
the limits of the bnode, and we are writing to it. That's how you get a
segfault.

data_off has become 14 because that is the value that is always saved in
the last two bytes of the bnode. It is the position of the first record,
the weird constant you didn't like in my patch. Now of course, this is
wrong; the last two bytes of the bnode should not have been read at all
here:

  data_off = hfs_bnode_read_u16(node, rec_off);

because rec_off should have been two bytes to the left. But that's a
separate consequence of the fact that fd->record should never have been
-1 in the first place.

The fact is, size will always be smaller than data_off if the code has no
bugs. In fact, when we are deleting the first record of the bnode,
data_off - size will be exactly 14 in the first iteration, which is what
we want stored in the last two bytes.

> But data_off is less than size because, given the value of fd->record, we
> selected an improper position in the offsets table for the value being changed.
> 
> What should be done for the issue fix?
> (1) We need to add the check condition here:
> 
> http://elixir.free-electrons.com/linux/latest/source/fs/hfsplus/brec.c#L217
> 
>     do {
> 	data_off = hfs_bnode_read_u16(node, rec_off);
> /*
>  * TODO: check condition here
>  *       if (data_off < size)

Like I said above, this will never happen if the code is correct, so
the logical response here would be BUG(). Think about it, why would the
offset of a record that comes after the one we are deleting be smaller
than the size of the one we are deleting?

I really don't see any point in having this check at all, though.

>  *           return error here!!!!!
>  */
> 	hfs_bnode_write_u16(node, rec_off + 2, data_off - size);
> 	rec_off -= 2;
>     } while (rec_off >= end_off);
> 
> (2) The logic of hfs_brec_remove() should be reworked here:
> 
> http://elixir.free-electrons.com/linux/latest/source/fs/hfsplus/brec.c#L197
> 
> /*
>  * TODO: rework the logic here
> 
>     if (node->num_recs <= 0)
> 	BUG(); /* it could be error message here */
>     else if (node->num_recs == 1)
> 	__hfs_brec_find(node, fd, hfs_find_1st_rec_by_cnid); /* maybe check the error here */

Why are you searching the bnode if you only have one record left? That's
the one you want, and fd->key is already set to it. Also no, there is no
need to check the error; we shouldn't be here at all if there was no
record with that cnid.

> 	memcpy(fd->search_key, fd->key, sizeof(hfsplus_btree_key));

This line is the actual fix (close enough to the one in my patch), but
it's in the wrong place. If you read my last mail (which I have the
feeling you skipped) you'll know that this bug also touches the call
to hfs_brec_update_parent() in line 229. So with your solution there
will be no segfault, but the parent records won't be updated at all
unless they need to be removed.

Also, why are you copying the whole of the hfsplus_btree_key structure,
which is like 300 bytes long, when the valid data is usually only in the
first 30 bytes or so? I don't think it will make a difference but it seems
wasteful, particularly compared to my patch which made use of the
existing infrastructure to handle variable length keys.

> 	--node->num_recs;
>  */
> 
> 	hfs_bnode_unlink(node);
> 	if (!node->parent)
> 	    return 0;
> 	parent = hfs_bnode_find(tree, node->parent);
> 	if (IS_ERR(parent))
> 	    return PTR_ERR(parent);
> 	hfs_bnode_put(node);
> 	node = fd->bnode = parent;
> 
> 	__hfs_brec_find(node, fd, hfs_find_rec_by_key);
> 	goto again;
>     }
> 
> So, we need to extract the key of the last record from the leaf node by using
> the hfs_find_1st_rec_by_cnid strategy,

Not really. The key of the parent record is also the key of the first
record of the child, so you can just copy that, like my patch does.

> copy fd->key into fd->search_key,
> and then go into the logic of searching for the record in the index node on the
> basis of the hfs_find_rec_by_key strategy, and delete the record from the index node.
> 
> What do you think? Could you rework the patch?

Your solution is oddly convoluted, and is equal to my one-line fix in
the one thing it does that matters. Yet it still fails to solve half
the issue. So I have to say no here.

You can't expect to make a correct patch by just going through callstacks
and debug info; that's just dealing with the symptoms. To fix this for
real you have to actually figure out what the code is doing.

> 
> Thanks,
> Vyacheslav Dubeyko.
> 

Now that you've done your own research: could you please go back to my
patch and point out, with your improved understanding of the issue, what
it is exactly that you think is wrong with it?

This module does not seem to be getting a lot of attention lately, so I'm
very afraid of introducing some nasty bug that will remain in the code
for years. I will very much appreciate a careful review.

Ernest


