Re: [PATCH 1/2] hfsplus: update timestamps on truncate()

On Mon, 2018-10-15 at 22:24 +0100, Al Viro wrote:
> On Mon, Oct 15, 2018 at 02:02:01PM -0700, Viacheslav Dubeyko wrote:
> > > 1) unlink() finds the victim and locks it
> > > 
> > > 2) in hfsplus_file_release():
> > >         if (atomic_dec_and_test(&HFSPLUS_I(inode)->opencnt)) {
> > > got to 0
> > >                 inode_lock(inode);
> > > block waiting for unlink
> > > 
> > > 3) open() finds the sucker in dcache and hits hfsplus_file_open(), where
> > > we do
> > >         atomic_inc(&HFSPLUS_I(inode)->opencnt);
> > > and now opencnt is 1.
> > > 
> > > 4) on the unlink side:
> > >         if (inode->i_ino == cnid &&
> > >             atomic_read(&HFSPLUS_I(inode)->opencnt)) {
> > >                 str.name = name;
> > >                 str.len = sprintf(name, "temp%lu", inode->i_ino);
> > >                 res = hfsplus_rename_cat(inode->i_ino,
> > >                                          dir, &dentry->d_name,
> > >                                          sbi->hidden_dir, &str);
> > >                 if (!res) {
> > >                         inode->i_flags |= S_DEAD;
> > >                         drop_nlink(inode);
> > >                 }
> > >                 goto out;
> > >         }
> > > nlink is zero now, the sucker got renamed and marked S_DEAD
> > > 
> > > 5) ->release() finally got through inode_lock() and
> > >                 hfsplus_file_truncate(inode);
> > >                 if (inode->i_flags & S_DEAD) {
> > >                         hfsplus_delete_cat(inode->i_ino,
> > >                                            HFSPLUS_SB(sb)->hidden_dir, NULL);
> > >                         hfsplus_delete_inode(inode);
> > >                 }
> > >                 inode_unlock(inode);
> > > ... and now we have killed everything we used to have associated with that
> > > inode on disk.  While it's still open.  What's to stop CNID to be reused,
> > > etc. and what's to preserve sanity in that situation?
> > >

hfsplus_link() always increments the next CNID [1] and never tries to
reuse one. The HFS+ specification [2], on the other hand, explicitly
allows CNIDs to wrap around and be reused: "HFS Plus volumes now allow
CNID values to wrap around and be reused. The kHFSCatalogNodeIDsReusedBit
in the attributes field of the volume header is set to indicate when CNID
values have wrapped around and been reused. When
kHFSCatalogNodeIDsReusedBit is set, the nextCatalogID field is no longer
required to be greater than any existing CNID. When
kHFSCatalogNodeIDsReusedBit is set, nextCatalogID may still be used as a
hint for the CNID to assign to newly created files or directories, but
the implementation must verify that CNID is not currently in use (and
pick another value if it is in use). When CNID number nextCatalogID is
already in use, an implementation could just increment nextCatalogID
until it finds a CNID that is not in use. If nextCatalogID overflows to
zero, kHFSCatalogNodeIDsReusedBit must be set and nextCatalogID set to
kHFSFirstUserCatalogNodeID (to avoid using any reserved CNID values)".
However, the HFS+ driver in the Linux kernel simply assumes that the u32
CNID space is so large that it will never be exhausted.
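
Just to illustrate the gap, the allocation in the driver today is a bare
"cnid = sbi->next_cnid++;" [1]. A wrap-aware allocator along the lines of
the spec would have to look roughly like the sketch below. This is a
minimal sketch only, not a patch: hfsplus_alloc_cnid(),
hfsplus_set_cnid_reused_flag(), hfsplus_cnid_in_use() and
HFSPLUS_FIRST_USER_CNID are names I made up here, and probing the catalog
tree on every allocation is exactly the cost the current code avoids.

/*
 * Sketch only: allocate a CNID the way TN1150 describes, wrapping past
 * the reserved range and skipping values still present in the catalog.
 */
#define HFSPLUS_FIRST_USER_CNID	16	/* kHFSFirstUserCatalogNodeID */

static u32 hfsplus_alloc_cnid(struct hfsplus_sb_info *sbi)
{
        u32 cnid;

        do {
                cnid = sbi->next_cnid++;
                if (sbi->next_cnid == 0) {
                        /* wrapped: record kHFSCatalogNodeIDsReusedBit in
                         * the volume header, restart above reserved IDs */
                        hfsplus_set_cnid_reused_flag(sbi);
                        sbi->next_cnid = HFSPLUS_FIRST_USER_CNID;
                }
        } while (cnid < HFSPLUS_FIRST_USER_CNID ||
                 hfsplus_cnid_in_use(sbi, cnid));

        return cnid;
}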

Let me try to show the whole picture:

1) unlink() finds the victim and locks it.

2) in hfsplus_file_release():
         if (atomic_dec_and_test(&HFSPLUS_I(inode)->opencnt)) {
 got to 0
                 inode_lock(inode);
 block waiting for unlink

3) open() finds the sucker in dcache and hits hfsplus_file_open(), where
 we do
         atomic_inc(&HFSPLUS_I(inode)->opencnt);
 and now opencnt is 1.

4) on the unlink side:
if opencnt > 0, we rename the file to temp<inode->i_ino>, move it into
hidden_dir, mark it S_DEAD and bail out. This means that if the file is
currently open, we cannot simply delete it. The file is converted into a
temporary state, and the responsibility for deleting it is delegated to
the last opener's hfsplus_file_release(). Note that the open side itself
takes no inode lock, only the atomic increment (a simplified sketch
follows the proposed code below).

5) ->release() finally got through inode_lock().
It looks like hfsplus_file_release() has to handle opencnt more
carefully: the decrement should happen only after release() has been
serialized against unlink() by inode_lock(), so that a reference taken
by a racing open() is seen before we decide to truncate and delete
anything. I suppose it should look something like this:

    inode_lock(inode);
    mutex_lock(&HFSPLUS_SB(sb)->vh_mutex);

    /* only now, serialized against unlink(), drop our reference */
    if (atomic_dec_and_test(&HFSPLUS_I(inode)->opencnt)) {
        hfsplus_file_truncate(inode);
        if (inode->i_flags & S_DEAD) {
            hfsplus_delete_cat(inode->i_ino,
                               HFSPLUS_SB(sb)->hidden_dir, NULL);
            hfsplus_delete_inode(inode);
        }
    }

    mutex_unlock(&HFSPLUS_SB(sb)->vh_mutex);
    inode_unlock(inode);
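
For reference, the open side that takes the reference in step 3 is just
an atomic increment, with no inode lock involved. Roughly (a simplified
sketch of hfsplus_file_open(); resource-fork handling and error checks
are omitted here):

    /*
     * Simplified: nothing on the open side is serialized against
     * unlink(), only the counter itself is atomic.
     */
    static int hfsplus_file_open(struct inode *inode, struct file *file)
    {
        atomic_inc(&HFSPLUS_I(inode)->opencnt);
        return 0;
    }

So the only point where a racing opener can be noticed is the decrement
itself, which is why it has to happen after inode_lock() and vh_mutex
have been taken in the release path above.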

What do you think? Do I miss something?

Thanks,
Vyacheslav Dubeyko.

[1] https://elixir.bootlin.com/linux/latest/source/fs/hfsplus/dir.c#L342
[2] https://developer.apple.com/library/archive/technotes/tn/tn1150.html




